5.12 Trees and forests: Random forests in action
-------------------------------------------------
### 5.12.1 Decision trees
Decision trees are a popular method for various machine learning tasks, mostly because they are highly interpretable. A decision tree is a series of filters on the predictor variables that ends in a class prediction. Each filter is a binary yes/no question, which creates bifurcations in the series of filters, leading to a treelike structure. The filters depend on the type of predictor variables. If the variables are categorical, such as gender, the filters could be questions like “is gender female?”. If the variables are continuous, such as gene expression, the filter could be “is PIGX expression larger than 210?”. Every point where we split samples based on such a question is called a “decision node”. The tree-fitting algorithm chooses the best variable at each decision node depending on how well it splits the samples into classes. Decision trees handle both categorical and numeric predictor variables, they are easy to interpret, and they can deal with missing values. Despite these advantages, decision trees tend to overfit if they are grown very deep and can learn irregular patterns.
There are many variants of tree-based machine learning algorithms. However, most algorithms construct decision nodes in a top-down manner. They select the best variables to use in decision nodes based on how homogeneous the sample sets are after the split. One measure of homogeneity is “Gini impurity”. This measure is calculated for each subset after the split and later summed up as a weighted average. For a decision node that splits the data perfectly in a two-class problem, the Gini impurity will be \(0\), and for a node that splits the data into a subset that has 50% class A and 50% class B the impurity will be \(0.5\). Formally, the Gini impurity, \(I_G(p)\), of a set of samples with known class labels for \(K\) classes is the following, where \(p_i\) is the probability of observing class \(i\) in the subset:
\[
I_G(p) = \sum_{i=1}^{K} p_i(1-p_i) = \sum_{i=1}^{K} p_i - \sum_{i=1}^{K} p_i^2 = 1 - \sum_{i=1}^{K} p_i^2
\]
For example, if a subset of the data after a split has 75% class A and 25% class B, the impurity would be \(1-(0.75^2+0.25^2)=0.375\). If the other subset had 5% class A and 95% class B, its impurity would be \(1-(0.95^2+0.05^2)=0.095\). If the subset sizes after the split were equal, the total weighted impurity would be \(0.5*0.375+0.5*0.095=0.235\). These calculations are done for each potential variable and split, and every node is constructed based on the Gini impurity decrease. If the variable is continuous, the cutoff value is decided based on the best impurity. For example, gene expression values will have splits such as “PIGX expression < 2.1”. Here \(2.1\) is the cutoff value that produces the best impurity. There are other homogeneity measures; however, Gini impurity is the one used for random forests, which we will introduce next.
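To make this concrete, the arithmetic above is easy to reproduce in R; the small helper function below is just an illustration, not part of any package.
```
# Gini impurity for a vector of class proportions
gini <- function(p) 1 - sum(p^2)
gini(c(0.75, 0.25))                # 0.375
gini(c(0.05, 0.95))                # 0.095
# weighted impurity of the whole split (equal-sized subsets)
0.5 * gini(c(0.75, 0.25)) + 0.5 * gini(c(0.05, 0.95))  # 0.235
```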
### 5.12.2 Trees to forests
Random forests are devised to counter the shortcomings of decision trees. They are simply ensembles of decision trees. Each tree is trained with a different randomly selected part of the data with randomly selected predictor variables. The goal of introducing randomness is to reduce the variance of the model so it does not overfit, at the expense of a small increase in the bias and some loss of interpretability. This strategy generally boosts the performance of the final model.
The random forests algorithm tries to decorrelate the trees so that they learn different things about the data. It does this by selecting a random subset of variables for each tree. If one or a few predictor variables are very strong predictors for the response variable, these features will be selected in many of the trees, causing them to become correlated. Random subsampling of predictor variables ensures that the overall best predictors are not always selected for every tree, and the model has a chance to learn other features of the data.
Another sampling method introduced when building random forest models is bootstrap resampling before constructing each tree. This brings the advantage of out-of-bag (OOB) error prediction: for some percentage of the trees, the prediction error can be estimated on training samples that were OOB, meaning they were not used to train those trees. The prediction error for each sample can be estimated from the trees where that sample was OOB. OOB estimates are claimed to be a good alternative to cross-validation estimated errors (Breiman [2001](#ref-breiman2001random)).
FIGURE 5.9: Random forest concept. Individual decision trees are built with sampling strategies. Votes from each tree define the final class.
For demonstration purposes, we will use the `caret` package interface to the `ranger` random forest package, which is a fast implementation of the original random forest algorithm. For random forests, one of the most critical arguments is the number of predictor variables to sample in each split of the tree (`mtry`). This parameter controls the independence between the trees and, as explained before, limits overfitting; it defaults to the square root of the number of predictor variables. Below, we are going to fit a random forest model to our tumor subtype problem. For simplicity, we will set `mtry=100` and not perform the training procedure to find its best value. However, it is good practice to run the model with cross-validation and let it pick the best parameters based on the cross-validation performance. Another parameter we can tune is the minimum size of terminal nodes in the trees (`min.node.size`). This controls the depth of the trees grown. Setting this to larger numbers might cost a small loss in accuracy, but the algorithm will run faster.
```
library(caret)
set.seed(17)

# we will do no resampling-based prediction error estimation,
# although it is advised to do so even for random forests
trctrl <- trainControl(method = "none")

# we will now train the random forest model
rfFit <- train(subtype~.,
               data = training,
               method = "ranger",
               trControl=trctrl,
               importance="permutation", # calculate importance
               tuneGrid = data.frame(mtry=100,
                                     min.node.size = 1,
                                     splitrule="gini"))

# print OOB error
rfFit$finalModel$prediction.error
```
```
## [1] 0.01538462
```
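For reference, a tuning run of the kind recommended above could look like the following sketch; the `mtry` grid values here are illustrative, and the run can take a while with thousands of predictors.
```
# illustrative tuning run: let 5-fold cross-validation pick mtry
trctrlCV <- trainControl(method = "cv", number = 5)
rfGrid <- expand.grid(mtry = c(50, 100, 200),  # grid values are illustrative
                      min.node.size = 1,
                      splitrule = "gini")
rfFitCV <- train(subtype~., data = training,
                 method = "ranger",
                 trControl = trctrlCV,
                 tuneGrid = rfGrid)
rfFitCV$bestTune  # parameters picked by cross-validation accuracy
```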
### 5.12.3 Variable importance
Random forests come with built-in variable importance metrics. One of these metrics is similar to the “variable dropout metric”, where the predictor variables are permuted. In this case, OOB samples are used and the variables are permuted one at a time. Each time, the samples with the permuted variable are fed to the trees and the decrease in accuracy is measured. Using this quantity, the variables can be ranked.
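To illustrate the idea, below is a minimal sketch of permutation importance computed by hand. Note that the built-in metric uses OOB samples per tree; this simplified version uses a held-out set and the full model, so the numbers will differ, and looping over thousands of genes is slow.
```
# simplified permutation importance on a held-out set (illustrative)
permImp <- function(model, data, response) {
  baseline <- mean(predict(model, data) == data[[response]])
  vars <- setdiff(colnames(data), response)
  sapply(vars, function(v) {
    shuffled <- data
    shuffled[[v]] <- sample(shuffled[[v]])   # permute one variable
    baseline - mean(predict(model, shuffled) == data[[response]])
  })
}
# e.g. head(sort(permImp(rfFit, testing, "subtype"), decreasing = TRUE))
```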
A less costly method with similar performance is to use Gini impurity. Every time a variable is used in a tree to make a split, the Gini impurity of the resulting nodes is lower than that of the parent node. This method adds up these Gini impurity decreases for each individual variable across the trees and divides the sum by the number of trees in the forest. This metric is often consistent with the permutation importance measure (Breiman [2001](#ref-breiman2001random)). Below, we are going to plot the permutation-based importance metric, which was calculated during the run of the model above. We will use the `caret::varImp()` function to access the importance values and plot them using the `plot()` function; the result is shown in Figure [5.10](trees-and-forests-random-forests-in-action.html#fig:RFvarImp).
```
plot(varImp(rfFit),top=10)
```
FIGURE 5.10: Top 10 important variables based on the permutation-based method for the random forest classification.
5.13 Logistic regression and regularization
--------------------------------------------
Logistic regression is a statistical method that is used to model a binary response variable based on predictor variables. Although initially devised for two-class or binary response problems, this method can be generalized to multiclass problems. However, our example tumor sample data is a binary response, or two-class, problem, therefore we will not go into the multiclass case in this chapter.
Logistic regression is conceptually very similar to linear regression, and it can be thought of as a “maximum likelihood estimation” problem where we are trying to find statistical parameters that maximize the likelihood of the observed data being sampled from the statistical distribution of interest. This is also closely related to the general cost/loss function approach we see in supervised machine learning algorithms. In the case of binary response variables, the simple linear regression model, such as \(y_i \sim \beta_0 + \beta_1 x_i\), would be a poor choice because it can easily generate values outside of the \(0\) to \(1\) boundary. What we need is a model that restricts the lower bound of the prediction to zero and the upper bound to \(1\). The first step towards this requirement is to formulate the problem differently. If \(y_i\) can only be \(0\) or \(1\), we can formulate \(y_i\) as a realization of a random variable that can take the values one and zero with probabilities \(p_i\) and \(1-p_i\), respectively. This random variable follows the Bernoulli distribution, and instead of predicting the binary variable we can formulate the problem as \(p_i \sim \beta_0 + \beta_1 x_i\). However, our initial problem still stands: simple linear regression will still result in values beyond the \(0\) and \(1\) boundaries. A model that satisfies the boundary requirement is the logistic equation shown below.
\[
p_i = \frac{e^{\beta_0 + \beta_1 x_i}}{1 + e^{\beta_0 + \beta_1 x_i}}
\]
This equation can be linearized by the following transformation:
\[
\operatorname{logit}(p_i) = \ln\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_i
\]
The left-hand side is termed the logit, which stands for “logistic unit”. It is also known as the log odds. In this case, our model will produce values on the log-odds scale, and with the logistic equation above we can transform the values to the \(0-1\) range. Now, the question remains: “What are the best parameter estimates for our training set?” Within the maximum likelihood framework we have touched upon in Chapter [3](stats.html#stats), the best parameter estimates are the ones that maximize the likelihood of the statistical model actually producing the observed data. You can think of this as fitting a probability distribution to an observed data set. The parameters of the probability distribution should maximize the likelihood that the observed data came from the distribution in question. If we were using a Gaussian distribution, we would change the mean and variance parameters until the observed data were most plausible to be drawn from that specific Gaussian distribution.
In logistic regression, the response variable is modeled with a binomial distribution or its special case, the Bernoulli distribution. The value of each response variable, \(y_i\), is 0 or 1, and we need to figure out parameter \(p_i\) values that could generate such a distribution of 0s and 1s. If we can find the best \(p_i\) values for each tumor sample \(i\), we would be maximizing the log-likelihood function of the model over the observed data. The log-likelihood function for our binary response variable case is shown in Equation [(5.1)](logistic-regression-and-regularization.html#eq:logLik).
\[\begin{equation}
\ln(L) = \sum_{i=1}^{N} \left[ \ln(1-p_i) + y_i \ln\left(\frac{p_i}{1-p_i}\right) \right]
\tag{5.1}
\end{equation}\]
In order to maximize this equation we have to find the optimum \(p_i\) values, which depend on the parameters \(\beta_0\) and \(\beta_1\), as well as on the values of the predictor variables \(x_i\). We can rearrange the equation by replacing \(p_i\) with the logistic equation. In addition, many optimization functions minimize rather than maximize. Therefore, we will be using the negative log-likelihood, which is also called the “log loss” or “logistic loss” function. The function below is the “log loss” function; we substituted \(p_i\) with the logistic equation and simplified the expression.
\[\begin{equation}
L_{log} = -\ln(L) = -\sum_{i=1}^{N} \left[ -\ln(1 + e^{\beta_0 + \beta_1 x_i}) + y_i(\beta_0 + \beta_1 x_i) \right]
\tag{5.2}
\end{equation}\]
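To make Equation (5.2) concrete, the sketch below evaluates the log loss for a candidate \((\beta_0, \beta_1)\) pair; the example call uses the PDPN gene introduced below, with the subtype coded as 0/1.
```
# log loss for a single-predictor logistic model (Eq. 5.2)
logLoss <- function(beta0, beta1, x, y) {
  z <- beta0 + beta1 * x          # linear predictor
  sum(log(1 + exp(z)) - y * z)    # equals -ln(L)
}
# e.g. with x a gene expression vector and y the subtype coded as 0/1:
# logLoss(0, 0.5, training$PDPN, ifelse(training$subtype == "CIMP", 1, 0))
```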
Now, let us see how this works in practice. First, as in the examples above, we will use one predictor variable, the expression of a single gene, to classify tumor samples into “CIMP” and “noCIMP” subtypes. We will be using PDPN gene expression, which was one of the most important variables in our random forest model. We will use the formula interface in `caret`, where we supply the names of the response and predictor variables in a formula. In this case, we will be using a core R function, `glm()`, from the `stats` package. “glm” stands for generalized linear models, and it is the main interface for different types of regression in R.
```
# fit logistic regression model
# the method and family arguments define the type of regression;
# in this case these arguments mean that we are doing logistic
# regression
lrFit = train(subtype ~ PDPN,
              data=training, trControl=trainControl("none"),
              method="glm", family="binomial")

# create data to plot the sigmoid curve
newdat <- data.frame(PDPN=seq(min(training$PDPN),
                              max(training$PDPN),len=100))

# predict probabilities for the simulated data
newdat$subtype = predict(lrFit, newdata=newdat, type="prob")[,1]

# plot the sigmoid curve and the training data
plot(ifelse(subtype=="CIMP",1,0) ~ PDPN,
     data=training, col="red4",
     ylab="subtype as 0 or 1", xlab="PDPN expression")
lines(subtype ~ PDPN, newdat, col="green4", lwd=2)
```
FIGURE 5.11: Sigmoid curve for prediction of subtype based on one predictor variable.
Figure [5.11](logistic-regression-and-regularization.html#fig:logReg1) shows the sigmoidal curve that is fitted by the logistic regression. The “noCIMP” subtype has higher expression of the PDPN gene than the “CIMP” subtype. In other words, the higher the values of PDPN, the more likely that the tumor sample will be classified as “noCIMP”. We can also assess the performance of our model with the test set and the training set. Let us try to do that again with the `caret::predict()` and `caret::confusionMatrix()` functions.
```
# training accuracy
class.res=predict(lrFit,training[,-1])
confusionMatrix(training[,1],class.res)$overall[1]
```
```
## Accuracy
## 0.9461538
```
```
# test accuracy
class.res=predict(lrFit,testing[,-1])
confusionMatrix(testing[,1],class.res)$overall[1]
```
```
## Accuracy
## 0.9259259
```
The test accuracy is slightly worse than the training accuracy. Overall this is not as good as k-NN, but remember we used only one predictor variable. We have thousands of genes as predictor variables. Now we will try to use all of them in the classification problem. After fitting the model, we will check training and test accuracy. We fit the model again with the `caret::train()` function.
```
lrFit2 = train(subtype ~ .,
               data=training,
               # no model tuning with sampling
               trControl=trainControl("none"),
               method="glm", family="binomial")

# training accuracy
class.res=predict(lrFit2,training[,-1])
confusionMatrix(training[,1],class.res)$overall[1]
```
```
## Accuracy
## 1
```
```
# test accuracy
class.res=predict(lrFit2,testing[,-1])
confusionMatrix(testing[,1],class.res)$overall[1]
```
```
## Accuracy
## 0.4259259
```
Training accuracy is \(1\), so training error is \(0\); nothing is misclassified in the training set. However, the test accuracy is terrible: the model does worse than a random guess. If we randomly assigned class labels we would expect an accuracy around 0.5, yet the test set accuracy is 0.43 despite the 100% training accuracy. This is because the model overfits to the training data. There are too many variables in the model: the number of predictor variables is ~6.5 times the number of samples. This excess of predictor variables makes the model very flexible (high variance), and this leads to overfitting.
### 5.13.1 Regularization in order to avoid overfitting
If we can limit the flexibility of the model, this might help with performance on the unseen, new data sets. Generally, any modification of the learning method to improve performance on the unseen datasets is called regularization. We need regularization to introduce bias to the model and to decrease the variance. This can be achieved by modifying the loss function with a penalty term which effectively shrinks the estimates of the coefficients. Therefore these types of methods within the framework of regression are also called “shrinkage” methods or “penalized regression” methods.
One way to ensure shrinkage is to add a penalty term, \(\lambda\sum\beta_j^2\), to the loss function. This penalty is based on the L2 norm, which is the square root of the sum of the squared vector values; the penalty itself uses the sum of the squared coefficients, and is therefore called the L2 penalty. This term will help shrink the coefficients in the regression towards zero. The new loss function is as follows, where \(p\) is the number of parameters/coefficients in the model, \(j\) indexes them, and \(L_{log}\) is the log loss function in Eq. [(5.2)](logistic-regression-and-regularization.html#eq:llog).
\[\begin{equation}
L_{log} + \lambda\sum_{j=1}^{p} \beta_j^2
\tag{5.3}
\end{equation}\]
This penalized loss function is called “ridge regression” (Hoerl and Kennard [1970](#ref-hoerl1970ridge)). When we add the penalty, the only way the optimization procedure can keep the overall loss function at a minimum is to assign smaller values to the coefficients. The \(\lambda\) parameter controls how much emphasis is given to the penalty term. The higher the \(\lambda\) value, the more the coefficients in the regression will be pushed towards zero. However, they will never be exactly zero, which is not desirable if we want the model to select important variables. A small modification to the penalty is to use the absolute values of \(\beta_j\) instead of the squared values. This penalty is called the “L1 norm” or “L1 penalty”. The regression method that uses the L1 penalty is known as “lasso regression” (Tibshirani [1996](#ref-tibshirani1996regression)).
\[
L_{log} + \lambda\sum_{j=1}^{p} |\beta_j|
\]
However, the L1 penalty tends to pick one variable at random when predictor variables are correlated. In that case, it looks like one of the variables is not important although it might still have predictive power. Ridge regression, on the other hand, shrinks coefficients of correlated variables towards each other, keeping all of them. It has been shown that both lasso and ridge regression have their drawbacks and advantages (Friedman, Hastie, and Tibshirani [2010](#ref-friedman2010regularization)). More recently, a method called “elastic net” was proposed to combine the best of both worlds (Zou and Hastie [2005](#ref-zou2005regularization)). This method uses both L1 and L2 penalties. The equation below shows the loss function modified by this penalty. As you can see, the \(\lambda\) parameter still controls the weight that is given to the penalty. This time the additional parameter \(\alpha\) controls the weight given to the L1 or L2 penalty, and it is a value between 0 and 1.
\[
L_{log} + \lambda\sum_{j=1}^{p} \left( \alpha\beta_j^2 + (1-\alpha)|\beta_j| \right)
\]
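A small sketch of these penalty terms as R functions may help; they follow the equations above, with \(\lambda\) and \(\alpha\) weighting the terms exactly as written.
```
# penalty terms for a coefficient vector, following the equations above
ridgePen <- function(beta, lambda) lambda * sum(beta^2)
lassoPen <- function(beta, lambda) lambda * sum(abs(beta))
enetPen  <- function(beta, lambda, alpha)
  lambda * sum(alpha * beta^2 + (1 - alpha) * abs(beta))
# the penalized loss is then the log loss plus the chosen penalty term
```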
Now that we have the concept behind regularization, we can see how it works in practice. We are going to use elastic net on our tumor subtype prediction problem. We will let cross-validation select the best \(\lambda\) and we will fix the \(\alpha\) parameter at \(0.5\).
```
set.seed(17)
library(glmnet)

# this object controls everything about training;
# we will just set up 10-fold cross-validation
trctrl <- trainControl(method = "cv", number = 10)

# we will now train the elastic net model;
# it will try the lambda values supplied in the tune grid
enetFit <- train(subtype~., data = training,
                 method = "glmnet",
                 trControl=trctrl,
                 # alpha and lambda parameters to try
                 tuneGrid = data.frame(alpha=0.5,
                                       lambda=seq(0.1,0.7,0.05)))

# best alpha and lambda values by cross-validation accuracy
enetFit$bestTune
```
```
## alpha lambda
## 1 0.5 0.1
```
```
# test accuracy
class.res=predict(enetFit,testing[,-1])
confusionMatrix(testing[,1],class.res)$overall[1]
```
```
## Accuracy
## 0.9814815
```
As you can see, regularization worked: the tuning step selected \(\lambda=0.1\), and we were able to get a satisfactory test set accuracy with the best model.
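Since the L1 part of the penalty zeroes out coefficients, one way to see which variables survived is to extract the nonzero coefficients from the underlying `glmnet` fit. This is a sketch, relying on the fact that for a binary outcome `coef()` on a glmnet object returns a single sparse matrix.
```
# nonzero coefficients at the selected lambda
coefs <- coef(enetFit$finalModel, s = enetFit$bestTune$lambda)
selected <- rownames(coefs)[coefs[, 1] != 0]
head(selected)  # surviving variables (the first entry is the intercept)
```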
### 5.13.2 Variable importance
Variable importance for penalized regression, especially for lasso and elastic net, more or less comes out of the box. As discussed, these methods set the regression coefficients of irrelevant variables to zero. This provides a system for selecting important variables, but it does not necessarily provide a way to rank them. Using the size of the regression coefficients is one way to rank predictor variables; however, if the data is not normalized, the coefficients will be on different scales for different variables. In our case, we normalized the data, so we know that the variables had the same scale before they went into the training. We can use this fact and rank them based on the regression coefficients. The `caret::varImp()` function uses the coefficients to rank the variables from the elastic net model. Below, we are going to plot the top 10 important variables, which are normalized to the importance of the most important variable.
```
plot(varImp(enetFit),top=10)
```
FIGURE 5.12: Variable importance metric for elastic net. This metric uses regression coefficients as importance.
**Want to know more?**
* Lecture by Trevor Hastie on regularized regression. You probably need to understand the basics of regression and its terminology to follow this. However, the lecture is not very heavy on math. <https://youtu.be/BU2gjoLPfDc>
5.14 Other supervised algorithms
---------------------------------
We will next introduce a couple of other supervised algorithms for completeness, but in less detail. These algorithms are as popular as the ones we introduced above, and people interested in computational genomics will see them used in the field for different problems. They also fit the general framework of optimizing a cost/loss function. However, the approaches to the construction of the cost function, and the cost function itself, differ in each case.
### 5.14.1 Gradient boosting
Gradient boosting is a prediction model that uses an ensemble of decision trees, similar to random forest. However, the decision trees are added sequentially, which is why these models are also called “Multiple Additive Regression Trees (MART)” (Friedman and Meulman [2003](#ref-friedman2003mart)). Apart from this, you will see similar methods called “gradient boosting machines (GBM)” (J. H. Friedman [2001](#ref-friedman2001gbm)) or “boosted regression trees (BRT)” (Elith, Leathwick, and Hastie [2008](#ref-elith2008brt)) in the literature.
Generally, “boosting” refers to an iterative learning approach where each new model tries to focus on data points where the previous ensemble of simple models did not predict well. Gradient boosting is an improvement over that, where each new model tries to focus on the residual errors (prediction error for the current ensemble of models) of the previous model. Specifically in gradient boosting, the simple models are trees. As in random forests, many trees are grown, but in this case the trees are grown sequentially and each tree focuses on fixing the shortcomings of the previous trees. Figure [5.13](other-supervised-algorithms.html#fig:GBMcartoon) shows this concept. One of the most widely used algorithms for gradient boosting is `XGboost`, which stands for “extreme gradient boosting” (Chen and Guestrin [2016](#ref-chen2016xgboost)). Below we will demonstrate how to use this on our problem. `XGboost`, as well as other gradient boosting methods, has many parameters to regularize and optimize the complexity of the model. Finding the best parameters for your problem might take some time. However, this flexibility comes with benefits; methods depending on `XGboost` have won many machine learning competitions (Chen and Guestrin [2016](#ref-chen2016xgboost)).
FIGURE 5.13: Gradient boosting machines concept. Individual decision trees are built sequentially in order to fix the errors from the previous trees.
The most important parameters are the number of trees (`nrounds`), tree depth (`max_depth`), and learning rate or shrinkage (`eta`). Generally, the more trees we have, the better the algorithm will learn, because each tree tries to fix the classification errors that the previous tree ensemble could not. Having too many trees might cause overfitting. However, the learning rate parameter, `eta`, combats that by shrinking the contribution of each new tree. This can be set to lower values if you have many trees. You can either set a large number of trees and then tune the model with the learning rate parameter, or set the learning rate low, say to \(0.01\) or \(0.1\), and tune the number of trees. Similarly, tree depth also controls for overfitting: the deeper the tree, the more likely it will overfit. This has to be tuned as well; the default is 6. You can try to explore a range around the default. Apart from these, as in random forests, you can subsample the training data and/or the predictor variables. These strategies can also help you counter overfitting.
We are now going to use `XGboost` with the caret package on our cancer subtype classification problem. We are going to try different learning rate parameters. In this instance, we also subsample the dataset before we train each tree. The `subsample` parameter controls this, and we set it to 0.5, which means that before we train a tree we will sample 50% of the data and use only that portion to train the tree.
```
library(xgboost)
set.seed(17)

# we will just set up 5-fold cross-validation
trctrl <- trainControl(method = "cv", number = 5)

# we will now train the gradient boosting model;
# it will try the eta values in the tune grid
gbFit <- train(subtype~., data = training,
               method = "xgbTree",
               trControl=trctrl,
               # parameters to try
               tuneGrid = data.frame(nrounds=200,
                                     eta=c(0.05,0.1,0.3),
                                     max_depth=4,
                                     gamma=0,
                                     colsample_bytree=1,
                                     subsample=0.5,
                                     min_child_weight=1))

# best parameters by cross-validation accuracy
gbFit$bestTune
```
```
## nrounds max_depth eta gamma colsample_bytree min_child_weight subsample
## 2 200 4 0.1 0 1 1 0.5
```
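We do not evaluate this model on the test set in the text above; a quick check in the same style as the earlier models could look like this (output not shown).
```
# test set accuracy for the gradient boosting model
class.res=predict(gbFit,testing[,-1])
confusionMatrix(testing[,1],class.res)$overall[1]
```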
Similar to random forests, we can estimate the variable importance for gradient boosting using the improvement in Gini impurity or other performance-related metrics every time a variable is selected in a tree. Again, the `caret::varImp()` function can be used to plot the importance metrics.
**Want to know more?**
* More background on gradient boosting and XGboost: (<https://xgboost.readthedocs.io/en/latest/tutorials/model.html>). This explains the cost/loss function and regularization in more detail.
* Lecture on Gradient boosting and random forests by Trevor Hastie: (<https://youtu.be/wPqtzj5VZus>)
### 5.14.2 Support Vector Machines (SVM)
Support vector machines (SVM) were popularized in the 90s due to the efficiency and performance of the algorithm (Boser, Guyon, and Vapnik [1992](#ref-boser1992svm)). The algorithm works by identifying the optimal decision boundary that separates the data points into different groups (or classes), and then predicts the class of new observations based on this separation boundary. Depending on the situation, the different groups might be separable by a straight line or by a non-linear boundary line or plane. If you review the k-NN decision boundaries in Figure [5.7](model-tuning-and-avoiding-overfitting.html#fig:kNNboundary), you can see that the decision boundary is not linear. SVM can deal with linear or non-linear decision boundaries.
First, SVM can map the data to higher dimensions where the decision boundary can be linear. This is achieved by applying certain mathematical functions, called “kernel functions”, to the predictor variable space. For example, a second-degree polynomial can be applied to predictor variables, which creates new variables; in this new space the problem is linearly separable. Figure [5.14](other-supervised-algorithms.html#fig:SVMcartoon) demonstrates this concept, where points in feature space are mapped to a quadratic space where linear separation is possible.
FIGURE 5.14: Support vector machine concept. With the help of a kernel function, points in feature space are mapped to higher dimensions where linear separation is possible.
Second, SVM not only tries to find a decision boundary, but tries to find the boundary with the largest buffer zone on its sides. Having a boundary with a large buffer, or “margin” as it is formally called, will perform better for new data points not used in the model training (the margin is marked in Figure [5.14](other-supervised-algorithms.html#fig:SVMcartoon)). In addition, SVM calculates the decision boundary with some error tolerance. As we have seen, it may not always be possible to find a linear boundary that perfectly separates the classes. SVM tolerates some degree of error, as in data points on the wrong side of the decision boundary.
Another important feature of the algorithm is that SVM decides on the decision boundary by relying only on “landmark” data points, formally known as “support vectors”. These are the points that are closest to the decision boundary and harder to classify. By keeping track of only such points for decision boundary creation, the computational complexity of the algorithm is reduced. However, this depends on the margin or buffer zone: if the margin is large, there are many landmark points. The extent of the margin is also related to the variance-bias trade-off. If the allowed margin is small, the classification will try to find a boundary that makes fewer errors in the training set, and therefore might overfit. If the margin is larger, it will tolerate more errors in the training set and might generalize better. Practically, this is controlled by the “C” or “Cost” parameter in the SVM example we will show below. Another important choice we will make is the kernel function. Below we use the radial basis kernel function. This function provides an extra predictor dimension where the problem is linearly separable. The model we will use has only one tuned parameter, which is “C”. It is recommended that \(C\) is in the form of \(2^k\), where \(k\) is in the range of -5 to 15 (Hsu, Chang, Lin, et al. [2003](#ref-hsu2003practical)). Another parameter that can be tuned is related to the radial basis function, called “sigma”. A smaller sigma means less bias and more variance, while a larger sigma means less variance and more bias. Again, exponential sequences are recommended for tuning it (Hsu, Chang, Lin, et al. [2003](#ref-hsu2003practical)). We will set it to 1 for demonstration purposes below.
```
library(kernlab)
set.seed(17)

# we will just set up 5-fold cross-validation
trctrl <- trainControl(method = "cv", number = 5)

# we will now train the SVM model;
# it will try the cost values in the tune grid
svmFit <- train(subtype~., data = training,
                # this SVM uses the radial basis function
                method = "svmRadial",
                trControl=trctrl,
                tuneGrid=data.frame(C=c(0.25,0.5,1),
                                    sigma=1))
```
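As with the other models, the cross-validation results can be inspected afterwards; for example (output not shown):
```
# cross-validation accuracy for each cost value tried
svmFit$results[,c("C","Accuracy")]
svmFit$bestTune
```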
**Want to know more?**
* MIT lecture by Patrick Winston on SVM: <https://youtu.be/_PwhiWxHK8o>. This lecture explains the concept with some mathematical background. It is not hard to follow. You should be able to follow this if you know what vectors are and if you have some knowledge on derivatives and basic algebra.
* Online demo for SVM: (<https://cs.stanford.edu/people/karpathy/svmjs/demo/>). You can play with sigma and C parameters for radial basis SVM and see how they affect the decision boundary.
### 5.14.3 Neural networks and deep versions of it
Neural networks are another popular machine learning method which has recently regained popularity. The earlier versions of the algorithm were popularized in the 80s and 90s. The advantage of neural networks is that, like SVM, they can model non-linear decision boundaries. The basic idea of neural networks is to combine the predictor variables in order to model the response variable as a non-linear function. In a neural network, input variables pass through several layers that combine the variables, transform those combinations, and recombine the outputs, depending on how many layers the network has. In the conceptual example in Figure [5.15](other-supervised-algorithms.html#fig:neuralNetDiagram), the input nodes receive predictor variables and make linear combinations of them in the form of \(\sum(w_i x_i + b)\). Simply put, the variables are multiplied by weights and summed up; this is what we call a “linear combination”. These quantities are further fed into another layer, called the hidden layer, where an activation function is applied to the sums. These results are then fed into an output node, which outputs class probabilities, assuming we are working on a classification problem. There could be many more hidden layers that further combine the output from the hidden layers before them. The algorithm in the end also has a cost function, similar to the logistic regression cost function, but it now has to estimate all the weight parameters \(w_i\). This is a more complicated problem than logistic regression because of the number of parameters to be estimated, but neural networks are able to fit complex functions due to their parameter space flexibility.
FIGURE 5.15: Diagram for a simple neural network. Predictor variables are fed to the network, their combinations pass through hidden layers, and are combined again for the output; the weights are adjusted to optimize the cost function.
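To make the linear combination and activation steps concrete, here is a toy forward pass for a network like the one in Figure 5.15; the weights are randomly made up for illustration and do not come from a trained model.
```
# toy forward pass: 2 inputs -> 3 hidden nodes -> 1 output (illustrative)
sigmoid <- function(z) 1/(1 + exp(-z))
x  <- c(2.1, 0.3)                  # two predictor values for one sample
W1 <- matrix(rnorm(2*3), 2, 3)     # made-up weights: 2 inputs -> 3 hidden nodes
b1 <- rep(0.1, 3)
h  <- sigmoid(x %*% W1 + b1)       # hidden layer: activate linear combinations
w2 <- rnorm(3)
p  <- sigmoid(h %*% w2 + 0.1)      # output node: a class probability
p
```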
In a practical sense, the number of nodes in the hidden layer (`size`) and some regularization on the weights (`decay`) can be applied to control for overfitting.
We will train a simple neural network on our cancer data set. In this simple example, the network architecture is somewhat fixed: we can only choose the number of nodes in the hidden layer (denoted by “size”) and a regularization parameter (denoted by “decay”). Increasing the number of nodes in the hidden layer, or, in other implementations, increasing the number of hidden layers, will help model non-linear relationships but can overfit. One way to combat that is to limit the number of nodes in the hidden layer; another way is to regularize the weights. The decay parameter does just that: it penalizes the loss function by \(decay(weights^2)\). In the example below, we try 1 or 2 nodes in the hidden layer in the interest of simplicity and run-time. In addition, we set `decay=0`, which corresponds to not doing any regularization.
```
library(nnet)
set.seed(17)

# we will just set up 5-fold cross-validation
trctrl <- trainControl(method = "cv", number = 5)

# we will now train the neural net model;
# it will try 1 or 2 nodes in the hidden layer
nnetFit <- train(subtype~., data = training,
                 method = "nnet",
                 trControl=trctrl,
                 tuneGrid=data.frame(size=1:2,
                                     decay=0),
                 # this is the maximum number of weights
                 # needed for the nnet method
                 MaxNWts=2000)
```
The example we used above is a bit outdated. Modern “deep” neural networks provide much more flexibility in the number of nodes, the number of layers, and the regularization options. In many areas, especially computer vision, deep neural networks are the state-of-the-art (LeCun, Bengio, and Hinton [2015](#ref-lecun2015deep)). These modern implementations of neural networks are available in R via the `keras` package and can also be trained via the `caret` package with an interface similar to the one we have shown so far.
**Want to know more?**
* Deep neural networks in R: (<https://keras.rstudio.com/>). There are examples and background information on deep neural networks.
* Online demo for neural networks: (<https://cs.stanford.edu/~karpathy/svmjs/demo/demonn.html>). You can see the effect of the number of hidden layers and number of nodes on the decision boundary.
### 5.14.4 Ensemble learning
Ensemble learning models are simply combinations of different machine learning models. By now, we have already introduced the concept of ensemble learning in random forests and gradient boosting. However, this concept can be generalized to combining all kinds of different models. “Random forests” is an ensemble of the same type of model: decision trees. We can also have ensembles of different types of models. For example, we can combine random forest, k-NN and elastic net models, and make class predictions based on the votes from those different models. Below, we show how to do this. We are going to get predictions for three different models on the test set, use majority voting to decide on the class label, and then check performance using `caret::confusionMatrix()`.
```
# predict with k-NN model
knnPred=as.character(predict(knnFit,testing[,-1],type="class"))
# predict with elastic net model
enetPred=as.character(predict(enetFit,testing[,-1]))
# predict with random forest model
rfPred=as.character(predict(rfFit,testing[,-1]))

# do voting for class labels;
# the code finds the most frequent class label per row
votingPred=apply(cbind(knnPred,enetPred,rfPred),1,
                 function(x) names(which.max(table(x))))

# check accuracy
confusionMatrix(data=testing[,1],
                reference=as.factor(votingPred))$overall[1]
```
```
## Accuracy
## 0.9814815
```
In the test set, we were able to obtain a very high accuracy (0.98) after voting. More complicated and accurate ways to build ensembles exist. We could also use the mean of class probabilities instead of voting for final class predictions. We can even combine models in a regression-based scheme to assign weights to the votes or to the predicted class probabilities of each model. In these cases, the prediction performance of the ensembles can also be tested with sampling techniques such as cross-validation. You can think of this as another layer of optimization or modeling for combining results from different models. We will not pursue this further in this chapter, but packages such as [`caretEnsemble`](https://cran.r-project.org/web/packages/caretEnsemble/), [`SuperLearner`](https://cran.r-project.org/web/packages/SuperLearner/index.html) or [`mlr`](https://mlr.mlr-org.com/) can combine models in the various ways described above.
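For instance, the probability-averaging variant mentioned above could be sketched as follows, assuming all three models return class probabilities with columns in the same order, which holds when they were trained on the same factor levels.
```
# average class probabilities across models instead of voting (sketch)
knnProb  <- predict(knnFit, testing[,-1], type="prob")
enetProb <- predict(enetFit, testing[,-1], type="prob")
rfProb   <- predict(rfFit, testing[,-1], type="prob")
avgProb  <- (as.matrix(knnProb) + as.matrix(enetProb) + as.matrix(rfProb))/3
avgPred  <- colnames(avgProb)[max.col(avgProb)]  # class with highest mean probability
confusionMatrix(testing[,1], as.factor(avgPred))$overall[1]
```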
### 5\.14\.1 Gradient boosting
Gradient boosting is a prediction model that uses an ensemble of decision trees similar to random forest. However, the decision trees are added sequentially, which is why these models are also called “Multiple Additive Regression Trees (MART)” (Friedman and Meulman [2003](#ref-friedman2003mart)). Apart from this, you will see similar methods called “Gradient boosting machines (GBM)”(J. H. Friedman [2001](#ref-friedman2001gbm)) or “Boosted regression trees (BRT)” (Elith, Leathwick, and Hastie [2008](#ref-elith2008brt)) in the literature.
Generally, “boosting” refers to an iterative learning approach where each new model tries to focus on data points where the previous ensemble of simple models did not predict well. Gradient boosting is an improvement over that, where each new model tries to focus on the residual errors (prediction error for the current ensemble of models) of the previous model. Specifically in gradient boosting, the simple models are trees. As in random forests, many trees are grown but in this case, trees are sequentially grown and each tree focuses on fixing the shortcomings of the previous trees. Figure [5\.13](other-supervised-algorithms.html#fig:GBMcartoon) shows this concept. One of the most widely used algorithms for gradient boosting is `XGboost` which stands for “extreme gradient boosting” (Chen and Guestrin [2016](#ref-chen2016xgboost)). Below we will demonstrate how to use this on our problem. `XGboost` as well as other gradient boosting methods has many parameters to regularize and optimize the complexity of the model. Finding the best parameters for your problem might take some time. However, this flexibility comes with benefits; methods depending on `XGboost` have won many machine learning competitions (Chen and Guestrin [2016](#ref-chen2016xgboost)).
FIGURE 5\.13: Gradient boosting machines concept. Individual decision trees are built sequentially in order to fix the errors from the previous trees.
The most important parameters are number of trees (`nrounds`), tree depth (`max_depth`), and learning rate or shrinkage (`eta`). Generally, the more trees we have, the better the algorithm will learn because each tree tries to fix classification errors that the previous tree ensemble could not perform. Having too many trees might cause overfitting. However, the learning rate parameter, eta, combats that by shrinking the contribution of each new tree. This can be set to lower values if you have many trees. You can either set a large number of trees and then tune the model with the learning rate parameter or set the learning rate low, say to \\(0\.01\\) or \\(0\.1\\) and tune the number of trees. Similarly, tree depth also controls for overfitting. The deeper the tree, the more usually it will overfit. This has to be tuned as well; the default is at 6\. You can try to explore a range around the default. Apart from these, as in random forests, you can subsample the training data and/or the predictive variables. These strategies can also help you counter overfitting.
We are now going to use `XGboost` with the caret package on our cancer subtype classification problem. We are going to try different learning rate parameters. In this instance, we also subsample the dataset before we train each tree. The “subsample” parameter controls this and we set this to be 0\.5, which means that before we train a tree we will sample 50% of the data and use only that portion to train the tree.
```
library(xgboost)
set.seed(17)
# we will just set up 5-fold cross validation
trctrl <- trainControl(method = "cv",number=5)
# we will now train the gradient boosting model
# it will try the eta values below
gbFit <- train(subtype~., data = training,
method = "xgbTree",
trControl=trctrl,
# parameters to try
tuneGrid = data.frame(nrounds=200,
eta=c(0.05,0.1,0.3),
max_depth=4,
gamma=0,
colsample_bytree=1,
subsample=0.5,
min_child_weight=1))
# best parameters by cross-validation accuracy
gbFit$bestTune
```
```
## nrounds max_depth eta gamma colsample_bytree min_child_weight subsample
## 2 200 4 0.1 0 1 1 0.5
```
Similar to random forests, we can estimate the variable importance for gradient boosting using the improvement in gini impurity or other performance\-related metrics every time a variable is selected in a tree. Again, the `caret::varImp()` function can be used to plot the importance metrics.
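For example, assuming the `gbFit` object trained above, a sketch of plotting the top variables would be:
```
# plot the 10 most important variables for the boosting model
plot(varImp(gbFit), top = 10)
```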
**Want to know more ?**
* More background on gradient boosting and XGboost: (<https://xgboost.readthedocs.io/en/latest/tutorials/model.html>). This explains the cost/loss function and regularization in more detail.
* Lecture on Gradient boosting and random forests by Trevor Hastie: (<https://youtu.be/wPqtzj5VZus>)
### 5\.14\.2 Support Vector Machines (SVM)
Support vector machines (SVM) were popularized in the 90s due to the efficiency and the performance of the algorithm (Boser, Guyon, and Vapnik [1992](#ref-boser1992svm)). The algorithm works by identifying the optimal decision boundary that separates the data points into different groups (or classes), and then predicts the class of new observations based on this separation boundary. Depending on the situation, the different groups might be separable by a straight line or by a non\-linear boundary line or plane. If you review the k\-NN decision boundaries in Figure [5\.7](model-tuning-and-avoiding-overfitting.html#fig:kNNboundary), you can see that the decision boundary is not linear. SVM can deal with linear or non\-linear decision boundaries.
First, SVM can map the data to higher dimensions where the decision boundary can be linear. This is achieved by applying certain mathematical functions, called “kernel functions”, to the predictor variable space. For example, a second\-degree polynomial can be applied to the predictor variables, which creates new variables, and in this new space the problem is linearly separable. Figure [5\.14](other-supervised-algorithms.html#fig:SVMcartoon) demonstrates this concept, where points in feature space are mapped to quadratic space where linear separation is possible.
FIGURE 5\.14: Support vector machine concept. With the help of a kernel function, points in feature space are mapped to higher dimensions where linear separation is possible.
Second, SVM not only tries to find a decision boundary, but tries to find the boundary with the largest buffer zone on its sides. Having a boundary with a large buffer, or “margin” as it is formally called, will perform better for new data points not used in the model training (the margin is marked in Figure [5\.14](other-supervised-algorithms.html#fig:SVMcartoon)). In addition, SVM calculates the decision boundary with some error tolerance. As we have seen, it may not always be possible to find a linear boundary that perfectly separates the classes. SVM tolerates some degree of error, as in data points on the wrong side of the decision boundary.
Another important feature of the algorithm is that SVM decides on the decision boundary by relying only on the “landmark” data points, formally known as “support vectors”. These are the points that are closest to the decision boundary and harder to classify. By keeping track of only such points for decision boundary creation, the computational complexity of the algorithm is reduced. However, this depends on the margin or the buffer zone. If we have a large margin, then there are many landmark points. The extent of the margin is also related to the variance\-bias trade\-off. If the allowed margin is small, the classification will try to find a boundary that makes fewer errors in the training set, and therefore might overfit. If the margin is larger, it will tolerate more errors in the training set and might generalize better. Practically, this is controlled by the “C” or “Cost” parameter in the SVM example we will show below. Another important choice we will make is the kernel function. Below we use the radial basis kernel function. This function provides an extra predictor dimension where the problem is linearly separable. The model we will use has only one tuned parameter, which is “C”. It is recommended that \\(C\\) is in the form of \\(2^k\\) where \\(k\\) is in the range of \-5 to 15 (Hsu, Chang, Lin, et al. [2003](#ref-hsu2003practical)). Another parameter that can be tuned is related to the radial basis function, called “sigma”. In the `kernlab` implementation used here, a larger sigma yields a more flexible boundary (less bias and more variance), while a smaller sigma yields a smoother boundary (more bias and less variance). Again, exponential sequences are recommended for tuning it (Hsu, Chang, Lin, et al. [2003](#ref-hsu2003practical)). We will set it to 1 for demonstration purposes below.
```
#svm code here
library(kernlab)
set.seed(17)
# we will just set up 5-fold cross validation
trctrl <- trainControl(method = "cv",number=5)
# we will now train the SVM model
# it will try the cost values below
svmFit <- train(subtype~., data = training,
# this SVM uses a radial basis kernel function
method = "svmRadial",
trControl=trctrl,
tuneGrid=data.frame(C=c(0.25,0.5,1),
sigma=1))
```
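If you want to follow the exponential\-sequence recommendation above and also tune sigma, you could expand the grid over powers of 2. This is a sketch with illustrative ranges (`svmFit2` is just a name we picked); a full grid like this takes longer to run.
```
# exponential grids for C and sigma, as recommended above
svmGrid = expand.grid(C = 2^seq(-5, 15, by = 2),
                      sigma = 2^seq(-9, 1, by = 2))
svmFit2 = train(subtype~., data = training,
                method = "svmRadial",
                trControl = trctrl,
                tuneGrid = svmGrid)
# best parameter pair by cross-validation accuracy
svmFit2$bestTune
```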
**Want to know more ?**
* MIT lecture by Patrick Winston on SVM: <https://youtu.be/_PwhiWxHK8o>. This lecture explains the concept with some mathematical background. It is not hard to follow. You should be able to follow this if you know what vectors are and if you have some knowledge on derivatives and basic algebra.
* Online demo for SVM: (<https://cs.stanford.edu/people/karpathy/svmjs/demo/>). You can play with sigma and C parameters for radial basis SVM and see how they affect the decision boundary.
### 5\.14\.3 Neural networks and deep versions of it
Neural networks are another popular machine learning method which has recently regained popularity. The earlier versions of the algorithm were popularized in the 80s and 90s. The advantage of neural networks is that, like SVM, they can model non\-linear decision boundaries. The basic idea of neural networks is to combine the predictor variables in order to model the response variable as a non\-linear function. In a neural network, input variables pass through several layers that combine the variables, transform those combinations and recombine outputs depending on how many layers the network has. In the conceptual example in Figure [5\.15](other-supervised-algorithms.html#fig:neuralNetDiagram) the input nodes receive predictor variables and make linear combinations of them in the form of \\(\\sum\_{i} w\_{i}x\_{i} \+ b\\). Simply put, the variables are multiplied by weights and summed up. This is what we call a “linear combination”. These quantities are further fed into another layer called the hidden layer, where an activation function is applied to the sums. These results are then fed into an output node, which outputs class probabilities, assuming we are working on a classification problem. There could be many more hidden layers that will further combine the output from the hidden layers before them. The algorithm in the end also has a cost function similar to the logistic regression cost function, but it now has to estimate all the weight parameters: \\(w\_i\\). This is a more complicated problem than logistic regression because of the number of parameters to be estimated, but neural networks are able to fit complex functions due to their parameter space flexibility as well.
FIGURE 5\.15: Diagram for a simple neural network. Predictor variables are fed to the network, their combinations pass through hidden layers and are combined again for the output; weights are adjusted to optimize the cost function.
In a practical sense, the number of nodes in the hidden layer (“size”) and some regularization on the weights can be applied to control for overfitting. The regularization is controlled by the “decay” parameter.
We will train a simple neural network on our cancer data set. In this simple example, the network architecture is somewhat fixed. We can only choose the number of nodes (denoted by “size”) in the hidden layer and a regularization parameter (denoted by “decay”). Increasing the number of nodes in the hidden layer, or in other implementations increasing the number of hidden layers, will help model non\-linear relationships but can overfit. One way to combat that is to limit the number of nodes in the hidden layer; another way is to regularize the weights. The decay parameter does just that: it penalizes the loss function by \\(decay(weights^2\)\\). In the example below, we try 1 or 2 nodes in the hidden layer in the interest of simplicity and run\-time. In addition, we set `decay=0`, which will correspond to not doing any regularization.
```
# neural network code here
library(nnet)
set.seed(17)
# we will just set up 5-fold cross validation
trctrl <- trainControl(method = "cv",number=5)
# we will now train neural net model
# it will try the size values below
nnetFit <- train(subtype~., data = training,
method = "nnet",
trControl=trctrl,
tuneGrid=data.frame(size=1:2,decay=0
),
# this is maximum number of weights
# needed for the nnet method
MaxNWts=2000)
```
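After training, the cross\-validation accuracy for each hidden\-layer size we tried is stored in the fitted object, and the model can be used for prediction like any other `caret` model. A quick sketch, assuming `nnetFit` and the `testing` set from the earlier sections:
```
# cross-validation performance per tried parameter combination
nnetFit$results
# class predictions on the test set
nnetPred = predict(nnetFit, testing[,-1])
```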
The example we used above is a bit outdated. The modern “deep” neural networks provide much more flexibility in the number of nodes, number of layers and regularization options. In many areas, especially computer vision, deep neural networks are the state\-of\-the\-art (LeCun, Bengio, and Hinton [2015](#ref-lecun2015deep)). These modern implementations of neural networks are available in R via the `keras` package and can also be trained via the `caret` package with a similar interface to the one we have shown until now.
**Want to know more ?**
* Deep neural networks in R: (<https://keras.rstudio.com/>). There are examples and background information on deep neural networks.
* Online demo for neural networks: ([https://cs.stanford.edu/\~karpathy/svmjs/demo/demonn.html](https://cs.stanford.edu/~karpathy/svmjs/demo/demonn.html)). You can see the effect of the number of hidden layers and number of nodes on the decision boundary.
### 5\.14\.4 Ensemble learning
Ensemble learning models are simply combinations of different machine learning models. By now, we already introduced the concept of ensemble learning in random forests and gradient boosting. However, this concept can be generalized to combining all kinds of different models. “Random forests” is an ensemble of the same type of models, decision trees. We can also have ensembles of different types of models. For example, we can combine random forest, k\-NN and elastic net models, and make class predictions based on the votes from those different models. Below, we are showing how to do this. We are going to get predictions for three different models on the test set, use majority voting to decide on the class label, and then check performance using `caret::confusionMatrix()`.
```
# predict with k-NN model
knnPred=as.character(predict(knnFit,testing[,-1],type="class"))
# predict with elastic Net model
enetPred=as.character(predict(enetFit,testing[,-1]))
# predict with random forest model
rfPred=as.character(predict(rfFit,testing[,-1]))
# do voting for class labels
# code finds the most frequent class label per row
votingPred=apply(cbind(knnPred,enetPred,rfPred),1,
function(x) names(which.max(table(x))))
# check accuracy
confusionMatrix(data=testing[,1],
reference=as.factor(votingPred))$overall[1]
```
```
## Accuracy
## 0.9814815
```
In the test set, we were able to obtain 98% accuracy after voting. More complicated and accurate ways to build ensembles exist. We could also use the mean of class probabilities instead of voting for final class predictions. We can even combine models in a regression\-based scheme to assign weights to the votes or to the predicted class probabilities of each model. In these cases, the prediction performance of the ensembles can also be tested with sampling techniques such as cross\-validation. You can think of this as another layer of optimization or modeling for combining results from different models. We will not pursue this further in this chapter but packages such as [`caretEnsemble`](https://cran.r-project.org/web/packages/caretEnsemble/), [`SuperLearner`](https://cran.r-project.org/web/packages/SuperLearner/index.html) or [`mlr`](https://mlr.mlr-org.com/) can combine models in various ways described above.
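As a sketch of the probability\-averaging alternative mentioned above: assuming all three models can return class probabilities (for `caret` models this typically requires `classProbs=TRUE` in `trainControl()`), we could average the probabilities and pick the class with the highest mean.
```
# get class probabilities from each model
# (assumes the models were fit so that type="prob" is supported)
knnProb = predict(knnFit, testing[,-1], type="prob")
enetProb = predict(enetFit, testing[,-1], type="prob")
rfProb = predict(rfFit, testing[,-1], type="prob")
# average probabilities over models, keeping the class order fixed
classes = colnames(knnProb)
avgProb = (as.matrix(knnProb[,classes]) +
           as.matrix(enetProb[,classes]) +
           as.matrix(rfProb[,classes]))/3
# predicted class is the one with the highest mean probability
probPred = classes[max.col(avgProb)]
```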
5\.15 Predicting continuous variables: Regression with machine learning
-----------------------------------------------------------------------
Until now, we only considered methods that can help us predict class labels. However, all the methods we have shown can also be used to predict continuous variables. In this case, the methods will try to minimize the prediction error, which is usually in the form of the sum of squared errors (SSE): \\(SSE\=\\sum (Y\-f(X))^2\\), where \\(Y\\) is the continuous response variable and \\(f(X)\\) is the outcome of the machine learning model.
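As a quick worked example of the cost, here is the SSE computed directly on a few made\-up observed and predicted values:
```
Y = c(25, 40, 61) # observed values (made up)
pred = c(28, 38, 55) # model predictions (made up)
sum((Y - pred)^2) # SSE = 9 + 4 + 36 = 49
```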
In this section, we are going to show how to use a supervised learning method for regression. All the methods we have introduced previously in the context of classification can also do regression. Technically, this is just a simple change in the cost function format and the optimization step still tries to optimize the parameters of the cost function. In many cases, if your response variable is numeric, methods in the `caret` package will automatically apply regression.
### 5\.15\.1 Use case: Predicting age from DNA methylation
We will demonstrate random forest regression using a different data set which has a continuous response variable. This time we are going to try to predict the age of individuals from their DNA methylation levels. Methylation is a DNA modification which has implications in gene regulation and cell state. We have introduced DNA methylation in depth in Chapters [1](intro.html#intro) and [10](bsseq.html#bsseq), however for now, what we need to know is that there are about 24 million CpG dinucleotides in the human genome. Their methylation status can be measured with quantitative assays and the value is between 0 and 1\. If it is 0, the CpG is not methylated in any of the cells in the sample, and if it is 1, the CpG is methylated in all the cells of the sample. It has been shown that methylation is predictive of the age of the individual that the sample is taken from (Numata, Ye, Hyde, et al. [2012](#ref-numata2012dna); Horvath [2013](#ref-horvath2013dna)). Now, we will try to test that with a data set containing hundreds of individuals, their age, and methylation values for \~27000 CpGs. We first read in the files and construct a training set.
### 5\.15\.2 Reading and processing the data
Let us first read in the data. When we run the summary and histogram we see that the methylation values are between \\(0\\) and \\(1\\) and there are \\(108\\) samples (see Figure [5\.16](predicting-continuous-variables-regression-with-machine-learning.html#fig:readMethAge)). Typically, methylation values have a bimodal distribution. In this case, many of them have values around \\(0\\) and the second\-most frequent value bracket is around \\(0\.9\\).
```
# file path for CpG methylation and age
fileMethAge=system.file("extdata",
"CpGmeth2Age.rds",
package="compGenomRData")
# read methylation-age table
ameth=readRDS(fileMethAge)
dim(ameth)
```
```
## [1] 108 27579
```
```
summary(ameth[,1:3])
```
```
## Age cg26211698 cg03790787
## Min. :-0.4986 Min. :0.01223 Min. :0.05001
## 1st Qu.:-0.4027 1st Qu.:0.01885 1st Qu.:0.07818
## Median :18.8466 Median :0.02269 Median :0.08964
## Mean :25.9083 Mean :0.02483 Mean :0.09300
## 3rd Qu.:49.6110 3rd Qu.:0.02888 3rd Qu.:0.10423
## Max. :83.6411 Max. :0.04883 Max. :0.16271
```
```
# plot histogram of methylation values
hist(unlist(ameth[,-1]),border="white",
col="cornflowerblue",main="",xlab="methylation values")
```
FIGURE 5\.16: Histogram of methylation values in the training set for age prediction.
There are \\(\~27000\\) predictor variables. We can remove the ones that have low variation across samples. In this case, the methylation values are between \\(0\\) and \\(1\\). The CpGs that have low variation are not likely to have any association with age; they could simply be technical variation of the experiment. We will remove CpGs that have less than 0\.1 standard deviation.
```
ameth=ameth[,c(TRUE,matrixStats::colSds(as.matrix(ameth[,-1]))>0.1)]
dim(ameth)
```
```
## [1] 108 2290
```
### 5\.15\.3 Running random forest regression
Now we can use random forest regression to predict the age from methylation values. We are then going to plot the predicted vs. observed ages and see how good our predictions are. The resulting plots are shown in Figure [5\.17](predicting-continuous-variables-regression-with-machine-learning.html#fig:predictAge).
```
set.seed(18)
par(mfrow=c(1,2))
# we are not going to do any cross-validation
# and rely on OOB error
trctrl <- trainControl(method = "none")
# we will now train random forest model
rfregFit <- train(Age~.,
data = ameth,
method = "ranger",
trControl=trctrl,
# calculate importance
importance="permutation",
tuneGrid = data.frame(mtry=50,
min.node.size = 5,
splitrule="variance")
)
# plot Observed vs OOB predicted values from the model
plot(ameth$Age,rfregFit$finalModel$predictions,
pch=19,xlab="observed Age",
ylab="OOB predicted Age")
mtext(paste("R-squared",
format(rfregFit$finalModel$r.squared,digits=2)))
# plot residuals
plot(ameth$Age,(rfregFit$finalModel$predictions-ameth$Age),
pch=18,ylab="residuals (predicted-observed)",
xlab="observed Age",col="blue3")
abline(h=0,col="red4",lty=2)
```
FIGURE 5\.17: Observed vs. predicted age (Left). Residual plot showing that for older people the error increases (Right).
In this instance, we are using OOB errors and the \\(R^2\\) value, which shows how the model performs on OOB samples. The model can capture the general trend and it has acceptable OOB performance. It is not perfect, as it makes errors on average close to 10 years when predicting the age, and the errors are more severe for older people (Figure [5\.17](predicting-continuous-variables-regression-with-machine-learning.html#fig:predictAge)). This could be due to having fewer older people to model or missing/inadequate predictor variables. However, everything we discussed in classification applies here. We had even fewer data points than in the classification problem, so we did not split off a test data set. However, this should also be done for regression problems, especially when we are going to compare the performance of different models or want to have a better idea of the real\-world performance of our model. We might also be interested in which variables are most important, as in the classification problem; we can use the `caret::varImp()` function to get access to random\-forest\-specific variable importance metrics.
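For example, since we trained `rfregFit` with `importance="permutation"`, a sketch for plotting the most important CpGs would be:
```
# plot the 10 CpGs with the highest permutation importance
plot(varImp(rfregFit), top = 10)
```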
5\.16 Exercises
---------------
### 5\.16\.1 Classification
For this set of exercises we will be using the gene expression and patient annotation data from glioblastoma patients. You can read the data as shown below:
```
library(compGenomRData)
# get file paths
fileLGGexp=system.file("extdata",
"LGGrnaseq.rds",
package="compGenomRData")
fileLGGann=system.file("extdata",
"patient2LGGsubtypes.rds",
package="compGenomRData")
# gene expression values
gexp=readRDS(fileLGGexp)
# patient annotation
patient=readRDS(fileLGGann)
```
1. Our first task is to not use any data transformation and do classification. Run the k\-NN classifier on the data without any transformation or scaling. What is the effect on classification accuracy for k\-NN predicting the CIMP and noCIMP status of the patient? \[Difficulty: **Beginner**]
2. Bootstrap resampling can be used to measure the variability of the prediction error. Use bootstrap resampling with k\-NN for the prediction accuracy. How different is it from cross\-validation for different \\(k\\)s? \[Difficulty: **Intermediate**]
3. There are a number of ways to get variable importance for a classification problem. Run random forests on the classification problem above. Compare the variable importance metrics from random forest and the one obtained from DALEX. How many variables are the same in the top 10? \[Difficulty: **Advanced**]
4. Come up with a unified importance score by normalizing importance scores from random forests and DALEX, followed by taking the average of those scores. \[Difficulty: **Advanced**]
### 5\.16\.2 Regression
For this set of problems we will use the regression data set where we tried to predict the age of the sample from the methylation values. The data can be loaded as shown below:
```
# file path for CpG methylation and age
fileMethAge=system.file("extdata",
"CpGmeth2Age.rds",
package="compGenomRData")
# read methylation-age table
ameth=readRDS(fileMethAge)
```
1. Run random forest regression and plot the importance metrics. \[Difficulty: **Beginner**]
2. Split 20% of the methylation\-age data as test data and run elastic net regression on the training portion to tune parameters and test it on the test portion. \[Difficulty: **Intermediate**]
3. Run an ensemble model for regression using the **caretEnsemble** or **mlr** package and compare the results with the elastic net and random forest model. Did the test accuracy increase?
**HINT:** You need to install these extra packages and learn how to use them in the context of ensemble models. \[Difficulty: **Advanced**]
6\.1 Operations on genomic intervals with `GenomicRanges` package
-----------------------------------------------------------------
The [Bioconductor](http://bioconductor.org) project has a dedicated package called [`GenomicRanges`](http://www.bioconductor.org/packages/release/bioc/html/GenomicRanges.html) to deal with genomic intervals. In this section, we will provide use cases involving operations on genomic intervals. The main reason we will stick to this package is that it provides tools to do overlap operations. However, the package requires that users operate on specific data types that are conceptually similar to a tabular data structure implemented in a way that makes overlapping and related operations easier. The main object we will be using is called the `GRanges` object and we will also see some other related objects from the `GenomicRanges` package.
### 6\.1\.1 How to create and manipulate a GRanges object
`GRanges` (from `GenomicRanges` package) is the main object that holds the genomic intervals and extra information about those intervals. Here we will show how to create one. Conceptually, it is similar to a data frame and some operations such as using `[ ]` notation to subset the table will also work on `GRanges`, but keep in mind that not everything that works for data frames will work on `GRanges` objects.
```
library(GenomicRanges)
gr=GRanges(seqnames=c("chr1","chr2","chr2"),
ranges=IRanges(start=c(50,150,200),
end=c(100,200,300)),
strand=c("+","-","-")
)
gr
```
```
## GRanges object with 3 ranges and 0 metadata columns:
## seqnames ranges strand
## <Rle> <IRanges> <Rle>
## [1] chr1 50-100 +
## [2] chr2 150-200 -
## [3] chr2 200-300 -
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
```
# subset like a data frame
gr[1:2,]
```
```
## GRanges object with 2 ranges and 0 metadata columns:
## seqnames ranges strand
## <Rle> <IRanges> <Rle>
## [1] chr1 50-100 +
## [2] chr2 150-200 -
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
As you can see, it looks a bit like a data frame. Also, note that the peculiar second argument “ranges” basically contains the start and end positions of the genomic intervals. However, you cannot just give start and end positions; you actually have to provide them as an `IRanges` object. Do not let this confuse you; `GRanges` actually depends on another object that is very similar to itself called `IRanges`, and you have to provide the “ranges” argument as an `IRanges` object. In its simplest form, an `IRanges` object can be constructed by providing start and end positions to the `IRanges()` function. Think of it as something you just have to provide in order to construct the `GRanges` object.
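For example, a standalone `IRanges` object with arbitrary coordinates looks like this:
```
library(IRanges)
# an IRanges object holds only the intervals,
# no chromosome or strand information
ir = IRanges(start = c(50, 150, 200),
             end = c(100, 200, 300))
ir
```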
`GRanges` can also contain other information about the genomic interval such as scores, names, etc. You can provide extra information at the time of the construction or you can add it later. Here is how you can do that:
```
gr=GRanges(seqnames=c("chr1","chr2","chr2"),
ranges=IRanges(start=c(50,150,200),
end=c(100,200,300)),
names=c("id1","id3","id2"),
scores=c(100,90,50)
)
# or add it later (replaces the existing meta data)
mcols(gr)=DataFrame(name2=c("pax6","meis1","zic4"),
score2=c(1,2,3))
gr=GRanges(seqnames=c("chr1","chr2","chr2"),
ranges=IRanges(start=c(50,150,200),
end=c(100,200,300)),
names=c("id1","id3","id2"),
scores=c(100,90,50)
)
# or appends to existing meta data
mcols(gr)=cbind(mcols(gr),
DataFrame(name2=c("pax6","meis1","zic4")) )
gr
```
```
## GRanges object with 3 ranges and 3 metadata columns:
## seqnames ranges strand | names scores name2
## <Rle> <IRanges> <Rle> | <character> <numeric> <character>
## [1] chr1 50-100 * | id1 100 pax6
## [2] chr2 150-200 * | id3 90 meis1
## [3] chr2 200-300 * | id2 50 zic4
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
```
# elementMetadata() and values() do the same things
elementMetadata(gr)
```
```
## DataFrame with 3 rows and 3 columns
## names scores name2
## <character> <numeric> <character>
## 1 id1 100 pax6
## 2 id3 90 meis1
## 3 id2 50 zic4
```
```
values(gr)
```
```
## DataFrame with 3 rows and 3 columns
## names scores name2
## <character> <numeric> <character>
## 1 id1 100 pax6
## 2 id3 90 meis1
## 3 id2 50 zic4
```
```
# you may also add metadata using the $ operator, as for data frames
gr$name3 = c("A","C", "B")
gr
```
```
## GRanges object with 3 ranges and 4 metadata columns:
## seqnames ranges strand | names scores name2 name3
## <Rle> <IRanges> <Rle> | <character> <numeric> <character> <character>
## [1] chr1 50-100 * | id1 100 pax6 A
## [2] chr2 150-200 * | id3 90 meis1 C
## [3] chr2 200-300 * | id2 50 zic4 B
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
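Besides the metadata columns, the core coordinates of a `GRanges` object have their own accessor functions. Using the `gr` object from above:
```
# accessors for the core GRanges slots
seqnames(gr) # chromosome names
start(gr)    # start positions
end(gr)      # end positions
width(gr)    # interval widths
strand(gr)   # strand information
```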
### 6\.1\.2 Getting genomic regions into R as GRanges objects
There are multiple ways you can read your genomic features into R and create a `GRanges` object. Most genomic interval data comes in a tabular format that has the basic information about the location of the interval and some other information. We already showed how to read BED files as a data frame in Chapter [2](Rintro.html#Rintro). Now we will show how to convert it to the `GRanges` object. This is one way of doing it, but there are more convenient ways described further in the text.
```
# read CpGi data set
filePath=system.file("extdata",
"cpgi.hg19.chr21.bed",
package="compGenomRData")
cpgi.df = read.table(filePath, header = FALSE,
stringsAsFactors=FALSE)
# remove chr names with "_"
cpgi.df =cpgi.df [grep("_",cpgi.df[,1],invert=TRUE),]
cpgi.gr=GRanges(seqnames=cpgi.df[,1],
ranges=IRanges(start=cpgi.df[,2],
end=cpgi.df[,3]))
```
You may need to do some pre\-processing before/after reading in the BED file. Below is an example of getting transcription start sites from BED files containing RefSeq transcript locations.
```
# read refseq file
filePathRefseq=system.file("extdata",
"refseq.hg19.chr21.bed",
package="compGenomRData")
ref.df = read.table(filePathRefseq, header = FALSE,
stringsAsFactors=FALSE)
ref.gr=GRanges(seqnames=ref.df[,1],
ranges=IRanges(start=ref.df[,2],
end=ref.df[,3]),
strand=ref.df[,6],name=ref.df[,4])
# get TSS
tss.gr=ref.gr
# end of the + strand genes must be equalized to start pos
end(tss.gr[strand(tss.gr)=="+",]) =start(tss.gr[strand(tss.gr)=="+",])
# start of the - strand genes must be equalized to end pos
start(tss.gr[strand(tss.gr)=="-",])=end(tss.gr[strand(tss.gr)=="-",])
# remove duplicated TSSes ie alternative transcripts
# this keeps the first instance and removes duplicates
tss.gr=tss.gr[!duplicated(tss.gr),]
```
Another way of doing this from a BED file is to use the `readTranscriptFeatures()`
function from the `genomation` package. This function takes care of the steps described in the code chunk above.
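A sketch of how that would look on the RefSeq BED12 file we read above; the flank sizes shown are the function defaults:
```
library(genomation)
# returns a GRangesList with promoters, exons, introns and TSSes
feats = readTranscriptFeatures(filePathRefseq,
                               up.flank = 1000,   # bp upstream of TSS for promoters
                               down.flank = 1000) # bp downstream of TSS
names(feats)
```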
Reading the genomic features as text files and converting to `GRanges` is not the only way to create a `GRanges` object. With the help of the [`rtracklayer`](http://www.bioconductor.org/packages/release/bioc/html/rtracklayer.html) package we can directly import BED files.
```
require(rtracklayer)
# we are reading a BED file, the path to the file
# is stored in filePathRefseq variable
import.bed(filePathRefseq)
```
Next, we will show how to use other methods to automatically obtain the data in the `GRanges` format from online databases. But you will not be able to use these methods for every data set, so it is good to know how to read data from flat files as well. We will use the `rtracklayer` package to download data from the UCSC Genome Browser. We will download CpG islands as `GRanges` objects. The `rtracklayer` workflow we show below works like using the UCSC table browser. You need to select which species you are working with, then you need to select which dataset you need to download and lastly you download the UCSC dataset or track as a `GRanges` object.
```
require(rtracklayer)
session <- browserSession("UCSC",url = 'http://genome-euro.ucsc.edu/cgi-bin/')
genome(session) <- "mm9"
## choose CpG island track on chr12
query <- ucscTableQuery(session, track="CpG Islands",table="cpgIslandExt",
range=GRangesForUCSCGenome("mm9", "chr12"))
## get the GRanges object for the track
track(query)
```
There is also an interface to the Ensembl database called [biomaRt](https://bioconductor.org/packages/release/bioc/html/biomaRt.html).
This package will enable you to access and import all of the datasets included
in Ensembl. Another similar package is [AnnotationHub](https://bioconductor.org/packages/release/bioc/html/AnnotationHub.html).
This package is an aggregator for different datasets from various sources.
Using `AnnotationHub` one can access data sets from the UCSC browser, Ensembl browser
and datasets from genomics consortia such as ENCODE and Roadmap Epigenomics.
We provide examples of using the `biomaRt` package further in the chapter. In addition, the `AnnotationHub` package is used in Chapter [9](chipseq.html#chipseq).
#### 6\.1\.2\.1 Frequently used file formats and how to read them into R as a table
There are multiple file formats in genomics but some of them you will see more
frequently than others. We already mentioned some of them. Here is a list of files
and functions that can read them into R as `GRanges` objects or something coercible to
`GRanges` objects.
1. **BED**: This format is used and popularized by the UCSC browser, and can hold a variety of
information including exon/intron structure of transcripts in a single line. We will be using BED files in this chapter. In its simplest form, the BED file contains the chromosome name, the start position and end position for a genomic feature of interest.
* `genomation::readBed()`
* `genomation::readTranscriptFeatures()` good for getting intron/exon/promoters from BED12 files
* `rtracklayer::import.bed()`
2. **GFF**: GFF format is a tabular text format for genomic features similar to BED. However,
it is a more flexible format than BED, which makes it harder to parse at times. Many gene annotation files are in this format.
* `genomation::gffToGranges()`
* `rtracklayer::import.gff()`
3. **BAM/SAM**: BAM format is a compressed and indexed tabular file format designed for aligned sequencing reads. SAM is the uncompressed version of the BAM file. We will touch upon BAM files in this chapter. The uncompressed SAM file is similar in spirit to a BED file where you have the basic location of chromosomal location information plus additional columns that are related to the quality of alignment or other relevant information. We will introduce this format in detail later in this chapter.
* `GenomicAlignments::readGAlignments`
* `Rsamtools::scanBam` returns a data frame with columns from a SAM/BAM file.
4. **BigWig**: This is used for storing scores associated with genomic intervals. It is an indexed format. Similar to BAM, this makes it easier to query, and only the necessary portions
of the file can be loaded into memory.
* `rtracklayer::import.bw()`
5. **Generic Text files**: This represents any text file with the minimal information of chromosome, start and end coordinates.
* `genomation::readGeneric()`
6. **Tabix/Bcf**: These are tabular file formats indexed and compressed similar to
BAM. The following functions return lists rather than tabular data structures. These
formats are mostly used to store genomic variation data such as SNPs and indels.
* `Rsamtools::scanTabix`
* `Rsamtools::scanBcf`
### 6\.1\.3 Finding regions that do/do not overlap with another set of regions
This is one of the most common tasks in genomics. Usually, you have a set of regions that you are interested in and you want to see if they overlap with another set of regions or see how many of them overlap. A good example is transcription factor binding sites determined by [ChIP\-seq](http://en.wikipedia.org/wiki/ChIP-sequencing) experiments. We will introduce ChIP\-seq in more detail in Chapter [9](chipseq.html#chipseq). However, in these types of experiments and the following analysis, one usually ends up with genomic regions that are bound by transcription factors. One of the standard next questions would be to annotate binding sites with genomic annotations such as promoter, exon, intron and/or CpG islands, which are important for gene regulation. Below is a demonstration of how transcription factor binding sites can be annotated using CpG islands. First, we will get the subset of binding sites that overlap with the CpG islands. In this case, binding sites are ChIP\-seq peaks.
In the code snippet below, we read the ChIP\-seq analysis output files using the `genomation::readBroadPeak()` function. This function directly outputs a `GRanges` object. These output files are similar to BED files, where the locations of the predicted binding sites are written out in a tabular format with some analysis\-related scores and/or P\-values. After reading the files, we can find the subset of peaks that overlap with the CpG islands using the `subsetByOverlaps()` function.
```
library(genomation)
filePathPeaks=system.file("extdata",
"wgEncodeHaibTfbsGm12878Sp1Pcr1xPkRep1.broadPeak.gz",
package="compGenomRData")
# read the peaks from a bed file
pk1.gr=readBroadPeak(filePathPeaks)
# get the peaks that overlap with CpG islands
subsetByOverlaps(pk1.gr,cpgi.gr)
```
```
## GRanges object with 44 ranges and 5 metadata columns:
## seqnames ranges strand | name score signalValue
## <Rle> <IRanges> <Rle> | <character> <integer> <numeric>
## [1] chr21 9825360-9826582 * | peak14562 56 183.11
## [2] chr21 9968469-9968984 * | peak14593 947 3064.92
## [3] chr21 15755368-15755956 * | peak14828 90 291.90
## [4] chr21 19191579-19192525 * | peak14840 290 940.03
## [5] chr21 26979619-26980048 * | peak14854 32 104.67
## ... ... ... ... . ... ... ...
## [40] chr21 46237464-46237809 * | peak15034 32 106.36
## [41] chr21 46707702-46708084 * | peak15037 67 217.02
## [42] chr21 46961552-46961875 * | peak15039 38 124.31
## [43] chr21 47743587-47744125 * | peak15050 353 1141.58
## [44] chr21 47878412-47878891 * | peak15052 104 338.78
## pvalue qvalue
## <integer> <integer>
## [1] -1 -1
## [2] -1 -1
## [3] -1 -1
## [4] -1 -1
## [5] -1 -1
## ... ... ...
## [40] -1 -1
## [41] -1 -1
## [42] -1 -1
## [43] -1 -1
## [44] -1 -1
## -------
## seqinfo: 23 sequences from an unspecified genome; no seqlengths
```
For each peak, we can count the number of CpG islands it overlaps with using `GenomicRanges::countOverlaps()`.
```
counts=countOverlaps(pk1.gr,cpgi.gr)
head(counts)
```
```
## [1] 0 0 0 0 0 0
```
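Note that `countOverlaps()` returns one count per element of its first argument. If we instead want, for each CpG island, the number of peaks that overlap it, we simply swap the arguments:
```
# number of peaks overlapping each CpG island
cpgi.counts = countOverlaps(cpgi.gr, pk1.gr)
head(cpgi.counts)
```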
The `GenomicRanges::findOverlaps()` function can be used to see one\-to\-one overlaps between peaks and CpG islands. It returns a `Hits` object showing which peak overlaps which CpG island.
```
findOverlaps(pk1.gr,cpgi.gr)
```
```
## Hits object with 45 hits and 0 metadata columns:
## queryHits subjectHits
## <integer> <integer>
## [1] 14562 1
## [2] 14593 3
## [3] 14828 8
## [4] 14840 13
## [5] 14854 16
## ... ... ...
## [41] 15034 155
## [42] 15037 166
## [43] 15039 176
## [44] 15050 192
## [45] 15052 200
## -------
## queryLength: 26121 / subjectLength: 205
```
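The `Hits` object can be taken apart with the `queryHits()` and `subjectHits()` accessors, which give the matching indices in each input. For example:
```
ov = findOverlaps(pk1.gr, cpgi.gr)
# indices of the overlapping peaks and CpG islands
head(queryHits(ov))
head(subjectHits(ov))
# peaks involved in each overlap pair (one row per overlap)
pk1.gr[queryHits(ov)]
```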
Another interesting thing would be to look at the distances to the nearest CpG islands for each peak. In addition, just finding the nearest CpG island could also be interesting. Oftentimes, you will need to find the nearest TSS or gene to your regions of interest, and the code below is handy for doing that using the `nearest()` and `distanceToNearest()` functions; the resulting plot is shown in Figure [6\.2](operations-on-genomic-intervals-with-genomicranges-package.html#fig:findNearest).
```
# find the nearest CpGi to each peak
n.ind=nearest(pk1.gr,cpgi.gr)
# get distance to nearest
dists=distanceToNearest(pk1.gr,cpgi.gr,select="arbitrary")
dists
```
```
## Hits object with 620 hits and 1 metadata column:
## queryHits subjectHits | distance
## <integer> <integer> | <integer>
## [1] 14440 1 | 384188
## [2] 14441 1 | 382968
## [3] 14442 1 | 381052
## [4] 14443 1 | 379311
## [5] 14444 1 | 376978
## ... ... ... . ...
## [616] 15055 205 | 26212
## [617] 15056 205 | 27402
## [618] 15057 205 | 30468
## [619] 15058 205 | 31611
## [620] 15059 205 | 34090
## -------
## queryLength: 26121 / subjectLength: 205
```
```
# histogram of the distances to the nearest CpG island
dist2plot=mcols(dists)[,1]
hist(log10(dist2plot),xlab="log10(dist to nearest CpGi)",
main="Distances")
```
FIGURE 6\.2: Histogram of distances of peaks to the nearest CpG islands.
### 6\.1\.1 How to create and manipulate a GRanges object
`GRanges` (from `GenomicRanges` package) is the main object that holds the genomic intervals and extra information about those intervals. Here we will show how to create one. Conceptually, it is similar to a data frame and some operations such as using `[ ]` notation to subset the table will also work on `GRanges`, but keep in mind that not everything that works for data frames will work on `GRanges` objects.
```
library(GenomicRanges)
gr=GRanges(seqnames=c("chr1","chr2","chr2"),
ranges=IRanges(start=c(50,150,200),
end=c(100,200,300)),
strand=c("+","-","-")
)
gr
```
```
## GRanges object with 3 ranges and 0 metadata columns:
## seqnames ranges strand
## <Rle> <IRanges> <Rle>
## [1] chr1 50-100 +
## [2] chr2 150-200 -
## [3] chr2 200-300 -
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
```
# subset like a data frame
gr[1:2,]
```
```
## GRanges object with 2 ranges and 0 metadata columns:
## seqnames ranges strand
## <Rle> <IRanges> <Rle>
## [1] chr1 50-100 +
## [2] chr2 150-200 -
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
As you can see, it looks a bit like a data frame. Also, note that the peculiar second argument “ranges” basically contains the start and end positions of the genomic intervals. However, you cannot just give start and end positions, you actually have to provide another object of `IRanges`. Do not let this confuse you; `GRanges` actually depends on another object that is very similar to itself called `IRanges` and you have to provide the “ranges” argument as an `IRanges` object. In its simplest form, an `IRanges` object can be constructed by providing start and end positions to the `IRanges()` function. Think of it as something you just have to provide in order to construct the `GRanges` object.
`GRanges` can also contain other information about the genomic interval such as scores, names, etc. You can provide extra information at the time of the construction or you can add it later. Here is how you can do that:
```
gr=GRanges(seqnames=c("chr1","chr2","chr2"),
ranges=IRanges(start=c(50,150,200),
end=c(100,200,300)),
names=c("id1","id3","id2"),
scores=c(100,90,50)
)
# or add it later (replaces the existing meta data)
mcols(gr)=DataFrame(name2=c("pax6","meis1","zic4"),
score2=c(1,2,3))
gr=GRanges(seqnames=c("chr1","chr2","chr2"),
ranges=IRanges(start=c(50,150,200),
end=c(100,200,300)),
names=c("id1","id3","id2"),
scores=c(100,90,50)
)
# or appends to existing meta data
mcols(gr)=cbind(mcols(gr),
DataFrame(name2=c("pax6","meis1","zic4")) )
gr
```
```
## GRanges object with 3 ranges and 3 metadata columns:
## seqnames ranges strand | names scores name2
## <Rle> <IRanges> <Rle> | <character> <numeric> <character>
## [1] chr1 50-100 * | id1 100 pax6
## [2] chr2 150-200 * | id3 90 meis1
## [3] chr2 200-300 * | id2 50 zic4
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
```
# elementMetadata() and values() do the same things
elementMetadata(gr)
```
```
## DataFrame with 3 rows and 3 columns
## names scores name2
## <character> <numeric> <character>
## 1 id1 100 pax6
## 2 id3 90 meis1
## 3 id2 50 zic4
```
```
values(gr)
```
```
## DataFrame with 3 rows and 3 columns
## names scores name2
## <character> <numeric> <character>
## 1 id1 100 pax6
## 2 id3 90 meis1
## 3 id2 50 zic4
```
```
# you may also add metadata using the $ operator, as for data frames
gr$name3 = c("A","C", "B")
gr
```
```
## GRanges object with 3 ranges and 4 metadata columns:
## seqnames ranges strand | names scores name2 name3
## <Rle> <IRanges> <Rle> | <character> <numeric> <character> <character>
## [1] chr1 50-100 * | id1 100 pax6 A
## [2] chr2 150-200 * | id3 90 meis1 C
## [3] chr2 200-300 * | id2 50 zic4 B
## -------
## seqinfo: 2 sequences from an unspecified genome; no seqlengths
```
### 6\.1\.2 Getting genomic regions into R as GRanges objects
There are multiple ways you can read your genomic features into R and create a `GRanges` object. Most genomic interval data comes in a tabular format that has the basic information about the location of the interval and some other information. We already showed how to read BED files as a data frame in Chapter [2](Rintro.html#Rintro). Now we will show how to convert it to the `GRanges` object. This is one way of doing it, but there are more convenient ways described further in the text.
```
# read CpGi data set
filePath=system.file("extdata",
"cpgi.hg19.chr21.bed",
package="compGenomRData")
cpgi.df = read.table(filePath, header = FALSE,
stringsAsFactors=FALSE)
# remove chr names with "_"
cpgi.df =cpgi.df [grep("_",cpgi.df[,1],invert=TRUE),]
cpgi.gr=GRanges(seqnames=cpgi.df[,1],
ranges=IRanges(start=cpgi.df[,2],
end=cpgi.df[,3]))
```
You may need to do some pre\-processing before/after reading in the BED file. Below is an example of getting transcription start sites from BED files containing RefSeq transcript locations.
```
# read refseq file
filePathRefseq=system.file("extdata",
"refseq.hg19.chr21.bed",
package="compGenomRData")
ref.df = read.table(filePathRefseq, header = FALSE,
stringsAsFactors=FALSE)
ref.gr=GRanges(seqnames=ref.df[,1],
ranges=IRanges(start=ref.df[,2],
end=ref.df[,3]),
strand=ref.df[,6],name=ref.df[,4])
# get TSS
tss.gr=ref.gr
# end of the + strand genes must be equalized to start pos
end(tss.gr[strand(tss.gr)=="+",]) =start(tss.gr[strand(tss.gr)=="+",])
# startof the - strand genes must be equalized to end pos
start(tss.gr[strand(tss.gr)=="-",])=end(tss.gr[strand(tss.gr)=="-",])
# remove duplicated TSSes ie alternative transcripts
# this keeps the first instance and removes duplicates
tss.gr=tss.gr[!duplicated(tss.gr),]
```
Another way of doing this from a BED file is to use the `readTranscriptfeatures()`
function from the `genomation` package. This function takes care of the steps described in the code chunk above.
Reading the genomic features as text files and converting to `GRanges` is not the only way to create a `GRanges` object. With the help of the [`rtracklayer`](http://www.bioconductor.org/packages/release/bioc/html/rtracklayer.html) package we can directly import BED files.
```
require(rtracklayer)
# we are reading a BED file, the path to the file
# is stored in filePathRefseq variable
import.bed(filePathRefseq)
```
Next, we will show how to use other methods to automatically obtain the data in the `GRanges` format from online databases. But you will not be able to use these methods for every data set, so it is good to know how to read data from flat files as well. We will use the `rtracklayer` package to download data from the UCSC Genome Browser. We will download CpG islands as `GRanges` objects. The `rtracklayer` workflow we show below works like using the UCSC table browser. You need to select which species you are working with, then you need to select which dataset you need to download and lastly you download the UCSC dataset or track as a `GRanges` object.
```
require(rtracklayer)
session <- browserSession("UCSC",url = 'http://genome-euro.ucsc.edu/cgi-bin/')
genome(session) <- "mm9"
## choose CpG island track on chr12
query <- ucscTableQuery(session, track="CpG Islands",table="cpgIslandExt",
range=GRangesForUCSCGenome("mm9", "chr12"))
## get the GRanges object for the track
track(query)
```
There is also an interface to the Ensembl database called [biomaRt](https://bioconductor.org/packages/release/bioc/html/biomaRt.html).
This package will enable you to access and import all of the datasets included
in Ensembl. Another similar package is [AnnotationHub](https://bioconductor.org/packages/release/bioc/html/AnnotationHub.html).
This package is an aggregator for different datasets from various sources.
Using `AnnotationHub` one can access data sets from the UCSC browser, Ensembl browser
and datasets from genomics consortia such as ENCODE and Roadmap Epigenomics.
We provide examples of using `Biomart` package further into the chapter. In addition, the `AnnotationHub` package is used in Chapter [9](chipseq.html#chipseq).
#### 6\.1\.2\.1 Frequently used file formats and how to read them into R as a table
There are multiple file formats in genomics but some of them you will see more
frequently than others. We already mentioned some of them. Here is a list of files
and functions that can read them into R as `GRanges` objects or something coercible to
`GRanges` objects.
1. **BED**: This format is used and popularized by the UCSC browser, and can hold a variety of
information including exon/intron structure of transcripts in a single line. We will be using BED files in this chapter. In its simplest form, the BED file contains the chromosome name, the start position and end position for a genomic feature of interest.
* `genomation::readBed()`
* `genomation::readTranscriptFeatures()` good for getting intron/exon/promoters from BED12 files
* `rtracklayer::import.bed()`
2. **GFF**: GFF format is a tabular text format for genomic features similar to BED. However,
it is a more flexible format than BED, which makes it harder to parse at times. Many gene annotation files are in this format.
* `genomation::gffToGranges()`
* `rtracklayer::impot.gff()`
3. **BAM/SAM**: BAM format is a compressed and indexed tabular file format designed for aligned sequencing reads. SAM is the uncompressed version of the BAM file. We will touch upon BAM files in this chapter. The uncompressed SAM file is similar in spirit to a BED file where you have the basic location of chromosomal location information plus additional columns that are related to the quality of alignment or other relevant information. We will introduce this format in detail later in this chapter.
* `GenomicAlignments::readGAlignments`
* `Rsamtools::scanBam` returns a data frame with columns from a SAM/BAM file.
4. **BigWig**: This is used to for storing scores associated with genomic intervals. It is an indexed format. Similar to BAM, this makes it easier to query and only necessary portions
of the file could be loaded into memory.
* `rtracklayer::import.bw()`
5. **Generic Text files**: This represents any text file with the minimal information of chromosome, start and end coordinates.
* `genomation::readGeneric()`
6. **Tabix/Bcf**: These are tabular file formats indexed and compressed similar to
BAM. The following functions return lists rather than tabular data structures. These
formats are mostly used to store genomic variation data such as SNPs and indels.
* `Rsamtools::scanTabix`
* `Rsamtools::scanBcf`
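For instance, a BED file can be read into a `GRanges` object with either of the BED readers listed above. Below is a short sketch, where the file path is a placeholder to be replaced with your own file.
```
library(genomation)
bedFile="path/to/regions.bed" # placeholder; point this to your BED file
bed.gr=readBed(bedFile) # returns a GRanges object
# the same file read via rtracklayer
bed.gr2=rtracklayer::import.bed(bedFile)
```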
### 6\.1\.3 Finding regions that do/do not overlap with another set of regions
This is one of the most common tasks in genomics. Usually, you have a set of regions that you are interested in and you want to see if they overlap with another set of regions or see how many of them overlap. A good example is transcription factor binding sites determined by [ChIP\-seq](http://en.wikipedia.org/wiki/ChIP-sequencing) experiments. We will introduce ChIP\-seq in more detail in Chapter [9](chipseq.html#chipseq). However, in these types of experiments and the following analysis, one usually ends up with genomic regions that are bound by transcription factors. One of the standard next questions would be to annotate binding sites with genomic annotations such as promoter, exon, intron and/or CpG islands, which are important for gene regulation. Below is a demonstration of how transcription factor binding sites can be annotated using CpG islands. First, we will get the subset of binding sites that overlap with the CpG islands. In this case, binding sites are ChIP\-seq peaks.
In the code snippet below, we read the ChIP\-seq analysis output files using the `genomation::readBroadPeak()` function. This function directly outputs a `GRanges` object. These output files are similar to BED files, where the locations of the predicted binding sites are written out in a tabular format along with some analysis\-related scores and/or P\-values. After reading the files, we can find the subset of peaks that overlap with the CpG islands using the `subsetByOverlaps()` function.
```
library(genomation)
filePathPeaks=system.file("extdata",
"wgEncodeHaibTfbsGm12878Sp1Pcr1xPkRep1.broadPeak.gz",
package="compGenomRData")
# read the peaks from a broadPeak file
pk1.gr=readBroadPeak(filePathPeaks)
# get the peaks that overlap with CpG islands
subsetByOverlaps(pk1.gr,cpgi.gr)
```
```
## GRanges object with 44 ranges and 5 metadata columns:
## seqnames ranges strand | name score signalValue
## <Rle> <IRanges> <Rle> | <character> <integer> <numeric>
## [1] chr21 9825360-9826582 * | peak14562 56 183.11
## [2] chr21 9968469-9968984 * | peak14593 947 3064.92
## [3] chr21 15755368-15755956 * | peak14828 90 291.90
## [4] chr21 19191579-19192525 * | peak14840 290 940.03
## [5] chr21 26979619-26980048 * | peak14854 32 104.67
## ... ... ... ... . ... ... ...
## [40] chr21 46237464-46237809 * | peak15034 32 106.36
## [41] chr21 46707702-46708084 * | peak15037 67 217.02
## [42] chr21 46961552-46961875 * | peak15039 38 124.31
## [43] chr21 47743587-47744125 * | peak15050 353 1141.58
## [44] chr21 47878412-47878891 * | peak15052 104 338.78
## pvalue qvalue
## <integer> <integer>
## [1] -1 -1
## [2] -1 -1
## [3] -1 -1
## [4] -1 -1
## [5] -1 -1
## ... ... ...
## [40] -1 -1
## [41] -1 -1
## [42] -1 -1
## [43] -1 -1
## [44] -1 -1
## -------
## seqinfo: 23 sequences from an unspecified genome; no seqlengths
```
We can also count overlaps using `GenomicRanges::countOverlaps()`. As called below, it returns, for each peak, the number of CpG islands that overlap it; a sketch for counting in the reverse direction (peaks per CpG island) follows the output.
```
counts=countOverlaps(pk1.gr,cpgi.gr)
head(counts)
```
```
## [1] 0 0 0 0 0 0
```
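To count, for each CpG island, the number of peaks that overlap it, we can simply swap the query and subject arguments; a short sketch using the objects defined above:
```
# count the number of peaks overlapping each CpG island
cpgi.counts=countOverlaps(cpgi.gr,pk1.gr)
head(cpgi.counts)
```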
The `GenomicRanges::findOverlaps()` function can be used to see one\-to\-one overlaps between peaks and CpG islands. It returns a `Hits` object holding the index pairs that show which peak overlaps which CpG island.
```
findOverlaps(pk1.gr,cpgi.gr)
```
```
## Hits object with 45 hits and 0 metadata columns:
## queryHits subjectHits
## <integer> <integer>
## [1] 14562 1
## [2] 14593 3
## [3] 14828 8
## [4] 14840 13
## [5] 14854 16
## ... ... ...
## [41] 15034 155
## [42] 15037 166
## [43] 15039 176
## [44] 15050 192
## [45] 15052 200
## -------
## queryLength: 26121 / subjectLength: 205
```
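The matching indices can be pulled out of the `Hits` object with the `queryHits()` and `subjectHits()` accessors, for example to subset the overlapping peaks; a minimal sketch using the objects above:
```
ov=findOverlaps(pk1.gr,cpgi.gr)
# integer indices into pk1.gr and cpgi.gr, respectively
head(queryHits(ov))
head(subjectHits(ov))
# peaks that overlap at least one CpG island
pk1.gr[unique(queryHits(ov))]
```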
Another interesting thing would be to look at the distances to the nearest CpG islands for each peak. In addition, just finding the nearest CpG island could also be interesting. Oftentimes, you will need to find the nearest TSS or gene to your regions of interest, and the code below is handy for doing that using the `nearest()` and `distanceToNearest()` functions. The resulting plot is shown in Figure [6\.2](operations-on-genomic-intervals-with-genomicranges-package.html#fig:findNearest).
```
# find the nearest CpG island to each peak
n.ind=nearest(pk1.gr,cpgi.gr)
# get distance to nearest
dists=distanceToNearest(pk1.gr,cpgi.gr,select="arbitrary")
dists
```
```
## Hits object with 620 hits and 1 metadata column:
## queryHits subjectHits | distance
## <integer> <integer> | <integer>
## [1] 14440 1 | 384188
## [2] 14441 1 | 382968
## [3] 14442 1 | 381052
## [4] 14443 1 | 379311
## [5] 14444 1 | 376978
## ... ... ... . ...
## [616] 15055 205 | 26212
## [617] 15056 205 | 27402
## [618] 15057 205 | 30468
## [619] 15058 205 | 31611
## [620] 15059 205 | 34090
## -------
## queryLength: 26121 / subjectLength: 205
```
```
# histogram of the distances to the nearest CpG island
dist2plot=mcols(dists)[,1]
hist(log10(dist2plot),xlab="log10(dist to nearest CpGi)",
     main="Distances")
```
FIGURE 6\.2: Histogram of distances of peaks to the nearest CpG islands.
6\.2 Dealing with mapped high\-throughput sequencing reads
----------------------------------------------------------
The reads from sequencing machines are usually pre\-processed and aligned to the genome with the help of specific bioinformatics tools. We have introduced the details of general read processing, quality check and alignment methods in Chapter [7](processingReads.html#processingReads). In this section we will deal with mapped reads. Since each mapped read has a start and end position in the genome, mapped reads can be thought of as genomic intervals stored in a file. After mapping, the next task is to quantify the enrichment of those aligned reads in regions of interest. For example, you may want to count how many reads overlap with your promoter set of interest, or quantify how many RNA\-seq reads overlap with exons. This is similar to the operations on genomic intervals described previously. If you can read all your alignments into memory and create a `GRanges` object, you can apply the previously described operations. However, most of the time we cannot read all mapped reads into memory, so we have to use specialized tools to query and quantify alignments on a given set of regions. One of the most common alignment formats is the SAM/BAM format: most aligners will produce SAM/BAM output, or you will be able to convert your specific alignment format to SAM/BAM. The BAM format is a binary version of the human\-readable SAM format. The SAM format has specific columns that contain different kinds of information about the alignment, such as mismatches, qualities etc. (see [http://samtools.sourceforge.net/SAM1\.pdf](http://samtools.sourceforge.net/SAM1.pdf) for the SAM format specification).
### 6\.2\.1 Counting mapped reads for a set of regions
The `Rsamtools` package has functions to query BAM files. The function we will use in the first example is `countBam()`, which takes the BAM file path and a `param` argument as input. The `param` argument takes a `ScanBamParam` object. The object is instantiated using `ScanBamParam()` and contains parameters for scanning the BAM file. The example below is a simple one, where `ScanBamParam()` only includes the regions of interest: promoters on chr21\.
```
promoter.gr=tss.gr
start(promoter.gr)=start(promoter.gr)-1000
end(promoter.gr) =end(promoter.gr)+1000
promoter.gr=promoter.gr[seqnames(promoter.gr)=="chr21"]
library(Rsamtools)
bamfilePath=system.file("extdata",
"wgEncodeHaibTfbsGm12878Sp1Pcr1xAlnRep1.chr21.bam",
package="compGenomRData")
# get reads for regions of interest from the bam file
param <- ScanBamParam(which=promoter.gr)
counts=countBam(bamfilePath, param=param)
```
Alternatively, aligned reads can be read in using the `GenomicAlignments` package (which on this occasion relies on the `Rsamtools` package).
```
library(GenomicAlignments)
alns <- readGAlignments(bamfilePath, param=param)
```
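If needed, the resulting `GAlignments` object can be coerced to a `GRanges` object, so the interval operations described earlier can also be applied to the alignments; a one\-line sketch:
```
# coerce the alignments to plain genomic intervals
aln.gr=granges(alns)
```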
6\.3 Dealing with continuous scores over the genome
---------------------------------------------------
Most high\-throughput data can be viewed as a continuous score over the bases of the genome. In the case of RNA\-seq or ChIP\-seq experiments, the data can be represented as read coverage values per genomic base position. In addition, other information (not necessarily from high\-throughput experiments) can be represented this way. The GC content and conservation scores per base are prime examples of data sets that can be represented as scores over the genome. This sort of data can be stored as a generic text file or in special formats such as Wig (short for wiggle) from UCSC, or bigWig, an indexed binary version of wig files. The bigWig format is great for data that covers a large fraction of the genome with varying scores, because the file is much smaller than a regular text file holding the same information and, since it is indexed, it can be queried more easily.
In R/Bioconductor, continuous data can also be represented in a compressed format called an Rle vector, which stands for run\-length encoded vector. This gives superior memory performance over regular vectors because a run of repeating consecutive values is stored as a single value together with its run length (see Figure [6\.3](dealing-with-continuous-scores-over-the-genome.html#fig:Rle)).
FIGURE 6\.3: Rle encoding explained.
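As a minimal illustration of the encoding, the `Rle()` constructor (from the `S4Vectors` package, which is loaded along with `GenomicRanges`) compresses a plain vector into runs; the expected output is shown as comments.
```
library(GenomicRanges) # Rle() comes from S4Vectors, loaded by this package
x=Rle(c(0,0,0,0,2,2,1))
x
## numeric-Rle of length 7 with 3 runs
##   Lengths: 4 2 1
##   Values : 0 2 1
```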
Typically, for genome\-wide data you will have an `RleList` object, which is a list of Rle vectors per chromosome. You can obtain such vectors by reading the reads in and calling the `coverage()` function from the `GenomicRanges` package. Let’s try that on the above data set.
```
covs=coverage(alns) # get coverage vectors
covs
```
```
## RleList of length 24
## $chr1
## integer-Rle of length 249250621 with 1 run
## Lengths: 249250621
## Values : 0
##
## $chr2
## integer-Rle of length 243199373 with 1 run
## Lengths: 243199373
## Values : 0
##
## $chr3
## integer-Rle of length 198022430 with 1 run
## Lengths: 198022430
## Values : 0
##
## $chr4
## integer-Rle of length 191154276 with 1 run
## Lengths: 191154276
## Values : 0
##
## $chr5
## integer-Rle of length 180915260 with 1 run
## Lengths: 180915260
## Values : 0
##
## ...
## <19 more elements>
```
Alternatively, you can get the coverage from the BAM file directly. Below, we are getting the coverage directly from the BAM file for our previously defined promoters.
```
covs=coverage(bamfilePath, param=param) # get coverage vectors
```
One of the most common ways of storing score data is, as mentioned, the wig or bigWig format. Most of the ENCODE project data can be downloaded in bigWig format. In addition, conservation scores can also be downloaded in the wig/bigWig format. You can import bigWig files into R using the `import()` function from the `rtracklayer` package. However, it is generally not advisable to read the whole bigWig file in memory as was the case with BAM files. Usually, you will be interested in only a fraction of the genome, such as promoters, exons etc. So it is best that you extract the data for those regions and read those into memory rather than the whole file. Below we read a bigWig file only for the bases on promoters. The operation returns a `GRanges` object with the score column which indicates the scores in the bigWig file per genomic region.
```
library(rtracklayer)
# File from ENCODE ChIP-seq tracks
bwFile=system.file("extdata",
"wgEncodeHaibTfbsA549.chr21.bw",
package="compGenomRData")
bw.gr=import(bwFile, which=promoter.gr) # get coverage vectors
bw.gr
```
```
## GRanges object with 9205 ranges and 1 metadata column:
## seqnames ranges strand | score
## <Rle> <IRanges> <Rle> | <numeric>
## [1] chr21 9825456-9825457 * | 1
## [2] chr21 9825458-9825464 * | 2
## [3] chr21 9825465-9825466 * | 4
## [4] chr21 9825467-9825470 * | 5
## [5] chr21 9825471 * | 6
## ... ... ... ... . ...
## [9201] chr21 48055809-48055856 * | 2
## [9202] chr21 48055857-48055858 * | 1
## [9203] chr21 48055872-48055921 * | 1
## [9204] chr21 48055944-48055993 * | 1
## [9205] chr21 48056069-48056118 * | 1
## -------
## seqinfo: 1 sequence from an unspecified genome
```
Following this we can create an `RleList` object from the `GRanges` with the `coverage()` function.
```
cov.bw=coverage(bw.gr,weight = "score")
# or get this directly from the bigWig file as an RleList
cov.bw=import(bwFile, which=promoter.gr,as = "RleList")
```
### 6\.3\.1 Extracting subsections of Rle and RleList objects
Frequently, we will need to extract subsections of the Rle vectors or `RleList` objects.
We will need to do this to visualize that subsection or get some statistics out
of those sections. For example, we could be interested in average coverage per
base for the regions we are interested in. We have to extract those regions
from the `RleList` object and apply summary statistics. Below, we show how to extract
subsections of the `RleList` object. We are extracting promoter regions from the ChIP\-seq
read coverage `RleList`. Following that, we will plot one of the promoter’s coverage values.
```
myViews=Views(cov.bw,as(promoter.gr,"IRangesList")) # get subsets of coverage
# there is a views object for each chromosome
myViews
```
```
## RleViewsList object of length 1:
## $chr21
## Views on a 48129895-length Rle subject
##
## views:
## start end width
## [1] 42218039 42220039 2001 [2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [2] 17441841 17443841 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [3] 17565698 17567698 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [4] 30395937 30397937 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [5] 27542138 27544138 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 1 1 1 ...]
## [6] 27511708 27513708 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [7] 32930290 32932290 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [8] 27542446 27544446 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [9] 28338439 28340439 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## ... ... ... ... ...
## [370] 47517032 47519032 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
## [371] 47648157 47650157 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
## [372] 47603373 47605373 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [373] 47647738 47649738 2001 [2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 ...]
## [374] 47704236 47706236 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [375] 47742785 47744785 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [376] 47881383 47883383 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
## [377] 48054506 48056506 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [378] 48024035 48026035 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
```
```
myViews[[1]]
```
```
## Views on a 48129895-length Rle subject
##
## views:
## start end width
## [1] 42218039 42220039 2001 [2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [2] 17441841 17443841 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [3] 17565698 17567698 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [4] 30395937 30397937 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [5] 27542138 27544138 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 1 1 1 ...]
## [6] 27511708 27513708 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [7] 32930290 32932290 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [8] 27542446 27544446 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [9] 28338439 28340439 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## ... ... ... ... ...
## [370] 47517032 47519032 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
## [371] 47648157 47650157 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
## [372] 47603373 47605373 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [373] 47647738 47649738 2001 [2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 ...]
## [374] 47704236 47706236 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [375] 47742785 47744785 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [376] 47881383 47883383 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
## [377] 48054506 48056506 2001 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
## [378] 48024035 48026035 2001 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...]
```
```
# get the coverage vector from the 5th view and plot
plot(myViews[[1]][[5]],type="l")
```
FIGURE 6\.4: Coverage vector extracted from the RleList via the Views() function is plotted as a line plot.
Next, we are interested in average coverage per base for the promoters using summary
functions that work on the `Views` object.
```
# get the mean of the views
head(
viewMeans(myViews[[1]])
)
```
```
## [1] 0.2258871 0.3498251 1.2243878 0.4997501 2.0904548 0.6996502
```
```
# get the max of the views
head(
viewMaxs(myViews[[1]])
)
```
```
## [1] 2 4 12 4 21 6
```
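Other view summary functions, such as `viewSums()` and `viewMins()`, work the same way; for example, the total coverage per promoter can be obtained as in the short sketch below, using the `myViews` object from above.
```
# get the total coverage (sum of the views)
head(
viewSums(myViews[[1]])
)
```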
6\.4 Genomic intervals with more information: SummarizedExperiment class
------------------------------------------------------------------------
As we have seen, genomic intervals can be contained in a `GRanges` object, which can also hold additional columns associated with each interval. Here you can save information such as read counts or other scores associated with the interval. However, genomic data often have many layers. With `GRanges` you can have a single table associated with the intervals, but what happens if you have many tables, each with its own metadata? In addition, rows and columns might have additional annotation that cannot be contained by row or column names. For these cases, the `SummarizedExperiment` class is ideal. It can hold multi\-layered tabular data associated with each genomic interval, along with the metadata associated with rows and columns, or with each table. For example, the genomic intervals associated with the `SummarizedExperiment` object can be gene locations, and each tabular data structure can be RNA\-seq read counts from a time course experiment. Each table could represent a different condition in which the experiments were performed. The `SummarizedExperiment` class is outlined in the figure below (Figure [6\.5](genomic-intervals-with-more-information-summarizedexperiment-class.html#fig:SumExpOv)).
FIGURE 6\.5: Overview of SummarizedExperiment class and functions. Adapted from the SummarizedExperiment package vignette.
### 6\.4\.1 Create a SummarizedExperiment object
Here we show how to create a basic `SummarizedExperiment` object. We will first
create a matrix of read counts. This matrix will represent read counts from
a series of RNA\-seq experiments from different time points. Following that,
we create a `GRanges` object to represent the locations of the genes, and a table
for column annotations. This will include the names for the columns and any
other value we want to represent. Finally, we will create a `SummarizedExperiment`
object by combining all those pieces.
```
library(SummarizedExperiment)
# simulate an RNA-seq read counts table
nrows <- 200
ncols <- 6
counts <- matrix(runif(nrows * ncols, 1, 1e4), nrows)
# create gene locations
rowRanges <- GRanges(rep(c("chr1", "chr2"), c(50, 150)),
IRanges(floor(runif(200, 1e5, 1e6)), width=100),
strand=sample(c("+", "-"), 200, TRUE),
feature_id=paste0("gene", 1:200))
# create table for the columns
colData <- DataFrame(timepoint=1:6,
row.names=LETTERS[1:6])
# create SummarizedExperiment object
se=SummarizedExperiment(assays=list(counts=counts),
rowRanges=rowRanges, colData=colData)
se
```
```
## class: RangedSummarizedExperiment
## dim: 200 6
## metadata(0):
## assays(1): counts
## rownames: NULL
## rowData names(1): feature_id
## colnames(6): A B ... E F
## colData names(1): timepoint
```
### 6\.4\.2 Subset and manipulate the SummarizedExperiment object
Now that we have a `SummarizedExperiment` object, we can subset it and extract/change
parts of it.
#### 6\.4\.2\.1 Extracting parts of the object
`colData()` and `rowData()` extract the column\-associated and row\-associated
tables, and `metadata()` extracts the meta\-data if there is any associated with the object.
```
colData(se) # extract column associated data
```
```
## DataFrame with 6 rows and 1 column
## timepoint
## <integer>
## A 1
## B 2
## C 3
## D 4
## E 5
## F 6
```
```
rowData(se) # extract row associated data
```
```
## DataFrame with 200 rows and 1 column
## feature_id
## <character>
## 1 gene1
## 2 gene2
## 3 gene3
## 4 gene4
## 5 gene5
## ... ...
## 196 gene196
## 197 gene197
## 198 gene198
## 199 gene199
## 200 gene200
```
To extract the main table or tables that contain the values of interest, such
as read counts, we must use the `assays()` function. This returns a list of
matrix\-like objects (here, a single counts matrix) associated with the object.
```
assays(se) # extract list of assays
```
```
## List of length 1
## names(1): counts
```
You can use names with `$` or `[]` notation to extract specific tables from the list.
```
assays(se)$counts # get the table named "counts"
assays(se)[[1]] # get the first table
```
#### 6\.4\.2\.2 Subsetting
Subsetting is easy using `[ ]` notation. This is similar to the way we
subset data frames or matrices.
```
# subset the first five transcripts and first three samples
se[1:5, 1:3]
```
```
## class: RangedSummarizedExperiment
## dim: 5 3
## metadata(0):
## assays(1): counts
## rownames: NULL
## rowData names(1): feature_id
## colnames(3): A B C
## colData names(1): timepoint
```
One can also use the `$` operator to subset based on `colData()` columns. You can
extract certain samples or in our case, time points.
```
se[, se$timepoint == 1]
```
In addition, as `SummarizedExperiment` objects are `GRanges` objects on steroids,
they support all of the `findOverlaps()` methods and associated functions that
work on `GRanges` objects.
```
# Subset for only rows which are in chr1:100,000-1,100,000
roi <- GRanges(seqnames="chr1", ranges=100000:1100000)
subsetByOverlaps(se, roi)
```
```
## class: RangedSummarizedExperiment
## dim: 50 6
## metadata(0):
## assays(1): counts
## rownames: NULL
## rowData names(1): feature_id
## colnames(6): A B ... E F
## colData names(1): timepoint
```
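Beyond extraction and subsetting, parts of the object can also be changed. For example, a transformed version of the counts can be stored as an additional assay; a minimal sketch using the `se` object created above, where the assay name “logcounts” is our own choice:
```
# store a log-transformed version of the counts as a second assay
assays(se)[["logcounts"]]=log2(assays(se)[["counts"]]+1)
assays(se) # the list now contains two assays: counts and logcounts
```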
6\.5 Visualizing and summarizing genomic intervals
--------------------------------------------------
Data integration and visualization is a cornerstone of genomic data analysis. Below, we will
show different ways of integrating and visualizing genomic intervals. These methods
can be used to visualize large amounts of data in a locus\-specific or multi\-loci
manner.
### 6\.5\.1 Visualizing intervals on a locus of interest
Oftentimes, we will be interested in a particular genomic locus and want to visualize
different genomic datasets over that locus. This is similar to looking at the data
in one of the genome browsers. Below we will display genes, CpG islands and read
coverage from a ChIP\-seq experiment using the `Gviz` package. For the `Gviz` package, we first need to
set the tracks to display. The tracks can be in various formats. They can be R
objects such as `IRanges`, `GRanges` and `data.frame`, or they can be in flat file formats
such as bigWig, BED, and BAM. After the tracks are set, we can display them with the
`plotTracks()` function; the resulting plot is shown in Figure [6\.6](visualizing-and-summarizing-genomic-intervals.html#fig:GvizExchp6).
```
library(Gviz)
# set tracks to display
# set CpG island track
cpgi.track=AnnotationTrack(cpgi.gr,
name = "CpG")
# set gene track
# we will get this from EBI Biomart webservice
gene.track <- BiomartGeneRegionTrack(genome = "hg19",
chromosome = "chr21",
start = 27698681, end = 28083310,
name = "ENSEMBL")
# set track for ChIP-seq coverage
chipseqFile=system.file("extdata",
"wgEncodeHaibTfbsA549.chr21.bw",
package="compGenomRData")
cov.track=DataTrack(chipseqFile,type = "l",
name="coverage")
# call the display function plotTracks
track.list=list(cpgi.track,gene.track,cov.track)
plotTracks(track.list,from=27698681,to=28083310,chromosome="chr21")
```
FIGURE 6\.6: Genomic data tracks visualized using the Gviz functions.
### 6\.5\.2 Summaries of genomic intervals on multiple loci
Looking at data one region at a time could be inefficient. One can summarize
different data sets over thousands of regions of interest and identify patterns.
These summaries can include different data types such as motifs, read coverage
and other scores associated with genomic intervals. The `genomation` package can
summarize and help identify patterns in the datasets. The datasets can have
different kinds of information and multiple file types can be used such as BED, GFF, BAM and bigWig. We will look at H3K4me3 ChIP\-seq and DNAse\-seq signals from the H1 embryonic stem cell line. H3K4me3 is usually associated with promoters and regions with high DNAse\-seq signal are associated with accessible regions, which means mostly regulatory regions. We will summarize those datasets around the transcription start sites (TSS) of genes on chromosome 20 of the human hg19 assembly. We will first read the genes and extract the region around the TSS, 500bp upstream and downstream. We will then create a matrix of ChIP\-seq scores for those regions. Each row will represent a region around a specific TSS and columns will be the scores per base. We will then plot average enrichment values around the TSS of genes on chromosome 20\.
```
# get transcription start sites on chr20
library(genomation)
transcriptFile=system.file("extdata",
"refseq.hg19.chr20.bed",
package="compGenomRData")
feat=readTranscriptFeatures(transcriptFile,
remove.unusual = TRUE,
up.flank = 500, down.flank = 500)
prom=feat$promoters # get promoters from the features
# get for H3K4me3 values around TSSes
# we use strand.aware=TRUE so - strands will
# be reversed
H3K4me3File=system.file("extdata",
"H1.ESC.H3K4me3.chr20.bw",
package="compGenomRData")
sm=ScoreMatrix(H3K4me3File,prom,
type="bigWig",strand.aware = TRUE)
# look for the average enrichment
plotMeta(sm, profile.names = "H3K4me3", xcoords = c(-500,500),
ylab="H3K4me3 enrichment",dispersion = "se",
xlab="bases around TSS")
```
FIGURE 6\.7: Meta\-region plot using genomation.
The resulting plot is shown in Figure [6\.7](visualizing-and-summarizing-genomic-intervals.html#fig:metaRegionchp6). The pattern we see is expected: there is a dip just around the TSS, and the signal is more intense downstream of the TSS.
We can also plot a heatmap where each row is a region around the TSS, color\-coded by enrichment. This can show us not only the general pattern, as in the meta\-region plot, but also how many of the regions produce such a pattern. The `heatMatrix()` function shown below achieves that. The resulting heatmap plot is shown in Figure [6\.8](visualizing-and-summarizing-genomic-intervals.html#fig:heatmatrix1Chp6).
```
heatMatrix(sm,order=TRUE,xcoords = c(-500,500),
xlab="bases around TSS")
```
FIGURE 6\.8: Heatmap of enrichment of H3K4me3 around the TSS.
Here we saw that about half of the regions do not have any signal. In addition, it seems the multi\-modal profile we observed earlier is more complicated: certain regions seem to have signal on both sides of the TSS, whereas others have signal mostly on the downstream side.
Normally, there would be more than one experiment, or we can integrate datasets from public repositories. In that case, we can see how the different signals look in the regions we are interested in. Now, we will also use DNAse\-seq data, create a list of matrices with our datasets, and plot the average profiles of the signals from both datasets. The resulting meta\-region plot is shown in Figure [6\.9](visualizing-and-summarizing-genomic-intervals.html#fig:heatmatrixlistchp6).
```
DNAseFile=system.file("extdata",
"H1.ESC.dnase.chr20.bw",
package="compGenomRData")
sml=ScoreMatrixList(c(H3K4me3=H3K4me3File,
DNAse=DNAseFile),prom,
type="bigWig",strand.aware = TRUE)
plotMeta(sml)
```
FIGURE 6\.9: Average profiles of DNAse and H3K4me3 ChIP\-seq.
We should now look at the heatmaps side by side, and we should also cluster the rows
based on their similarity. We will be using `multiHeatMatrix()` since we have multiple `ScoreMatrix` objects in the list. In this case, we will also use the `winsorize` argument to limit extreme values:
every score above the 95th percentile will be set equal to the value of the 95th percentile. In addition, `heatMatrix()` and `multiHeatMatrix()` can cluster the rows.
Below, we will be using k\-means clustering with 3 clusters.
```
set.seed(1029)
multiHeatMatrix(sml,order=TRUE,xcoords = c(-500,500),
xlab="bases around TSS",winsorize = c(0,95),
matrix.main = c("H3K4me3","DNAse"),
column.scale=TRUE,
clustfun=function(x) kmeans(x, centers=3)$cluster)
```
FIGURE 6\.10: Heatmaps of H3K4me3 and DNAse data.
The resulting heatmaps are shown in Figure [6\.10](visualizing-and-summarizing-genomic-intervals.html#fig:multiHeatMatrix). These plots reveal a different picture than we have observed before. Almost half of the promoters have no signal for DNAse or H3K4me3; these regions are probably not active, and the associated genes are not expressed. For regions with the H3K4me3 signal, there are two major patterns: in one pattern, both downstream and upstream of the TSS are enriched, and in the other, mostly the downstream side of the TSS is enriched.
### 6\.5\.3 Making karyograms and circos plots
Chromosomal karyograms and circos plots are useful for displaying data over the
whole genome or over chromosomes of interest, although the information that can be
displayed over these large regions is usually not very detailed, and only large trends
can be discerned by eye, such as loss of methylation in large regions or genome\-wide.
Below, we show how to use the `ggbio` package for plotting.
This package has a slightly different syntax than base graphics. The syntax follows
“grammar of graphics” logic, and depends on the `ggplot2` package we introduced in Chapter [2](Rintro.html#Rintro). It is
a deconstructed way of thinking about the plot. You add your data and apply mappings
and transformations in order to achieve the final output. In `ggbio`, things are
relatively easy since a high\-level function, the `autoplot` function, will recognize
most of the datatypes and guess the most appropriate plot type. You can change
its behavior by applying low\-level functions. We first get the sizes of chromosomes
and make a karyogram template. The empty karyogram is shown in Figure [6\.11](visualizing-and-summarizing-genomic-intervals.html#fig:karyo1).
```
library(ggbio)
data(ideoCyto, package = "biovizBase")
p <- autoplot(seqinfo(ideoCyto$hg19), layout = "karyogram")
p
```
FIGURE 6\.11: Karyogram example.
Next, we would like to plot CpG islands on this karyogram. We simply do this
by adding a layer with the `layout_karyogram()` function. The resulting karyogram is shown in Figure [6\.12](visualizing-and-summarizing-genomic-intervals.html#fig:karyo2).
```
# read CpG islands from a generic text file
CpGiFile=system.file("extdata",
"CpGi.hg19.table.txt",
package="compGenomRData")
cpgi.gr=genomation::readGeneric(CpGiFile,
chr = 1, start = 2, end = 3,header=TRUE,
keep.all.metadata =TRUE,remove.unusual=TRUE )
p + layout_karyogram(cpgi.gr)
```
FIGURE 6\.12: Karyogram of CpG islands over the human genome.
Next, we would like to plot some data over the chromosomes. This could be the ChIP\-seq signal or any other signal over the genome; we will use the CpG island scores from the data set we read earlier. We will plot points proportional to the “obsExp” column in the data set. We use the `ylim` argument to squish the chromosomal rectangles and plot on top of them. The `aes` argument defines how the data is mapped to geometry. In this case,
the argument indicates that the points will have an x coordinate from the CpG island start positions and a y coordinate from the obsExp score of the CpG islands. The resulting karyogram is shown in Figure [6\.13](visualizing-and-summarizing-genomic-intervals.html#fig:karyoCpG).
```
p + layout_karyogram(cpgi.gr, aes(x= start, y = obsExp),
geom="point",
ylim = c(2,50), color = "red",
size=0.1,rect.height=1)
```
FIGURE 6\.13: Karyogram of CpG islands and their observed/expected scores over the human genome.
Another way to depict regions or quantitative signals on the chromosomes is circos plots. These are circular plots usually used for showing chromosomal rearrangements, but can also be used for depicting signals. The `ggbio` package can produce all kinds of circos plots. Below, we will show how to use that for our CpG island score example, and the resulting plot is shown in Figure [6\.14](visualizing-and-summarizing-genomic-intervals.html#fig:circosCpG).
```
# set the chromosomes in a circle
# color set to white to look transparent
p <- ggplot() + layout_circle(ideoCyto$hg19, geom = "ideo", fill = "white",
colour="white",cytoband = TRUE,
radius = 39, trackWidth = 2)
# plot the scores as points
p <- p + layout_circle(cpgi.gr, geom = "point", grid=TRUE,
size = 0.01, aes(y = obsExp),color="red",
radius = 42, trackWidth = 10)
# set the chromosome names
p <- p + layout_circle(as(seqinfo(ideoCyto$hg19),"GRanges"),
geom = "text", aes(label = seqnames),
vjust = 0, radius = 55, trackWidth = 7,
size=3)
# display the plot
p
```
FIGURE 6\.14: Circos plot for CpG island scores.
### 6\.5\.1 Visualizing intervals on a locus of interest
Oftentimes, we will be interested in a particular genomic locus and try to visualize
different genomic datasets over that locus. This is similar to looking at the data
over one of the genome browsers. Below we will display genes, GpG islands and read
coverage from a ChIP\-seq experiment using the `Gviz` package. For the `Gviz` package, we first need to
set the tracks to display. The tracks can be in various formats. They can be R
objects such as `IRanges`,`GRanges` and `data.frame`, or they can be in flat file formats
such as bigWig, BED, and BAM. After the tracks are set, we can display them with the
`plotTracks` function, the resulting plot is shown in Figure [6\.6](visualizing-and-summarizing-genomic-intervals.html#fig:GvizExchp6).
```
library(Gviz)
# set tracks to display
# set CpG island track
cpgi.track=AnnotationTrack(cpgi.gr,
name = "CpG")
# set gene track
# we will get this from EBI Biomart webservice
gene.track <- BiomartGeneRegionTrack(genome = "hg19",
chromosome = "chr21",
start = 27698681, end = 28083310,
name = "ENSEMBL")
# set track for ChIP-seq coverage
chipseqFile=system.file("extdata",
"wgEncodeHaibTfbsA549.chr21.bw",
package="compGenomRData")
cov.track=DataTrack(chipseqFile,type = "l",
name="coverage")
# call the display function plotTracks
track.list=list(cpgi.track,gene.track,cov.track)
plotTracks(track.list,from=27698681,to=28083310,chromsome="chr21")
```
FIGURE 6\.6: Genomic data tracks visualized using the Gviz functions.
### 6\.5\.2 Summaries of genomic intervals on multiple loci
Looking at data one region at a time could be inefficient. One can summarize
different data sets over thousands of regions of interest and identify patterns.
These summaries can include different data types such as motifs, read coverage
and other scores associated with genomic intervals. The `genomation` package can
summarize and help identify patterns in the datasets. The datasets can have
different kinds of information and multiple file types can be used such as BED, GFF, BAM and bigWig. We will look at H3K4me3 ChIP\-seq and DNAse\-seq signals from the H1 embryonic stem cell line. H3K4me3 is usually associated with promoters and regions with high DNAse\-seq signal are associated with accessible regions, which means mostly regulatory regions. We will summarize those datasets around the transcription start sites (TSS) of genes on chromosome 20 of the human hg19 assembly. We will first read the genes and extract the region around the TSS, 500bp upstream and downstream. We will then create a matrix of ChIP\-seq scores for those regions. Each row will represent a region around a specific TSS and columns will be the scores per base. We will then plot average enrichment values around the TSS of genes on chromosome 20\.
```
# get transcription start sites on chr20
library(genomation)
transcriptFile=system.file("extdata",
"refseq.hg19.chr20.bed",
package="compGenomRData")
feat=readTranscriptFeatures(transcriptFile,
remove.unusual = TRUE,
up.flank = 500, down.flank = 500)
prom=feat$promoters # get promoters from the features
# get for H3K4me3 values around TSSes
# we use strand.aware=TRUE so - strands will
# be reversed
H3K4me3File=system.file("extdata",
"H1.ESC.H3K4me3.chr20.bw",
package="compGenomRData")
sm=ScoreMatrix(H3K4me3File,prom,
type="bigWig",strand.aware = TRUE)
# look for the average enrichment
plotMeta(sm, profile.names = "H3K4me3", xcoords = c(-500,500),
ylab="H3K4me3 enrichment",dispersion = "se",
xlab="bases around TSS")
```
FIGURE 6\.7: Meta\-region plot using genomation.
The resulting plot is shown in Figure [6\.7](visualizing-and-summarizing-genomic-intervals.html#fig:metaRegionchp6). The pattern we see is expected, there is a dip just around TSS and the signal is more
intense downstream of the TSS.
We can also plot a heatmap where each row is a
region around the TSS and color coded by enrichment. This can show us not only the
general pattern, as in the meta\-region
plot, but also how many of the regions produce such a pattern. The `heatMatrix()` function shown below achieves that. The resulting heatmap plot is shown in Figure [6\.8](visualizing-and-summarizing-genomic-intervals.html#fig:heatmatrix1Chp6).
```
heatMatrix(sm,order=TRUE,xcoords = c(-500,500),
xlab="bases around TSS")
```
FIGURE 6\.8: Heatmap of enrichment of H3K4me2 around the TSS.
Here we saw that about half of the regions do not have any signal. In addition it seems the multi\-modal profile we have observed earlier is more complicated. Certain regions seem to have signal on both sides of the TSS, whereas others have signal mostly on the downstream side.
Normally, there would be more than one experiment or we can integrate datasets from
public repositories. In this case, we can see how different signals look in the regions we are interested in. Now, we will also use DNAse\-seq data and create a list of matrices with our datasets and plot the average profile of the signals from both datasets. The resulting meta\-region plot is shown in Figure [6\.9](visualizing-and-summarizing-genomic-intervals.html#fig:heatmatrixlistchp6).
```
DNAseFile=system.file("extdata",
"H1.ESC.dnase.chr20.bw",
package="compGenomRData")
sml=ScoreMatrixList(c(H3K4me3=H3K4me3File,
DNAse=DNAseFile),prom,
type="bigWig",strand.aware = TRUE)
plotMeta(sml)
```
FIGURE 6\.9: Average profiles of DNAse and H3K4me3 ChIP\-seq.
We should now look at the heatmaps side by side and we should also cluster the rows
based on their similarity. We will be using `multiHeatMatrix` since we have multiple `ScoreMatrix` objects in the list. In this case, we will also use the `winsorize` argument to limit extreme values,
every score above 95th percentile will be equalized the value of the 95th percentile. In addition, `heatMatrix` and `multiHeatMatrix` can cluster the rows.
Below, we will be using k\-means clustering with 3 clusters.
```
set.seed(1029)
multiHeatMatrix(sml,order=TRUE,xcoords = c(-500,500),
xlab="bases around TSS",winsorize = c(0,95),
matrix.main = c("H3K4me3","DNAse"),
column.scale=TRUE,
clustfun=function(x) kmeans(x, centers=3)$cluster)
```
FIGURE 6\.10: Heatmaps of H3K4me3 and DNAse data.
The resulting heatmaps are shown in Figure [6\.10](visualizing-and-summarizing-genomic-intervals.html#fig:multiHeatMatrix). These plots revealed a different picture than we have observed before. Almost half of the promoters have no signal for DNAse or H3K4me3; these regions are probably not active and associated genes are not expressed. For regions with the H3K4me3 signal, there are two major patterns: one pattern where both downstream and upstream of the TSS are enriched, and on the other pattern, mostly downstream of the TSS is enriched.
### 6\.5\.3 Making karyograms and circos plots
Chromosomal karyograms and circos plots are beneficial for displaying data over the
whole genome of chromosomes of interest, although the information that can be
displayed over these large regions are usually not very clear and only large trends
can be discerned by eye, such as loss of methylation in large regions or genome\-wide.
Below, we show how to use the `ggbio` package for plotting.
This package has a slightly different syntax than base graphics. The syntax follows
“grammar of graphics” logic, and depends on the `ggplot2` package we introduced in Chapter [2](Rintro.html#Rintro). It is
a deconstructed way of thinking about the plot. You add your data and apply mappings
and transformations in order to achieve the final output. In `ggbio`, things are
relatively easy since a high\-level function, the `autoplot` function, will recognize
most of the datatypes and guess the most appropriate plot type. You can change
its behavior by applying low\-level functions. We first get the sizes of chromosomes
and make a karyogram template. The empty karyogram is shown in Figure [6\.11](visualizing-and-summarizing-genomic-intervals.html#fig:karyo1).
```
library(ggbio)
data(ideoCyto, package = "biovizBase")
p <- autoplot(seqinfo(ideoCyto$hg19), layout = "karyogram")
p
```
FIGURE 6\.11: Karyogram example.
Next, we would like to plot CpG islands on this karyogram. We simply do this
by adding a layer with the `layout_karyogram()` function. The resulting karyogram is shown in Figure [6\.12](visualizing-and-summarizing-genomic-intervals.html#fig:karyo2).
```
# read CpG islands from a generic text file
CpGiFile=filePath=system.file("extdata",
"CpGi.hg19.table.txt",
package="compGenomRData")
cpgi.gr=genomation::readGeneric(CpGiFile,
chr = 1, start = 2, end = 3,header=TRUE,
keep.all.metadata =TRUE,remove.unusual=TRUE )
p + layout_karyogram(cpgi.gr)
```
FIGURE 6\.12: Karyogram of CpG islands over the human genome.
Next, we would like to plot some data over the chromosomes. This could be the ChIP\-seq
signal
or any other signal over the genome; we will use CpG island scores from the data set
we read earlier. We will plot points whose heights are proportional to the “obsExp” column in the data set. We use the `ylim` argument to squish the chromosomal rectangles and plot on top of them. The `aes` argument defines how the data is mapped to geometry. In this case,
the argument indicates that the points will have x coordinates from the CpG island start positions and y coordinates from the obsExp scores of the CpG islands. The resulting karyogram is shown in Figure [6\.13](visualizing-and-summarizing-genomic-intervals.html#fig:karyoCpG).
```
p + layout_karyogram(cpgi.gr, aes(x= start, y = obsExp),
geom="point",
ylim = c(2,50), color = "red",
size=0.1,rect.height=1)
```
FIGURE 6\.13: Karyogram of CpG islands and their observed/expected scores over the human genome.
Another way to depict regions or quantitative signals on the chromosomes is with circos plots. These are circular plots usually used for showing chromosomal rearrangements, but they can also be used for depicting signals. The `ggbio` package can produce all kinds of circos plots. Below, we show how to use it for our CpG island score example, and the resulting plot is shown in Figure [6\.14](visualizing-and-summarizing-genomic-intervals.html#fig:circosCpG).
```
# set the chromosomes in a circle
# color set to white to look transparent
p <- ggplot() + layout_circle(ideoCyto$hg19, geom = "ideo", fill = "white",
colour="white",cytoband = TRUE,
radius = 39, trackWidth = 2)
# plot the scores as points
p <- p + layout_circle(cpgi.gr, geom = "point", grid=TRUE,
size = 0.01, aes(y = obsExp),color="red",
radius = 42, trackWidth = 10)
# set the chromosome names
p <- p + layout_circle(as(seqinfo(ideoCyto$hg19),"GRanges"),
geom = "text", aes(label = seqnames),
vjust = 0, radius = 55, trackWidth = 7,
size=3)
# display the plot
p
```
FIGURE 6\.14: Circos plot for CpG island scores.
6\.6 Exercises
--------------
The data for the exercises is within the `compGenomRData` package.
Run the following to see the data files.
```
dir(system.file("extdata",
package="compGenomRData"))
```
You will need some of those files to complete the exercises.
### 6\.6\.1 Operations on genomic intervals with the `GenomicRanges` package
1. Create a `GRanges` object using the information in the table below:\[Difficulty: **Beginner**]
| chr | start | end | strand | score |
| --- | --- | --- | --- | --- |
| chr1 | 10000 | 10300 | \+ | 10 |
| chr1 | 11100 | 11500 | \- | 20 |
| chr2 | 20000 | 20030 | \+ | 15 |
2. Use the `start()`, `end()`, `strand()`, `seqnames()` and `width()` functions on the `GRanges`
object you created. Figure out what they are doing. Can you get a subset of the `GRanges` object for intervals that are only on the \+ strand? If you can do that, try getting intervals that are on chr1\. *HINT:* `GRanges` objects can be subset using the `[ ]` operator, similar to data frames, but you may need
to use `start()`, `end()`, `strand()` and `seqnames()` within the `[]`. \[Difficulty: **Beginner/Intermediate**]
3. Import mouse (mm9 assembly) CpG islands and RefSeq transcripts for chr12 from the UCSC browser as `GRanges` objects using `rtracklayer` functions. HINT: Check chapter content and modify the code there as necessary. If that somehow does not work, go to the UCSC browser and download it as a BED file. The track name for Refseq genes is “RefSeq Genes” and the table name is “refGene”. \[Difficulty: **Beginner/Intermediate**]
4. Following from the exercise above, get the promoters of Refseq transcripts (\-1000bp and \+1000 bp of the TSS) and calculate what percentage of them overlap with CpG islands. HINT: You have to get the promoter coordinates and use the `findOverlaps()` or `subsetByOverlaps()` from the `GenomicRanges` package. To get promoters, type `?promoters` on the R console and see how to use that function to get promoters or calculate their coordinates as shown in the chapter. \[Difficulty: **Beginner/Intermediate**]
5. Plot the distribution of CpG island lengths for CpG islands that overlap with the
promoters. \[Difficulty: **Beginner/Intermediate**]
6. Get canonical peaks for SP1 (peaks that are in both replicates) on chr21\. Peaks for each replicate are located in the `wgEncodeHaibTfbsGm12878Sp1Pcr1xPkRep1.broadPeak.gz` and `wgEncodeHaibTfbsGm12878Sp1Pcr1xPkRep2.broadPeak.gz` files. **HINT**: You need to use `findOverlaps()` or `subsetByOverlaps()` to get the subset of peaks that occur in both replicates (canonical peaks). You can try to read “…broadPeak.gz” files using the `genomation::readBroadPeak()` function; broadPeak is just an extended BED format. In addition, you can try to use the `coverage()` and `slice()` functions to get more precise canonical peak locations. \[Difficulty: **Intermediate/Advanced**]
### 6\.6\.2 Dealing with mapped high\-throughput sequencing reads
1. Count the reads overlapping with canonical SP1 peaks using the BAM file for one of the replicates. The following file in the `compGenomRData` package contains the alignments for SP1 ChIP\-seq reads: `wgEncodeHaibTfbsGm12878Sp1Pcr1xAlnRep1.chr21.bam`. **HINT**: Use functions from the `GenomicAlignments` package. \[Difficulty: **Beginner/Intermediate**]
### 6\.6\.3 Dealing with contiguous scores over the genome
1. Extract the `Views` object for the promoters on chr20 from the `H1.ESC.H3K4me1.chr20.bw` file available in the `compGenomRData` package. Plot the first “View” as a line plot. **HINT**: See the code in the relevant section in the chapter and adapt the code from there. \[Difficulty: **Beginner/Intermediate**]
2. Make a histogram of the maximum signal for the Views in the object you extracted above. You can use any of the view summary functions or use `lapply()` and write your own summary function. \[Difficulty: **Beginner/Intermediate**]
3. Get the genomic positions of maximum signal in each view and make a `GRanges` object. **HINT**: See the `?viewRangeMaxs` help page. Try to make a `GRanges` object out of the returned object. \[Difficulty: **Intermediate**]
### 6\.6\.4 Visualizing and summarizing genomic intervals
1. Extract \-500,\+500 bp regions around the TSSes on chr21; there are refseq files for the hg19 human genome assembly in the `compGenomRData` package. Use SP1 ChIP\-seq data in the `compGenomRData` package, and access the file path via the `system.file()` function. The file name is:
`wgEncodeHaibTfbsGm12878Sp1Pcr1xAlnRep1.chr21.bam`. Create an average profile of read coverage around the TSSes. Following that, visualize the read coverage with a heatmap. **HINT**: All of these are possible using the `genomation` package functions. Check `help(ScoreMatrix)` to see how you can use bam files. As an example here is how you can get the file path to refseq annotation on chr21\. \[Difficulty: **Intermediate/Advanced**]
```
transcriptFilechr21=system.file("extdata",
"refseq.hg19.chr21.bed",
package="compGenomRData")
```
2. Extract \-500,\+500 bp regions around the TSSes on chr20\. Use H3K4me3 (`H1.ESC.H3K4me3.chr20.bw`) and H3K27ac (`H1.ESC.H3K27ac.chr20.bw`) ChIP\-seq enrichment data in the `compGenomRData` package and create heatmaps and average signal profiles for regions around the TSSes.\[Difficulty: **Intermediate/Advanced**]
3. Download P300 ChIP\-seq peaks data from the UCSC browser. The peaks are locations where P300 binds. The P300 binding marks enhancer regions in the genome. (**HINT**: group: “regulation”, track: “Txn Factor ChIP”, table: “wgEncodeRegTfbsClusteredV3”; you need to filter the rows for the “EP300” name.) Check enrichment of H3K4me3, H3K27ac and DNase\-seq (`H1.ESC.dnase.chr20.bw`) experiments on chr20, on and around the P300 binding sites, using data from the `compGenomRData` package. Make multi\-heatmaps and metaplots. What is different from the TSS profiles? \[Difficulty: **Advanced**]
4. Cluster the rows of multi\-heatmaps for the task above. Are there obvious clusters? **HINT**: Check arguments of the `multiHeatMatrix()` function. \[Difficulty: **Advanced**]
5. Visualize one of the \-500,\+500 bp regions around the TSS using `Gviz` functions. You should visualize both H3K4me3 and H3K27ac and the gene models. \[Difficulty: **Advanced**]
7\.2 Quality check on sequencing reads
--------------------------------------
The sequencing technologies usually produce basecalls with varying quality. In addition, there could be sample\-specific issues in your sequencing run, such as adapter contamination. It is standard procedure to check the quality of the reads and identify problems before doing further analysis. The quality checks, and the decisions you make based on them for the downstream analysis, can influence the outcome of your project.
Below, we will walk you through the quality check steps using the [`Rqc`](https://bioconductor.org/packages/release/bioc/html/Rqc.html) package. First, we need to feed fastq files to the `rqc()` function and obtain an object with sequence quality\-related results. We are using example fastq files from the `ShortRead` package.
```
library(Rqc)
folder = system.file(package="ShortRead", "extdata/E-MTAB-1147")
# feeds fastq.qz files in "folder" to quality check function
qcRes=rqc(path = folder, pattern = ".fastq.gz", openBrowser=FALSE)
```
### 7\.2\.1 Sequence quality per base/cycle
Now that we have the `qcRes` object, we can plot various sequence quality metrics for our fastq files. We will first plot “sequence quality per base/cycle”. This plot, shown in Figure [7\.3](quality-check-on-sequencing-reads.html#fig:CycleQualityBoxPlot), depicts the quality scores across all bases at each position in the reads.
```
rqcCycleQualityBoxPlot(qcRes)
```
FIGURE 7\.3: Per base sequence quality boxplot.
In our case, the x\-axis in the plot is labeled as “cycle”. This is because in each sequencing “cycle” a fluorescently labeled nucleotide is added to complement the template sequence, and the sequencing machine identifies which nucleotide is added. Therefore, cycles correspond to bases/nucleotides along the read, and the number of cycles is equivalent to the read length.
Read quality can degrade towards the ends of the reads. Looking at the quality distribution over base positions can help us decide whether to trim the ends of the reads. A good sample will have median quality scores per base above 28\. If scores drop below 20 towards the ends, you should consider trimming the reads.
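If you want to look at the numbers behind such a plot yourself, a minimal sketch using `ShortRead` could look like the following; it uses the same quality\-to\-matrix conversion that we will use later in this chapter.
```
library(ShortRead)
# one of the example fastq files shipped with ShortRead
fqFile <- system.file("extdata/E-MTAB-1147",
                      "ERR127302_1_subset.fastq.gz",
                      package = "ShortRead")
fq <- readFastq(fqFile)
# convert per-base quality encodings to a reads-by-cycle matrix of Phred scores
qPerBase <- as(quality(fq), "matrix")
# mean Phred score at each cycle; low values at the end suggest trimming
round(colMeans(qPerBase, na.rm = TRUE), 1)
```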
### 7\.2\.2 Sequence content per base/cycle
Per\-base sequence content shows nucleotide proportions for each position. In a random sequencing library there should be no nucleotide bias and the lines should be almost parallel with each other. The code below shows how to get this plot. The resulting plot is shown in Figure [7\.4](quality-check-on-sequencing-reads.html#fig:baseCallFreq).
```
rqcCycleBaseCallsLinePlot(qcRes)
```
FIGURE 7\.4: Percentage of nucleotide bases per position across different FASTQ files.
However, some types of sequencing libraries can produce a biased sequence composition. For example, in RNA\-seq, it is common to have bias at the beginning of the reads. This happens because of random primers annealing to the start of reads during RNA\-seq library preparation. These primers are not truly random, which leads to a variation at the beginning of the reads. Although RNA\-seq experiments will usually have these biases, this does not usually affect our ability to measure gene expression.
In addition, some libraries are inherently biased in their sequence composition. For example, in bisulfite sequencing experiments, most of the cytosines will be converted to thymines. This will create a difference in C and T base compositions over the read; however, this type of difference is normal for bisulfite sequencing experiments.
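To compute such per\-cycle base proportions directly, a short sketch with `ShortRead` might look like this (the file path is the example data used above):
```
library(ShortRead)
fqFile <- system.file("extdata/E-MTAB-1147",
                      "ERR127302_1_subset.fastq.gz",
                      package = "ShortRead")
fq <- readFastq(fqFile)
# nucleotide counts per cycle for the read sequences
baseCounts <- alphabetByCycle(sread(fq))
# proportions of A, C, G and T at each cycle
baseProp <- prop.table(baseCounts[c("A", "C", "G", "T"), ], margin = 2)
round(baseProp[, 1:5], 2) # first five cycles
```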
### 7\.2\.3 Read frequency plot
This plot shows the degree of duplication for every read in the library. We show how to get this plot in the code snippet below, and the resulting plot is in Figure [7\.5](quality-check-on-sequencing-reads.html#fig:ReadFrequencyPlot). A high level of duplication, i.e. many non\-unique reads, is likely to indicate an enrichment bias. Technical duplicates arising from PCR artifacts could cause this. PCR is a common step in library preparation which creates many copies of the sequence fragment. In RNA\-seq data, the non\-unique read proportion can reach more than 20%. However, these duplications may stem from genes simply being expressed at high levels. This means that there will be many copies of transcripts and many copies of the same fragment. Since we cannot be sure whether these duplicated reads are due to PCR bias or to high transcription, we should not remove duplicated reads in RNA\-seq analysis. However, in ChIP\-seq experiments duplicated reads are more likely to be due to PCR bias.
```
rqcReadFrequencyPlot(qcRes)
```
FIGURE 7\.5: The percent of different duplication levels in FASTQ files. Most of the reads in all libraries have only one copy in this case.
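The underlying duplication statistics are easy to compute by hand; here is a minimal sketch on the example file used above:
```
library(ShortRead)
fqFile <- system.file("extdata/E-MTAB-1147",
                      "ERR127302_1_subset.fastq.gz",
                      package = "ShortRead")
fq <- readFastq(fqFile)
# copy number of every distinct read sequence
copies <- table(as.character(sread(fq)))
# how many distinct sequences occur once, twice, three times, ...
dupLevels <- table(copies)
round(prop.table(dupLevels), 3)
```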
### 7\.2\.4 Other quality metrics and QC tools
Over\-represented k\-mers along the reads can be an additional check. If there are such sequences, they may point to adapter contamination and should be trimmed. Adapters are known sequences that are added to the ends of the reads. This kind of contamination can also be visible in the “sequence content per base” plots. In addition, if you know the adapter sequences, you can match them to the ends of the reads and trim them. The most popular tool for sequencing quality control is the fastQC tool (Andrews [2010](#ref-noauthor_babraham_nodate)), which is written in Java. It produces the plots that we described above, in addition to k\-mer overrepresentation and adapter overrepresentation plots. The R package [fastqcr](https://cran.r-project.org/web/packages/fastqcr/index.html) can run this Java tool and produce R\-based plots and reports. This package simply calls the Java tool and parses its results. Below, we show how to do that.
```
library(fastqcr)
# install the FASTQC java tool
fastqc_install()
# call FASTQC and record the resulting statistics
# in fastqc_results folder
fastqc(fq.dir = folder,qc.dir = "fastqc_results")
```
Now that we have run FastQC on our fastq files, we can read the results into R and construct plots or reports. The `qc_report()` function can create an Rmarkdown\-based report from the FastQC output.
```
# view the report rendered by R functions
qc_report(qc.path="fastqc_results",
result.file="reportFile", preview = TRUE)
```
Alternatively, we can read the results with `qc_read()` and make specific plots we are interested in with `qc_plot()`.
```
# read QC results to R for one fastq file
qc <- qc_read("fastqc_results/ERR127302_1_subset_fastqc.zip")
# make plots, example "Per base sequence quality plot"
qc_plot(qc, "Per base sequence quality")
```
Apart from this, the bioconductor packages Rqc (de Souza, Carvalho, and Lopes\-Cendes [2018](#ref-Rqc)) (see `Rqc::rqcReport` function), QuasR (Gaidatzis, Lerch, Hahne, et al. [2015](#ref-gaidatzis_quasr:_2015)) (see `QuasR::qQCReport` function), systemPipeR (Backman and Girke [2016](#ref-backman_systempiper:_2016)) (see `systemPipeR::seeFastq` function), and ShortRead (Morgan, Anders, Lawrence, et al. [2009](#ref-morgan_shortread:_2009)) (see `ShortRead::report` function) can all generate quality reports in a similar fashion to FastQC with some differences in plot content and number.
7\.3 Filtering and trimming reads
---------------------------------
Based on the results of the quality check, you may want to trim or filter the reads. The quality check might have shown the number of reads that have low quality scores. These reads will probably not align very well because of potential mistakes in base calling, or they may align to the wrong places in the genome. Therefore, you may want to remove these reads from your fastq file. Another potential scenario is that parts of your reads need to be trimmed in order to align the reads. In some cases, adapters will be present on either side of the read; in other cases technical errors will lead to decreasing base quality towards the ends of the reads. In both cases, the problematic portion of the read should be trimmed so that the read can align, or align better, to the genome. We will show how to use the `QuasR` package to trim the reads. Other packages such as `ShortRead` also have capabilities to trim and filter reads. However, the `QuasR::preprocessReads()` function provides a single interface to multiple preprocessing possibilities. With this function, we can match adapter sequences and remove them, remove low\-complexity reads (reads containing repetitive sequences), and trim the start or ends of the reads by a pre\-defined length.
Below we will first set up the file paths to the fastq files and filter them based on their length and whether or not they contain the “N” character, which stands for an unidentified base. With the same function call we will also trim 3 bases from the ends of the reads and trim segments from the start of the reads if they match the “ACCCGGGA” sequence.
```
library(QuasR)
# obtain a list of fastq file paths
fastqFiles <- system.file(package="ShortRead",
"extdata/E-MTAB-1147",
c("ERR127302_1_subset.fastq.gz",
"ERR127302_2_subset.fastq.gz")
)
# defined processed fastq file names
outfiles <- paste(tempfile(pattern=c("processed_1_",
"processed_2_")),".fastq",sep="")
# process fastq files
# remove reads that have more than 1 N (nBases)
# trim 3 bases from the end of the reads (truncateEndBases)
# remove the ACCCGGGA pattern if it occurs at the start (Lpattern)
# remove reads shorter than 40 base-pairs (minLength)
preprocessReads(fastqFiles, outfiles,
nBases=1,
truncateEndBases=3,
Lpattern="ACCCGGGA",
minLength=40)
```
As we have mentioned, the `ShortRead` package has low\-level functions, which `QuasR::preprocessReads()` also depends on. We can use these low\-level functions to filter reads in ways that are not possible using the `QuasR::preprocessReads()` function. Below we are going to read in a fastq file and keep only the reads for which every base has a quality score above 20\.
```
library(ShortRead)
# obtain a list of fastq file paths
fastqFile <- system.file(package="ShortRead",
"extdata/E-MTAB-1147",
"ERR127302_1_subset.fastq.gz")
# read fastq file
fq = readFastq(fastqFile)
# get quality scores per base as a matrix
qPerBase = as(quality(fq), "matrix")
# get the number of bases per read that have a quality score of 20 or lower
qcount = rowSums( qPerBase <= 20)
# keep only the reads where all Phred scores are above 20
fq[qcount == 0]
```
```
## class: ShortReadQ
## length: 10699 reads; width: 72 cycles
```
We can finally write out the filtered fastq file with the `ShortRead::writeFastq()` function.
```
# write out fastq file with only reads where all
# quality scores per base are above 20
writeFastq(fq[qcount == 0],
paste(fastqFile, "Qfiltered", sep="_"))
```
As fastq files can be quite large, it may not be feasible to read a 30\-Gigabyte file into memory. A more memory\-efficient way would be to read the file piece by piece. We can do our filtering operations for each piece, write the filtered part out, and read a new piece. Fortunately, this is possible using the `ShortRead::FastqStreamer()` function. This function enables “streaming” the fastq file in pieces, which are blocks of the fastq file with a pre\-defined number of reads. We can access the successive blocks with the `yield()` function. Each time we call the `yield()` function after opening the fastq file with `FastqStreamer()`, a new part of the file will be read to the memory.
```
# set up streaming with 1000 reads per block;
# every time we call the yield() function, the next
# 1000-read portion of the file will be read
f <- FastqStreamer(fastqFile, 1000)
# we set up a while loop to call yield() function to
# go through the file
while(length(fq <- yield(f))) {
# remove reads where all quality scores are < 20
# get quality scores per base as a matrix
qPerBase = as(quality(fq), "matrix")
# get number of bases per read that have Q score < 20
qcount = rowSums( qPerBase <= 20)
# write fastq file with mode="a", so every new block
# is written out to the same file
writeFastq(fq[qcount == 0],
paste(fastqFile, "Qfiltered", sep="_"),
mode="a")
}
```
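Once the whole file has been processed, we should release the file handle; `FastqStreamer` objects are closed with the `close()` function.
```
# close the connection to the fastq file
close(f)
```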
7\.4 Mapping/aligning reads to the genome
-----------------------------------------
After the quality check and potential pre\-processing, the reads are ready to be mapped or aligned to the reference genome. This process simply finds the most probable origin of each read in the genome. Since there might be errors in sequencing and mutations in the genomes, we may not find exact matches of reads in the genomes. An important feature of the alignment algorithms is therefore to tolerate potential mismatches between reads and the reference genome. In addition, efficient algorithms and data structures are needed for the alignment to be completed in a reasonable amount of time. Alignment methods usually create data structures to store and efficiently search the genome for matching reads. These data structures are called genome indices, and creating these indices is the first step of the read alignment. Based on how indices are created, there are two major types of methods. One class of methods relies on “hash tables” to store and search the genomes. Hash tables are simple lookup tables in which all possible k\-mers point to locations in the genome. The general idea is that overlapping k\-mers constructed from a read go through this lookup table. Each k\-mer points to potential locations in the genome. The final location for the read is then obtained by optimizing the k\-mer chain based on the distances between k\-mers in the genome and in the read. This optimization process removes k\-mer locations that are distant from other k\-mers that map near one another.
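To make the k\-mer seeding idea concrete, here is a toy sketch in R; a real aligner uses far more compact and efficient data structures, so treat this purely as an illustration.
```
# index every k-mer of a miniature "genome" by its start position
genome <- "ACGTACGTGGTACCAGT"
k <- 4
starts <- 1:(nchar(genome) - k + 1)
kmers <- substring(genome, starts, starts + k - 1)
index <- split(starts, kmers) # k-mer -> genome positions

# decompose a read into overlapping k-mers and look up candidate seeds
read <- "CGTGGTAC"
rstarts <- 1:(nchar(read) - k + 1)
seeds <- substring(read, rstarts, rstarts + k - 1)
index[seeds] # consecutive positions 6,7,8,9,... chain into one alignment
```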
Another class of algorithms builds genome indices by creating a Burrows\-Wheeler transformation of the genome. This in essence creates a compact and efficiently searchable representation of the genome. Although the details are beyond the scope of this section, these alignment tools provide faster alignment and use less memory. BWA (H. Li and Durbin [2009](#ref-li2009fast)[a](#ref-li2009fast)), Bowtie1/2 (Langmead and Salzberg [2012](#ref-langmead2012fast)[a](#ref-langmead2012fast)) and SOAP (R. Li, Yu, Li, et al. [2009](#ref-li2009soap2)) are examples of such algorithms.
The read mapping in R can be done with the `gmapR` (Barr, Wu, and Lawrence [2019](#ref-gmapR)), `QuasR` (Gaidatzis, Lerch, Hahne, et al. [2015](#ref-gaidatzis_quasr:_2015)), `Rsubread` (Liao, Smyth, and Shi [2013](#ref-liao_subread_2013)), and `systemPipeR` (Backman and Girke [2016](#ref-backman_systempiper:_2016)) packages. We will demonstrate read mapping with QuasR, which uses the `Rbowtie` package, a wrapper around the Bowtie aligner. Below, we show how to map reads from a ChIP\-seq experiment using QuasR/bowtie.
We will use the `qAlign()` function, which requires two mandatory arguments: 1\) a genome file in either fasta format or as a `BSgenome` package and 2\) a sample file, which is a text file containing file paths to fastq files and sample names. In the case below, the sample file looks like this:
```
FileName SampleName
chip_1_1.fq.bz2 Sample1
chip_2_1.fq.bz2 Sample2
```
```
library(QuasR)
# copy example data to current working directory
file.copy(system.file(package="QuasR", "extdata"), ".", recursive=TRUE)
# genome file in fasta format
genomeFile <- "extdata/hg19sub.fa"
# text file containing sample names and fastq file paths
sampleFile <- "extdata/samples_chip_single.txt"
# create alignments
proj <- qAlign(sampleFile, genomeFile)
```
It is worth explaining what is going on here, as the `qAlign()` function makes things look simple. It is designed to be easy to use: for example, it creates a genome index automatically if one does not exist, and will look for existing indices before creating a new one. We provided only two arguments: a text file containing sample names and fastq file paths, and a reference genome file. In fact, this function also has many knobs, and you can supply parameters to Bowtie using the `alignmentParameter` argument in order to change the alignment behavior. However, the `qAlign()` function is optimized for different types of alignment problems and selects alignment parameters automatically. It is designed to work with alignment and quantification tasks for RNA\-seq, ChIP\-seq, small\-RNA sequencing, bisulfite sequencing (DNA methylation) and allele\-specific analysis. If you want to change the default Bowtie parameters, only do so for simple alignment problems such as ChIP\-seq and RNA\-seq.
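For instance, a call overriding the automatically chosen parameters might look like the sketch below; the Bowtie flags shown are illustrative, not a recommendation.
```
# hypothetical: pass explicit bowtie parameters
# (only advisable for simple alignment tasks such as ChIP-seq)
proj <- qAlign(sampleFile, genomeFile,
               alignmentParameter = "-m 1 --best --strata")
```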
**Want to know more ?**
* More on hash tables and Burrows\-Wheeler\-based aligners
+ A survey of sequence alignment algorithms for next\-generation sequencing: (<https://academic.oup.com/bib/article/11/5/473/264166>) H Li, N Homer \- Briefings in bioinformatics, 2010
* More on QuasR and all the alignment and post\-processing capabilities: (<https://bioconductor.org/packages/release/bioc/vignettes/QuasR/inst/doc/QuasR.html>)
7\.6 Exercises
--------------
For this set of exercises, we will use the `chip_1_1.fq.bz2` and `chip_2_1.fq.bz2` files from the `QuasR` package. You can reach the folder that contains the files as follows:
```
folder=(system.file(package="QuasR", "extdata"))
dir(folder) # will show the contents of the folder
```
1. Plot the base quality distributions of the ChIP\-seq samples using the `Rqc` package.
**HINT**: You need to provide a regular expression pattern for extracting the right files from the folder. `"^chip"` matches the files beginning with “chip”. \[Difficulty: **Beginner/Intermediate**]
2. Now we will trim the reads based on the quality scores. Let’s trim 2\-4 bases on the 3’ end depending on the quality scores. You can use the `QuasR::preprocessReads()` function for this purpose. \[Difficulty: **Beginner/Intermediate**]
3. Align the trimmed and untrimmed reads using `QuasR` and plot the alignment statistics. Did the trimming improve the alignments? \[Difficulty: **Intermediate/Advanced**]
8\.3 Gene expression analysis using high\-throughput sequencing technologies
----------------------------------------------------------------------------
With the advent of the second\-generation (a.k.a. next\-generation or high\-throughput) sequencing technologies,
the number of genes that can be profiled for expression levels with a single experiment has increased to the order of tens of thousands of genes. Therefore, the bottleneck in this process has become the data analysis rather than the data generation. Many statistical methods and computational tools are required for getting meaningful results from the data, which comes with a lot of valuable information along with a lot of sources of noise. Fortunately, most of the steps of RNA\-seq analysis have become quite mature over the years. Below we will first describe how to reach a read count table from raw fastq reads obtained from an Illumina sequencing run. We will then demonstrate in R how to process the count table, make a case\-control differential expression analysis, and do some downstream functional enrichment analysis.
### 8\.3\.1 Processing raw data
#### 8\.3\.1\.1 Quality check and read processing
The first step in any experiment that involves high\-throughput short\-read sequencing should be to check the sequencing quality of the reads before starting to do any downstream analysis. The quality of the input sequences holds fundamental importance in the confidence for the biological conclusions drawn from the experiment. We have introduced quality check and processing in Chapter [7](processingReads.html#processingReads), and those tools and workflows also apply in RNA\-seq analysis.
#### 8\.3\.1\.2 Improving the quality
The second step in the RNA\-seq analysis workflow is to improve the quality of the input reads. This step could be regarded as an optional step when the sequencing quality is very good. However, even with the highest\-quality sequencing datasets, this step may still improve the quality of the input sequences. The most common technical artifacts that can be filtered out are the adapter sequences that contaminate the sequenced reads, and the low\-quality bases that are usually found at the ends of the sequences. Commonly used tools in the field (trimmomatic (Bolger, Lohse, and Usadel [2014](#ref-bolger_trimmomatic:_2014)), trimGalore (Andrews [2010](#ref-noauthor_babraham_nodate))) are again not written in R; however, there are alternative R libraries for carrying out the same functionality, for instance, QuasR (Gaidatzis, Lerch, Hahne, et al. [2015](#ref-gaidatzis_quasr:_2015)) (see `QuasR::preprocessReads` function) and ShortRead (Morgan, Anders, Lawrence, et al. [2009](#ref-morgan_shortread:_2009)) (see `ShortRead::filterFastq` function). Some of these approaches are introduced in Chapter [7](processingReads.html#processingReads).
The sequencing quality control and read pre\-processing steps can be visited multiple times until achieving a satisfactory level of quality in the sequence data before moving on to the downstream analysis steps.
### 8\.3\.2 Alignment
Once a decent level of quality in the sequences is reached, the expression level of the genes can be quantified by first mapping the sequences to a reference genome, and secondly matching the aligned reads to the gene annotations, in order to count the number of reads mapping to each gene. If the species under study has a well\-annotated transcriptome, the reads can be aligned to the transcript sequences instead of the reference genome. In cases where there is no good quality reference genome or transcriptome, it is possible to de novo assemble the transcriptome from the sequences and then quantify the expression levels of genes/transcripts.
For RNA\-seq read alignments, apart from the availability of reference genomes and annotations, probably the most important factor to consider when choosing an alignment tool is whether the alignment method considers the absence of intronic regions in the sequenced reads, while the target genome may contain introns. Therefore, it is important to choose alignment tools that take into account alternative splicing. In the simplest case, a read that originates from a cDNA sequence spanning an exon\-exon junction needs to be split into two parts when aligned against the genome. There are various tools that consider this factor such as STAR (Dobin, Davis, Schlesinger, et al. [2013](#ref-dobin_star:_2013)), Tophat2 (Kim, Pertea, Trapnell, et al. [2013](#ref-kim_tophat2:_2013)), Hisat2 (Kim, Langmead, and Salzberg [2015](#ref-kim_hisat:_2015)), and GSNAP (Wu, Reeder, Lawrence, et al. [2016](#ref-wu_gmap_2016)). Most alignment tools are written in C/C\+\+ languages because of performance concerns. There are also R libraries that can do short read alignments; these are discussed in Chapter [7](processingReads.html#processingReads).
### 8\.3\.3 Quantification
After the reads are aligned to the target, a SAM/BAM file sorted by coordinates should have been obtained. The BAM file contains all alignment\-related information of all the reads that have been attempted to be aligned to the target sequence. This information consists of, most basically, the genomic coordinates (chromosome, start, end, strand) of where a sequence was matched (if at all) in the target, and the specific insertions/deletions/mismatches that describe the differences between the input and target sequences. These pieces of information are used along with the genomic coordinates of genome annotations such as gene/transcript models in order to count how many reads have been sequenced from a gene/transcript. As simple as it may sound, it is not a trivial task to assign reads to a gene/transcript just by comparing the genomic coordinates of the annotations and the sequences, because of confounding factors such as overlapping gene annotations, overlapping exon annotations from different transcript isoforms of a gene, and overlapping annotations from opposite DNA strands in the absence of a strand\-specific sequencing protocol. Therefore, for read counting, it is important to consider:
1. Strand specificity of the sequencing protocol: Are the reads expected to originate from the forward strand, reverse strand, or unspecific?
2. Counting mode:
\- when counting at the gene\-level: When there are overlapping annotations, which features should the read be assigned to? Tools usually have a parameter that lets the user select a counting mode.
\- when counting at the transcript\-level: When there are multiple isoforms of a gene, which isoform should the read be assigned to? This consideration is usually an algorithmic consideration that is not modifiable by the end\-user.
Some tools can couple alignment to quantification (e.g. STAR), while some assume the alignments are already calculated and require BAM files as input. On the other hand, in the presence of good transcriptome annotations, alignment\-free methods (Salmon (Patro, Duggal, Love, et al. [2017](#ref-patro_salmon:_2017)), Kallisto (Bray, Pimentel, Melsted, et al. [2016](#ref-bray_near-optimal_2016)), Sailfish (Patro, Mount, and Kingsford [2014](#ref-patro_sailfish_2014))) can also be used to estimate the expression levels of transcripts/genes. There are also reference\-free quantification methods that can first de novo assemble the transcriptome and estimate the expression levels based on this assembly. Such a strategy can be useful in discovering novel transcripts or may be required in cases when a good reference does not exist. If a reference transcriptome exists but is of low quality, a reference\-based transcriptome assembler such as Cufflinks (Trapnell, Williams, Pertea, et al. [2010](#ref-trapnell_transcript_2010)) can be used to improve the transcriptome. In case there is no available transcriptome annotation, a de novo assembler such as Trinity (Haas, Papanicolaou, Yassour, et al. [2013](#ref-haas_novo_2013)) or Trans\-ABySS (Robertson, Schein, Chiu, et al. [2010](#ref-robertson_novo_2010)) can be used to assemble the transcriptome from scratch.
Within R, quantification can be done using:
\- `Rsubread::featureCounts`
\- `QuasR::qCount`
\- `GenomicAlignments::summarizeOverlaps`
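As a minimal sketch of gene\-level counting with the last of these, assuming hypothetical file names (`genes.gtf`, `sample1.bam`) for the annotation and the alignments:
```
library(GenomicFeatures)
library(GenomicAlignments)
# build a transcript database from a hypothetical GTF annotation
txdb <- makeTxDbFromGFF("genes.gtf", format = "gtf")
exonsByGene <- exonsBy(txdb, by = "gene")
# BAM file(s) with the alignments; hypothetical file name
bamFiles <- BamFileList("sample1.bam")
se <- summarizeOverlaps(features = exonsByGene,
                        reads = bamFiles,
                        mode = "Union",       # gene-level counting mode
                        ignore.strand = TRUE) # set FALSE for stranded protocols
head(assay(se)) # genes-by-samples count matrix
```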
### 8\.3\.4 Within sample normalization of the read counts
The most common application, after a gene’s expression is quantified as the number of reads aligned to the gene, is to compare the gene’s expression in different conditions, for instance, in a case\-control setting (e.g. disease versus normal) or in a time\-series (e.g. along different developmental stages). Making such comparisons helps identify the genes that might be responsible for a disease or an impaired developmental trajectory. However, there are multiple caveats that need to be addressed before making a comparison between the read counts of a gene in different conditions (Maza, Frasse, Senin, et al. [2013](#ref-maza_comparison_2013)).
* Library size (i.e. sequencing depth) varies between samples coming from different lanes of the flow cell of the sequencing machine.
* Longer genes will have a higher number of reads.
* Library composition (i.e. relative size of the studied transcriptome) can be different in two different biological conditions.
* GC content biases across different samples may lead to a biased sampling of genes (Risso, Schwartz, Sherlock, et al. [2011](#ref-risso_gc-content_2011)).
* Read coverage of a transcript can be biased and non\-uniformly distributed along the transcript (Mortazavi, Williams, McCue, et al. [2008](#ref-mortazavi_mapping_2008)).
Therefore these factors need to be taken into account before making comparisons.
The most basic normalization approaches address the sequencing depth bias. Such procedures normalize the read counts per gene by dividing each gene’s read count by a certain value and multiplying it by 10^6\. These normalized values are usually referred to as CPM (counts per million reads):
* Total Counts Normalization (divide counts by the **sum** of all counts)
* Upper Quartile Normalization (divide counts by the **upper quartile** value of the counts)
* Median Normalization (divide counts by the **median** of all counts)
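The three variants differ only in the per\-sample divisor. A minimal sketch, assuming `countsOnly` is a hypothetical genes\-by\-samples matrix of raw counts:
```
# total counts normalization
cpm.total <- sweep(countsOnly, 2, colSums(countsOnly), "/") * 10^6
# upper quartile normalization (upper quartile of the non-zero counts)
uq <- apply(countsOnly, 2, function(x) quantile(x[x > 0], 0.75))
cpm.uq <- sweep(countsOnly, 2, uq, "/") * 10^6
# median normalization (median of the non-zero counts)
med <- apply(countsOnly, 2, function(x) median(x[x > 0]))
cpm.med <- sweep(countsOnly, 2, med, "/") * 10^6
```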
Popular metrics that improve upon CPM are RPKM/FPKM (reads/fragments per kilobase of million reads) and TPM (transcripts per million). RPKM is obtained by dividing the CPM value by another factor, which is the length of the gene per kilobase. FPKM is the same as RPKM, but is used for paired\-end reads. Thus, RPKM/FPKM methods account for, firstly, the **library size**, and secondly, the **gene lengths**.
TPM also controls for both the library size and the gene lengths; however, with the TPM method, the read counts are first normalized by the gene length (per kilobase), and then the gene\-length normalized values are divided by the sum of the gene\-length normalized values and multiplied by 10^6\. Thus, the sum of normalized values for TPM will always be equal to 10^6 for each library, while the sums of RPKM/FPKM values do not equal 10^6\. Therefore, it is easier to interpret TPM values than RPKM/FPKM values.
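Written out explicitly, with \\(c\_{i}\\) the raw read count of gene \\(i\\), \\(\\ell\_{i}\\) its length in base pairs, and the sums running over all genes, the two schemes are:
\\\[
{\\text{RPKM}}\_{i}\=\\frac{10^{9}\\,c\_{i}}{\\ell\_{i}\\sum \_{j}c\_{j}},\\qquad {\\text{TPM}}\_{i}\=10^{6}\\,\\frac{c\_{i}/\\ell\_{i}}{\\sum \_{j}c\_{j}/\\ell\_{j}}
\\]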
### 8\.3\.5 Computing different normalization schemes in R
Here we will assume that there is an RNA\-seq count table comprising raw counts, meaning the number of reads counted for each gene has not been exposed to any kind of normalization and consists of integers. The rows of the count table correspond to the genes and the columns represent different samples. Here we will use a subset of the RNA\-seq count table from a colorectal cancer study. We have filtered the original count table for only protein\-coding genes (to improve the speed of calculation) and also selected only five metastasized colorectal cancer samples along with five normal colon samples. There is an additional column `width` that contains the length of the corresponding gene in the unit of base pairs. The lengths of the genes are important to compute RPKM and TPM values. The original count tables can be found in the recount2 database (<https://jhubiostatistics.shinyapps.io/recount/>) using the SRA project code *SRP029880*, and the experimental setup along with other accessory information can be found in the NCBI Trace archive using the SRA project code [SRP029880](https://trace.ncbi.nlm.nih.gov/Traces/sra/?study=SRP029880).
```
#colorectal cancer
counts_file <- system.file("extdata/rna-seq/SRP029880.raw_counts.tsv",
package = "compGenomRData")
coldata_file <- system.file("extdata/rna-seq/SRP029880.colData.tsv",
package = "compGenomRData")
counts <- as.matrix(read.table(counts_file, header = T, sep = '\t'))
```
#### 8\.3\.5\.1 Computing CPM
Let’s do a summary of the counts table. Due to space limitations, the summary for only the first three columns is displayed.
```
summary(counts[,1:3])
```
```
## CASE_1 CASE_2 CASE_3
## Min. : 0 Min. : 0 Min. : 0
## 1st Qu.: 5155 1st Qu.: 6464 1st Qu.: 3972
## Median : 80023 Median : 85064 Median : 64145
## Mean : 295932 Mean : 273099 Mean : 263045
## 3rd Qu.: 252164 3rd Qu.: 245484 3rd Qu.: 210788
## Max. :205067466 Max. :105248041 Max. :222511278
```
To compute the CPM values for each sample (excluding the `width` column):
```
cpm <- apply(subset(counts, select = c(-width)), 2,
function(x) x/sum(as.numeric(x)) * 10^6)
```
Check that the sum of each column after normalization equals 10^6 (the `width` column was excluded before the computation).
```
colSums(cpm)
```
```
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3 CTRL_4 CTRL_5
## 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06
```
#### 8\.3\.5\.2 Computing RPKM
```
# create a vector of gene lengths
geneLengths <- as.vector(subset(counts, select = c(width)))
# compute rpkm
rpkm <- apply(X = subset(counts, select = c(-width)),
MARGIN = 2,
FUN = function(x) {
10^9 * x / geneLengths / sum(as.numeric(x))
})
```
Check the column sums of the `rpkm` matrix. Notice that the sums differ between samples.
```
colSums(rpkm)
```
```
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3
## 158291.0 153324.2 161775.4 173047.4 172761.4 210032.6 301764.2 241418.3
## CTRL_4 CTRL_5
## 291674.5 252005.7
```
#### 8\.3\.5\.3 Computing TPM
```
#find gene length normalized values
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
```
Check the column sums of `tpm`. Notice that the sum for every sample is equal to 10^6\.
```
colSums(tpm)
```
```
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3 CTRL_4 CTRL_5
## 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06
```
None of these metrics (CPM, RPKM/FPKM, TPM) account for the other important confounding factor when comparing expression levels of genes across samples: the **library composition**, which may also be referred to as the **relative size of the compared transcriptomes**. This factor is not dependent on the sequencing technology; it is rather biological. For instance, when comparing transcriptomes of different tissues, there can be sets of genes in one tissue that consume a big chunk of the reads, while in the other tissues they are not expressed at all. This kind of imbalance in the composition of compared transcriptomes can lead to wrong conclusions about which genes are actually differentially expressed. This consideration is addressed in two popular R packages: `DESeq2` (Love, Huber, and Anders [2014](#ref-love_moderated_2014)) and edgeR (Robinson, McCarthy, and Smyth [2010](#ref-robinson_edger:_2010)), each with a different algorithm. `edgeR` uses a normalization procedure called Trimmed Mean of M\-values (TMM). `DESeq2` implements a median\-of\-ratios procedure: for each gene, the geometric mean of its counts across all samples is computed; each sample's count for that gene is divided by this geometric mean; and the median of these ratios across all genes gives the sample's size factor. The raw read counts of each sample are finally divided by that size factor to obtain the normalized counts.
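The median\-of\-ratios idea can be sketched in a few lines of R; the code below mirrors what `DESeq2::estimateSizeFactorsForMatrix()` does internally (for real analyses use the package functions) and reuses the `counts` matrix from this section.
```
countsOnly <- subset(counts, select = c(-width))
# per-gene geometric mean across samples, computed on the log scale
logGeoMeans <- rowMeans(log(countsOnly))
# per-sample size factor: the median ratio of counts to geometric means,
# ignoring genes with zero counts or non-finite geometric means
sizeFactors <- apply(countsOnly, 2, function(cnts)
  exp(median((log(cnts) - logGeoMeans)[is.finite(logGeoMeans) & cnts > 0])))
# divide the raw counts by the size factors to get normalized counts
normCounts <- sweep(countsOnly, 2, sizeFactors, "/")
```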
### 8\.3\.6 Exploratory analysis of the read count table
A typical quality control, in this case interrogating the RNA\-seq experiment design, is to measure the similarity of the samples with each other in terms of the quantified expression level profiles across a set of genes. One important observation to make is whether the most similar samples to any given sample are the biological replicates of that sample. This can be computed using unsupervised clustering techniques such as hierarchical clustering and visualized as a heatmap with dendrograms. Another commonly applied technique is a dimensionality reduction method called Principal Component Analysis (PCA), visualized as a two\-dimensional (or in some cases three\-dimensional) scatter plot. In order to find out more about the clustering methods and PCA, please refer to Chapter [4](unsupervisedLearning.html#unsupervisedLearning).
#### 8\.3\.6\.1 Clustering
We can combine clustering and visualization of the clustering results by using heatmap functions that are available in a variety of R libraries. The basic R installation comes with the `stats::heatmap` function. However, there are other libraries available in CRAN (e.g. `pheatmap` (Kolde [2019](#ref-pheatmap))) or Bioconductor (e.g. `ComplexHeatmap` (Z. Gu, Eils, and Schlesner [2016](#ref-gu_complex_2016)[a](#ref-gu_complex_2016))) that come with more flexibility and more appealing visualizations.
Here we demonstrate a heatmap using the `pheatmap` package and the previously calculated `tpm` matrix.
As these matrices can be quite large, both computing the clustering and rendering the heatmaps can take a lot of resources and time. Therefore, a quick and informative way to compare samples is to select a subset of genes that are, for instance, most variable across samples, and use that subset to do the clustering and visualization.
Let’s select the top 100 most variable genes among the samples.
```
#compute the variance of each gene across samples
V <- apply(tpm, 1, var)
#sort the results by variance in decreasing order
#and select the top 100 genes
selectedGenes <- names(V[order(V, decreasing = T)][1:100])
```
Now we can quickly produce a heatmap where samples and genes are clustered (see Figure [8\.1](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:tpmhierClust1) ).
```
library(pheatmap)
pheatmap(tpm[selectedGenes,], scale = 'row', show_rownames = FALSE)
```
FIGURE 8\.1: Clustering and visualization of the topmost variable genes as a heatmap.
We can also overlay some annotation tracks to observe the clusters.
Here it is important to observe whether the replicates of the same sample cluster most closely with each other, or not. Overlaying the heatmap with such annotation, displaying sample groups with distinct colors, helps to quickly spot samples that don’t cluster as expected (see Figure [8\.2](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:tpmhierclust2)).
```
colData <- read.table(coldata_file, header = T, sep = '\t',
stringsAsFactors = TRUE)
pheatmap(tpm[selectedGenes,], scale = 'row',
show_rownames = FALSE,
annotation_col = colData)
```
FIGURE 8\.2: Clustering samples as a heatmap with sample annotations.
#### 8\.3\.6\.2 PCA
Let’s make a PCA plot to see the clustering of replicates as a scatter plot in two dimensions (Figure [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:pca1)).
```
library(stats)
library(ggplot2)
#ggfortify is needed to let ggplot2's autoplot function handle prcomp objects
library(ggfortify)
#transpose the matrix
M <- t(tpm[selectedGenes,])
# transform the counts to log2 scale
M <- log2(M + 1)
#compute PCA
pcaResults <- prcomp(M)
#plot PCA results making use of ggplot2's autoplot function
autoplot(pcaResults, data = colData, colour = 'group')
```
FIGURE 8\.3: PCA plot of samples using TPM counts.
We should observe here whether the samples from the case group (CASE) and the samples from the control group (CTRL) separate into two distinct clusters on the scatter plot of the first two principal components.
We can use the `summary` function to summarize the PCA results to observe the contribution of the principal components in the explained variation.
```
summary(pcaResults)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 24.396 2.50514 2.39327 1.93841 1.79193 1.6357 1.46059
## Proportion of Variance 0.957 0.01009 0.00921 0.00604 0.00516 0.0043 0.00343
## Cumulative Proportion 0.957 0.96706 0.97627 0.98231 0.98747 0.9918 0.99520
## PC8 PC9 PC10
## Standard deviation 1.30902 1.12657 4.616e-15
## Proportion of Variance 0.00276 0.00204 0.000e+00
## Cumulative Proportion 0.99796 1.00000 1.000e+00
```
#### 8\.3\.6\.3 Correlation plots
Another complementary approach to see the reproducibility of the experiments is to compute the correlation scores between each pair of samples and draw a correlation plot.
Let’s first compute pairwise correlation scores between every pair of samples.
```
library(stats)
correlationMatrix <- cor(tpm)
```
Let’s have a look at the correlation matrix (Table [8\.1](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:corrplot2)), showing only two samples each from the case and control groups:
TABLE 8\.1: Correlation scores between samples
| | CASE\_1 | CASE\_2 | CTRL\_1 | CTRL\_2 |
| --- | --- | --- | --- | --- |
| CASE\_1 | 1\.0000000 | 0\.9924606 | 0\.9594011 | 0\.9635760 |
| CASE\_2 | 0\.9924606 | 1\.0000000 | 0\.9725646 | 0\.9793835 |
| CTRL\_1 | 0\.9594011 | 0\.9725646 | 1\.0000000 | 0\.9879862 |
| CTRL\_2 | 0\.9635760 | 0\.9793835 | 0\.9879862 | 1\.0000000 |
We can also draw more visually appealing correlation plots using the `corrplot` package (Figure [8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:corrplot3)).
Using the `addrect` argument, we can split clusters into groups and surround them with rectangles.
By setting the `addCoef.col` argument to ‘white’, we can display the correlation coefficients as numbers in white color.
```
library(corrplot)
corrplot(correlationMatrix, order = 'hclust',
addrect = 2, addCoef.col = 'white',
number.cex = 0.7)
```
FIGURE 8\.4: Correlation plot of samples ordered by hierarchical clustering.
Here, pairwise correlation levels are visualized as colored circles: blue indicates positive correlation, while red indicates negative correlation.
We could also plot this correlation matrix as a heatmap (Figure [8\.5](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:corrplot4)). As all the samples have a high pairwise
correlation score, using a heatmap instead of a corrplot helps to see the differences between samples more easily. The
`annotation_col` argument helps to display sample annotations and the `cutree_cols` argument is set to 2 to split the clusters into two groups based on the hierarchical clustering results.
```
library(pheatmap)
# split the clusters into two based on the clustering similarity
pheatmap(correlationMatrix,
annotation_col = colData,
cutree_cols = 2)
```
FIGURE 8\.5: Pairwise correlation of samples displayed as a heatmap.
### 8\.3\.7 Differential expression analysis
Differential expression analysis allows us to test tens of thousands of hypotheses (one test for each gene) against the null hypothesis that the activity of the gene stays the same in two different conditions. There are multiple limiting factors that influence the power of detecting genes that have real changes between two biological conditions. Among these are the limited number of biological replicates, non\-normality of the distribution of the read counts, and higher uncertainty of measurements for lowly expressed genes than for highly expressed genes (Love, Huber, and Anders [2014](#ref-love_moderated_2014)). Tools such as `edgeR` and `DESeq2` address these limitations using sophisticated statistical models in order to maximize the amount of knowledge that can be extracted from such noisy datasets. In essence, these models assume that for each gene, the read counts are generated by a negative binomial distribution, a popular distribution for modeling count data. This distribution is specified by a mean parameter, \\(m\\), and a dispersion parameter, \\(\\alpha\\). The dispersion parameter \\(\\alpha\\) is directly related to the variance, as the variance of this distribution is formulated as \\(m\+\\alpha m^{2}\\). Therefore, estimating these parameters is crucial for differential expression tests. The methods used in `edgeR` and `DESeq2` borrow dispersion information from other genes with similar counts in order to stabilize the per\-gene dispersion estimates. With accurate dispersion parameter estimates, one can estimate the variance more precisely, which in turn improves the results of the differential expression test. Although the statistical models differ, the process here is similar to the moderated t\-test and qualifies as an empirical Bayes method, which we introduced in Chapter [3](stats.html#stats). There, we calculated gene\-wise variability and shrunk each gene\-wise variability towards the median variability of all genes. In the case of RNA\-seq, the dispersion coefficient \\(\\alpha\\) is shrunk towards the dispersion values of other genes with similar read counts.
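As a quick illustration of this mean\-variance relationship, one can simulate counts from a negative binomial distribution and check that the empirical variance matches \\(m\+\\alpha m^{2}\\); note that base R parameterizes this distribution with `size = 1/dispersion`. The values of `m` and `alpha` below are arbitrary.
```
m <- 100     # mean parameter
alpha <- 0.2 # dispersion parameter
x <- rnbinom(n = 10^5, mu = m, size = 1/alpha)
mean(x) # should be close to m = 100
var(x)  # should be close to m + alpha * m^2 = 2100
```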
Now let us take a closer look at the `DESeq2` workflow and how it calculates differential expression:
1. The read counts are normalized by computing size factors, which addresses the differences not only in the library sizes, but also the library compositions.
2. For each gene, a dispersion estimate is calculated. The dispersion value computed by `DESeq2` corresponds to the squared coefficient of variation (the variance divided by the square of the mean).
3. A line is fit across the dispersion estimates of all genes computed in step 2 versus the mean normalized counts of the genes.
4. Dispersion values of each gene are shrunk towards the fitted line in step 3\.
5. A Generalized Linear Model (GLM) is fitted, which uses the negative binomial distribution to model the count data and can account for additional confounding variables related to the experimental design, such as sequencing batches, treatment, temperature, patient’s age, and sequencing technology.
6. For a given contrast (e.g. treatment type: drug\-A versus untreated), a test for differential expression is carried out against the null hypothesis that the log fold change of the normalized counts of the gene in the given pair of groups is exactly zero.
7. P\-values are adjusted for multiple testing.
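Most of these steps are bundled into the `DESeq()` wrapper function used below; calling the individual `DESeq2` functions in sequence on a `DESeqDataSet` object (named `dds` here, constructed in the next code chunks) is equivalent:
```
dds <- estimateSizeFactors(dds) # step 1: size factor normalization
dds <- estimateDispersions(dds) # steps 2-4: dispersion estimation and shrinkage
dds <- nbinomWaldTest(dds)      # steps 5-6: GLM fitting and Wald tests
# step 7, the multiple-testing adjustment, happens when calling results()
```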
In order to carry out a differential expression analysis using `DESeq2`, three kinds of inputs are necessary:
1. The **read count table**: This table must contain raw read counts as integers that have not been processed by any normalization technique. The rows represent features (e.g. genes, transcripts, genomic intervals) and the columns represent samples.
2. A **colData** table: This table describes the experimental design.
3. A **design formula**: This formula is needed to describe the variable of interest in the analysis (e.g. treatment status) along with (optionally) other covariates (e.g. batch, temperature, sequencing technology).
Let’s define these inputs:
```
#remove the 'width' column
countData <- as.matrix(subset(counts, select = c(-width)))
#define the experimental setup
colData <- read.table(coldata_file, header = T, sep = '\t',
stringsAsFactors = TRUE)
#define the design formula
designFormula <- "~ group"
```
Now, we are ready to run `DESeq2`.
```
library(DESeq2)
library(stats)
#create a DESeq dataset object from the count matrix and the colData
dds <- DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = as.formula(designFormula))
#print dds object to see the contents
print(dds)
```
```
## class: DESeqDataSet
## dim: 19719 10
## metadata(1): version
## assays(1): counts
## rownames(19719): TSPAN6 TNMD ... MYOCOS HSFX3
## rowData names(0):
## colnames(10): CASE_1 CASE_2 ... CTRL_4 CTRL_5
## colData names(2): source_name group
```
The `DESeqDataSet` object contains all the information about the experimental setup, the read counts, and the design formulas. Certain functions can be used to access this information separately: `rownames(dds)` shows which features are used in the study (e.g. genes), `colnames(dds)` displays the studied samples, `counts(dds)` displays the count table, and `colData(dds)` displays the experimental setup.
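For instance, the following calls give quick looks at the individual components:
```
#feature (gene) names and sample names
head(rownames(dds))
colnames(dds)
#the raw count table and the experimental setup
head(DESeq2::counts(dds))
colData(dds)
```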
Remove genes that have almost no information in any of the given samples.
```
#For each gene, we count the total number of reads for that gene in all samples
#and keep only the genes that have more than 1 read in total.
dds <- dds[ rowSums(DESeq2::counts(dds)) > 1, ]
```
Now, we can use the `DESeq()` function of `DESeq2`, which is a wrapper function that implements the estimation of size factors to normalize the counts, the estimation of dispersion values, and the fitting of a generalized linear model based on the experimental design formula. This function returns a `DESeqDataSet` object, which is an updated version of the `dds` variable that we pass to the function as input.
```
dds <- DESeq(dds)
```
Now, we can compare and contrast the samples based on different variables of interest. In this case, we currently have only one variable, which is the `group` variable that determines if a sample belongs to the CASE group or the CTRL group.
```
#compute the contrast for the 'group' variable where 'CTRL'
#samples are used as the control group.
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
#sort results by increasing p-value
DEresults <- DEresults[order(DEresults$pvalue),]
```
Thus we have obtained a table containing the differential expression status of case samples compared to the control samples.
It is important to note that the order of the elements provided in the `contrast` argument determines which group of samples is used as the control, and this affects how the results are interpreted. For instance, if a gene is found up\-regulated (has a positive log2 fold change), the up\-regulation status is only relative to the group that is provided as the control. In this case, we used samples from the “CTRL” group as the control and contrasted the samples from the “CASE” group against them. Thus genes with a positive log2 fold change are called up\-regulated in the case samples with respect to the control, while genes with a negative log2 fold change are down\-regulated in the case samples. Whether the deregulation is significant or not requires assessment of the adjusted p\-values.
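To see this in action, reversing the order of the groups in the `contrast` argument simply flips the signs of the log2 fold changes:
```
#compute the reversed contrast, treating 'CASE' as the control group
DEreversed <- results(dds, contrast = c('group', 'CTRL', 'CASE'))
#log2 fold changes are mirrored between the two contrasts
head(DEreversed[rownames(DEresults),]$log2FoldChange)
head(DEresults$log2FoldChange)
```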
Let’s have a look at the contents of the `DEresults` table.
```
#shows a summary of the results
print(DEresults)
```
```
## log2 fold change (MLE): group CASE vs CTRL
## Wald test p-value: group CASE vs CTRL
## DataFrame with 19097 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## CYP2E1 4829889 9.36024 0.215223 43.4909 0.00000e+00
## FCGBP 10349993 -7.57579 0.186433 -40.6355 0.00000e+00
## ASGR2 426422 8.01830 0.216207 37.0863 4.67898e-301
## GCKR 100183 7.82841 0.233376 33.5442 1.09479e-246
## APOA5 438054 10.20248 0.312503 32.6477 8.64906e-234
## ... ... ... ... ... ...
## CCDC195 20.4981 -0.215607 2.89255 -0.0745386 NA
## SPEM3 23.6370 -22.154765 3.02785 -7.3170030 NA
## AC022167.5 21.8451 -2.056240 2.89545 -0.7101618 NA
## BX276092.9 29.9636 0.407326 2.89048 0.1409199 NA
## ETDC 22.5675 -1.795274 2.89421 -0.6202983 NA
## padj
## <numeric>
## CYP2E1 0.00000e+00
## FCGBP 0.00000e+00
## ASGR2 2.87741e-297
## GCKR 5.04945e-243
## APOA5 3.19133e-230
## ... ...
## CCDC195 NA
## SPEM3 NA
## AC022167.5 NA
## BX276092.9 NA
## ETDC NA
```
The first three lines in this output show the contrast and the statistical test that were used to compute these results, along with the dimensions of the resulting table (number of columns and rows). Below these lines is the actual table with 6 columns: `baseMean` represents the average normalized expression of the gene across all considered samples. `log2FoldChange` represents the base\-2 logarithm of the fold change of the normalized expression of the gene in the given contrast. `lfcSE` represents the standard error of log2 fold change estimate, and `stat` is the statistic calculated in the contrast which is translated into a `pvalue` and adjusted for multiple testing in the `padj` column. To find out about the importance of adjusting for multiple testing, see Chapter [3](stats.html#stats).
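A quick overview of how many genes are significantly up\- or down\-regulated at a given threshold can be obtained with the `summary()` function:
```
#count up/down-regulated genes at an adjusted p-value cutoff of 0.1
summary(DEresults, alpha = 0.1)
```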
#### 8\.3\.7\.1 Diagnostic plots
At this point, before proceeding to do any downstream analysis and jumping to conclusions about the biological insights that are reachable with the experimental data at hand, it is important to do some more diagnostic tests to improve our confidence about the quality of the data and the experimental setup.
##### 8\.3\.7\.1\.1 MA plot
An MA plot is useful to observe if the data normalization worked well (Figure [8\.6](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DEmaplot)). The MA plot is a scatter plot where the x\-axis denotes the average of normalized counts across samples and the y\-axis denotes the log fold change in the given contrast. Most points are expected to be on the horizontal 0 line (most genes are not expected to be differentially expressed).
```
library(DESeq2)
DESeq2::plotMA(object = dds, ylim = c(-5, 5))
```
FIGURE 8\.6: MA plot of differential expression results.
##### 8\.3\.7\.1\.2 P\-value distribution
It is also important to observe the distribution of raw p\-values (Figure [8\.7](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DEpvaldist)). We expect to see a peak at low p\-values and a uniform distribution for p\-values above 0\.1\. Otherwise, the adjustment for multiple testing does not work and the results are not meaningful.
```
library(ggplot2)
ggplot(data = as.data.frame(DEresults), aes(x = pvalue)) +
geom_histogram(bins = 100)
```
FIGURE 8\.7: P\-value distribution of genes before adjusting for multiple testing.
##### 8\.3\.7\.1\.3 PCA plot
A final diagnostic is to check the biological reproducibility of the sample replicates in a PCA plot or a heatmap. To plot the PCA results, we need to extract the normalized counts from the DESeqDataSet object. It is possible to color the points in the scatter plot by the variable of interest, which helps to see if the replicates cluster well (see Figure [8\.8](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DEpca)).
```
library(DESeq2)
#EDASeq provides the plotPCA() method used here for plain matrices
library(EDASeq)
# extract normalized counts from the DESeqDataSet object
countsNormalized <- DESeq2::counts(dds, normalized = TRUE)
# select top 500 most variable genes
selectedGenes <- names(sort(apply(countsNormalized, 1, var),
decreasing = TRUE)[1:500])
plotPCA(countsNormalized[selectedGenes,],
col = as.numeric(colData$group), adj = 0.5,
xlim = c(-0.5, 0.5), ylim = c(-0.5, 0.6))
```
FIGURE 8\.8: Principal component analysis plot based on the top 500 most variable genes.
Alternatively, the normalized counts can be transformed using the `DESeq2::rlog` function and `DESeq2::plotPCA()` can be readily used to plot the PCA results (Figure [8\.9](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DErldnorm)).
```
rld <- rlog(dds)
DESeq2::plotPCA(rld, ntop = 500, intgroup = 'group') +
ylim(-50, 50) + theme_bw()
```
FIGURE 8\.9: PCA plot of top 500 most variable genes.
##### 8\.3\.7\.1\.4 Relative Log Expression (RLE) plot
A plot similar to the MA plot is the RLE (Relative Log Expression) plot, which is useful for finding out if the data at hand needs normalization (Gandolfo and Speed [2018](#ref-gandolfo_rle_2018)). Sometimes, even datasets normalized using the methods explained above may need further normalization due to unforeseen sources of variation that might stem from the library preparation, the person who carries out the experiment, the date of sequencing, the temperature changes in the laboratory at the time of library preparation, and so on. The RLE plot is a quick diagnostic that can be applied to the raw or normalized count matrices to see if further processing is required.
Let’s do RLE plots on the raw counts and normalized counts using the `EDASeq` package (Risso, Schwartz, Sherlock, et al. [2011](#ref-risso_gc-content_2011)) (see Figure [8\.10](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DErleplot)).
```
library(EDASeq)
par(mfrow = c(1, 2))
plotRLE(countData, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group),
main = 'Raw Counts')
plotRLE(DESeq2::counts(dds, normalized = TRUE),
outline=FALSE, ylim=c(-4, 4),
col = as.numeric(colData$group),
main = 'Normalized Counts')
```
FIGURE 8\.10: Relative log expression plots based on raw and normalized count matrices.
Here the RLE plot comprises boxplots, where each boxplot represents the distribution of the relative log expression values of the genes expressed in the corresponding sample. Each gene’s expression is divided by the median expression value of that gene across all samples, and this ratio is then transformed to the log scale, which gives the relative log expression value for a single gene. The RLE values for all the genes from a sample are visualized as a boxplot.
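To make the computation concrete, here is a minimal sketch of deriving RLE values by hand from the raw count matrix `countData`; a pseudo\-count is added to avoid taking the logarithm of zero, and `plotRLE()` performs a similar calculation internally.
```
#median expression of each gene across all samples
geneMedians <- apply(countData, 1, median)
#relative log expression of each gene in each sample
RLE <- log2((countData + 1) / (geneMedians + 1))
boxplot(RLE, outline = FALSE, main = 'Manual RLE plot')
```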
Ideally the boxplots are centered around the horizontal zero line and are as tightly distributed as possible (Risso, Ngai, Speed, et al. [2014](#ref-risso_normalization_2014)). From the plots that we have made for the raw and normalized count data, we can observe how the normalized dataset has improved upon the raw count data for all the samples. However, in some cases, it is important to visualize RLE plots in combination with other diagnostic plots such as PCA plots, heatmaps, and correlation plots to see if there is more unwanted variation in the data, which can be further accounted for using packages such as `RUVSeq` (Risso, Ngai, Speed, et al. [2014](#ref-risso_normalization_2014)) and `sva` (Leek, Johnson, Parker, et al. [2012](#ref-leek_sva_2012)). We will cover details about the `RUVSeq` package to account for unwanted sources of noise in RNA\-seq datasets in later sections.
### 8\.3\.8 Functional enrichment analysis
#### 8\.3\.8\.1 GO term analysis
In a typical differential expression analysis, thousands of genes are found differentially expressed between two groups of samples. While prior knowledge of the functions of individual genes can give some clues about what kind of cellular processes have been affected, e.g. by a drug treatment, manually going through a list of thousands of genes would be very cumbersome and not very informative in the end. Therefore, a commonly used approach to address this problem is to do enrichment analyses of functional terms that appear associated with the given set of differentially expressed genes more often than expected by chance. The functional terms are usually associated with multiple genes; thus, genes can be grouped into sets by shared functional terms. However, it is important to have an agreed\-upon controlled vocabulary for the terms used to describe the functions of genes; otherwise, it would be impossible to exchange scientific results globally. That’s why initiatives such as the Gene Ontology Consortium have collated a list of Gene Ontology (GO) terms for each gene. GO term analysis is probably the most common analysis applied after a differential expression analysis, and it helps to quickly find out systematic changes that can describe differences between groups of samples.
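At its core, such an enrichment test asks whether the overlap between the differentially expressed genes and the genes annotated with a given term is larger than expected by chance, which can be assessed with a hypergeometric test. Below is an illustrative calculation with made\-up numbers:
```
#illustrative numbers: 19000 genes in the background, 1600 of them
#annotated with a hypothetical GO term, 4000 differentially expressed
#genes, 500 of which carry the term
phyper(q = 500 - 1, m = 1600, n = 19000 - 1600, k = 4000,
       lower.tail = FALSE) # P(overlap >= 500)
```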
In R, one of the simplest ways to do functional enrichment analysis for a set of genes is via the `gProfileR` package.
Let’s select the genes that are significantly differentially expressed between the case and control samples.
Let’s extract the genes that have an adjusted p\-value below 0\.1 and that show more than a 2\-fold change (in either direction) in the case samples compared to the controls. We will then feed this gene set into the `gprofiler()` function of the `gProfileR` package. The top 10 detected GO terms are displayed in Table [8\.2](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:GOanalysistable).
```
library(DESeq2)
library(gProfileR)
library(knitr)
# extract differential expression results
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
#remove genes with NA values
DE <- DEresults[!is.na(DEresults$padj),]
#select genes with adjusted p-values below 0.1
DE <- DE[DE$padj < 0.1,]
#select genes with absolute log2 fold change above 1 (two-fold change)
DE <- DE[abs(DE$log2FoldChange) > 1,]
#get the list of genes of interest
genesOfInterest <- rownames(DE)
#calculate enriched GO terms
goResults <- gprofiler(query = genesOfInterest,
organism = 'hsapiens',
src_filter = 'GO',
hier_filtering = 'moderate')
```
TABLE 8\.2: Top GO terms sorted by p\-value.
| | p.value | term.size | precision | domain | term.name |
| --- | --- | --- | --- | --- | --- |
| 64 | 0 | 2740 | 0\.223 | CC | plasma membrane part |
| 23 | 0 | 1609 | 0\.136 | BP | ion transport |
| 16 | 0 | 3656 | 0\.258 | BP | regulation of biological quality |
| 30 | 0 | 385 | 0\.042 | BP | extracellular structure organization |
| 34 | 0 | 7414 | 0\.452 | BP | multicellular organismal process |
| 78 | 0 | 1069 | 0\.090 | MF | transmembrane transporter activity |
| 47 | 0 | 1073 | 0\.090 | BP | organic acid metabolic process |
| 5 | 0 | 975 | 0\.083 | BP | response to drug |
| 18 | 0 | 1351 | 0\.107 | BP | biological adhesion |
| 31 | 0 | 4760 | 0\.302 | BP | system development |
#### 8\.3\.8\.2 Gene set enrichment analysis
A gene set is a collection of genes with some common property. This shared property among a set of genes could be a GO term, a common biological pathway, a shared interaction partner, or any biologically relevant commonality that is meaningful in the context of the pursued experiment. Gene set enrichment analysis (GSEA) is a valuable exploratory analysis tool that can associate systematic changes to a high\-level function rather than individual genes. Analysis of coordinated changes of expression levels of gene sets can provide complementary benefits on top of per\-gene\-based differential expression analyses. For instance, consider a gene set belonging to a biological pathway where each member of the pathway displays a slight deregulation in a disease sample compared to a normal sample. In such a case, individual genes might not be picked up by the per\-gene\-based differential expression analysis. Thus, the GO/Pathway enrichment on the differentially expressed list of genes would not show an enrichment of this pathway. However, the additive effect of slight changes of the genes could amount to a large effect at the level of the gene set, thus the pathway could be detected as a significant pathway that could explain the mechanistic problems in the disease sample.
We use the Bioconductor package `gage` (Luo, Friedman, Shedden, et al. [2009](#ref-luo_gage:_2009)) to demonstrate how to do GSEA using normalized expression data of the samples as input. Here we use only two gene sets: one derived from the top GO term discovered in the previous GO analysis, and one that we compile by randomly selecting a list of genes. However, annotated gene sets can also be obtained from databases such as MSigDB (Subramanian, Tamayo, Mootha, et al. [2005](#ref-subramanian_gene_2005)), which compiles gene sets from a variety of resources such as KEGG (Kanehisa, Sato, Kawashima, et al. [2016](#ref-kanehisa_kegg_2016)) and REACTOME (Antonio Fabregat, Jupe, Matthews, et al. [2018](#ref-fabregat_reactome_2018)).
```
#Let's define the first gene set as the list of genes from one of the
#significant GO terms found in the GO analysis.
#order the GO results by p-value
goResults <- goResults[order(goResults$p.value),]
#restrict the terms that have at most 100 genes overlapping with the query
go <- goResults[goResults$overlap.size < 100,]
# use the top term from this table to create a gene set
geneSet1 <- unlist(strsplit(go[1,]$intersection, ','))
#Define another gene set by randomly selecting 25 genes from the counts table.
#get normalized counts from the DESeq2 object
normalizedCounts <- DESeq2::counts(dds, normalized = TRUE)
geneSet2 <- sample(rownames(normalizedCounts), 25)
geneSets <- list('top_GO_term' = geneSet1,
'random_set' = geneSet2)
```
Using the defined gene sets, we’d like to carry out a group comparison between the case samples and the control samples.
```
library(gage)
#use the normalized counts to carry out a GSEA.
gseaResults <- gage(exprs = log2(normalizedCounts+1),
ref = match(rownames(colData[colData$group == 'CTRL',]),
colnames(normalizedCounts)),
samp = match(rownames(colData[colData$group == 'CASE',]),
colnames(normalizedCounts)),
gsets = geneSets, compare = 'as.group')
```
We can observe if there is a significant up\-regulation or down\-regulation of the gene set in the case group compared to the controls by accessing `gseaResults$greater` as in Table [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1) or `gseaResults$less` as in Table [8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost2).
TABLE 8\.3: Up\-regulation statistics
| | p.geomean | stat.mean | p.val | q.val | set.size | exp1 |
| --- | --- | --- | --- | --- | --- | --- |
| top\_GO\_term | 0\.0000 | 7\.1994 | 0\.0000 | 0\.0000 | 32 | 0\.0000 |
| random\_set | 0\.5832 | \-0\.2113 | 0\.5832 | 0\.5832 | 25 | 0\.5832 |
TABLE 8\.4: Down\-regulation statistics
| | p.geomean | stat.mean | p.val | q.val | set.size | exp1 |
| --- | --- | --- | --- | --- | --- | --- |
| random\_set | 0\.4168 | \-0\.2113 | 0\.4168 | 0\.8336 | 25 | 0\.4168 |
| top\_GO\_term | 1\.0000 | 7\.1994 | 1\.0000 | 1\.0000 | 32 | 1\.0000 |
We can see that the random gene set shows no significant up\- or down\-regulation (Tables [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1) and [8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost2)), while the gene set we defined using the top GO term shows a significant up\-regulation (adjusted p\-value \< 0\.0007\) (Table [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1)). It is worthwhile to visualize such systematic changes in a heatmap, as in Figure [8\.11](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:gseaPost3).
```
library(pheatmap)
# get the expression data for the gene set of interest
M <- normalizedCounts[rownames(normalizedCounts) %in% geneSet1, ]
# log transform the counts for visualization; scaling by row helps
# visualize the relative change of a gene's expression across conditions
pheatmap(log2(M+1),
annotation_col = colData,
show_rownames = TRUE,
fontsize_row = 8,
scale = 'row',
cutree_cols = 2,
cutree_rows = 2)
```
FIGURE 8\.11: Heatmap of expression value from the genes with the top GO term.
We can see that almost all genes from this gene set display an increased level of expression in the case samples
compared to the controls.
### 8\.3\.9 Accounting for additional sources of variation
When doing a differential expression analysis in a case\-control setting, the variable of interest, i.e. the variable that explains the separation of the case samples from the controls, is usually the treatment, genotypic differences, a certain phenotype, etc. However, in reality, depending on how the experiment and the sequencing were designed, there may be additional factors that contribute to the variation between the compared samples. Sometimes, such variables are known, for instance, the date of sequencing for each sample (batch information), or the temperature under which samples were kept. Such variables are not necessarily biological but rather technical; nevertheless, they still impact the measurements obtained from an RNA\-seq experiment and can introduce systematic shifts into them. Here, we will demonstrate, first, how to account for such variables using `DESeq2` when the possible sources of variation are known, and second, how to account for them when all we have is a count table but we observe that the variable of interest explains only a small proportion of the differences between case and control samples.
#### 8\.3\.9\.1 Accounting for covariates using DESeq2
For demonstration purposes, we will use a subset of the count table obtained for a heart disease study, where there are RNA\-seq samples from subjects with normal and failing hearts. We again use a subset of the samples, focusing on 6 case and 6 control samples, and we only consider protein\-coding genes (for speed concerns).
Let’s import count and colData for this experiment.
```
counts_file <- system.file('extdata/rna-seq/SRP021193.raw_counts.tsv',
package = 'compGenomRData')
colData_file <- system.file('extdata/rna-seq/SRP021193.colData.tsv',
package = 'compGenomRData')
counts <- read.table(counts_file)
colData <- read.table(colData_file, header = T, sep = '\t',
stringsAsFactors = TRUE)
```
Let’s calculate the TPM values and take a look at how the samples cluster, displayed as a heatmap in Figure [8\.12](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:batcheffects2).
```
library(pheatmap)
#find gene length normalized values
geneLengths <- counts$width
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
selectedGenes <- names(sort(apply(tpm, 1, var),
decreasing = T)[1:100])
pheatmap(tpm[selectedGenes,],
scale = 'row',
annotation_col = colData,
show_rownames = FALSE)
```
FIGURE 8\.12: Visualizing batch effects in an experiment.
Here we can see from the clusters that the dominating variable is the ‘Library Selection’ variable rather than the ‘diagnosis’ variable, which determines the state of the organ from which the sample was taken. Case and control samples are mixed in both of the two major clusters. Ideally, however, we’d like to see a separation of the case and control samples regardless of the additional covariates. When testing for differential gene expression between conditions, such confounding variables can be accounted for using `DESeq2`. Below is a demonstration of how we instruct `DESeq2` to account for the ‘library selection’ variable:
```
library(DESeq2)
# remove the 'width' column from the counts matrix
countData <- as.matrix(subset(counts, select = c(-width)))
# set up a DESeqDataSet object
dds <- DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = ~ LibrarySelection + group)
```
When constructing the design formula, it is very important to pay attention to the order of the variables: we leave the variable of interest for last and can add as many covariates as we want at the beginning of the design formula, as in the illustrative formulas below. Please refer to the `DESeq2` vignette if you’d like to learn more about how to construct design formulas.
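Here, `age` is a hypothetical extra covariate that is not part of this dataset; the formulas merely illustrate the ordering convention.
```
#one known covariate, with the variable of interest last
design1 <- ~ LibrarySelection + group
#multiple covariates ('age' is hypothetical), variable of interest still last
design2 <- ~ LibrarySelection + age + group
```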
Now, we can run the differential expression analysis as has been demonstrated previously.
```
# run DESeq
dds <- DESeq(dds)
# extract results
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
```
#### 8\.3\.9\.2 Accounting for estimated covariates using RUVSeq
In cases where the sources of potential variation are not known, it is worthwhile to use tools such as `RUVSeq` or `sva` that can estimate the potential sources of variation and remove them from the count table. The estimated covariates can later be integrated into `DESeq2`’s design formula.
Let’s see how to utilize the `RUVSeq` package to first diagnose the problem and then solve it. Here, for demonstration purposes, we’ll use a count table from a lung carcinoma study in which a transcription factor (Ets homologous factor \- EHF) is overexpressed and compared to control samples with baseline EHF expression. Again, we only consider protein\-coding genes and use only five case and five control samples. The original data can be found in the `recount2` database with the accession ‘SRP049988’.
```
counts_file <- system.file('extdata/rna-seq/SRP049988.raw_counts.tsv',
package = 'compGenomRData')
colData_file <- system.file('extdata/rna-seq/SRP049988.colData.tsv',
package = 'compGenomRData')
counts <- read.table(counts_file)
colData <- read.table(colData_file, header = T,
sep = '\t', stringsAsFactors = TRUE)
# simplify condition descriptions
colData$source_name <- ifelse(colData$group == 'CASE',
'EHF_overexpression', 'Empty_Vector')
```
Let’s start by making heatmaps of the samples using TPM counts (see Figure [8\.13](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvdiagnose1)).
```
#find gene length normalized values
geneLengths <- counts$width
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
selectedGenes <- names(sort(apply(tpm, 1, var),
decreasing = T)[1:100])
pheatmap(tpm[selectedGenes,],
scale = 'row',
annotation_col = colData,
cutree_cols = 2,
show_rownames = FALSE)
```
FIGURE 8\.13: Diagnostic heatmap of samples based on TPM values.
We can see that the overall clusters look fine, except that one of the case samples (CASE\_5\) clusters more closely with the control samples than with the other case samples. This mis\-clustering could be a result of a batch effect or some other technical step during sample preparation. However, the `colData` object doesn’t contain any variables that we can use to pinpoint the exact cause. So, let’s use `RUVSeq` to estimate potential covariates and see if the clustering results can be improved.
First, we set up the experiment:
```
library(EDASeq)
# remove 'width' column from counts
countData <- as.matrix(subset(counts, select = c(-width)))
# create a seqExpressionSet object using EDASeq package
set <- newSeqExpressionSet(counts = countData,
phenoData = colData)
```
Next, let’s make a diagnostic RLE plot on the raw count table.
```
# make an RLE plot and a PCA plot on raw count data and color samples by group
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4), col=as.numeric(colData$group))
plotPCA(set, col = as.numeric(colData$group), adj = 0.5,
ylim = c(-0.7, 0.5), xlim = c(-0.5, 0.5))
```
FIGURE 8\.14: Diagnostic RLE and PCA plots based on raw count table.
```
## make RLE and PCA plots on TPM matrix
par(mfrow = c(1,2))
plotRLE(tpm, outline=FALSE, ylim=c(-4, 4), col=as.numeric(colData$group))
plotPCA(tpm, col=as.numeric(colData$group), adj = 0.5,
ylim = c(-0.3, 1), xlim = c(-0.5, 0.5))
```
FIGURE 8\.15: Diagnostic RLE and PCA plots based on TPM normalized count table.
Both RLE and PCA plots look better on normalized data (Figure [8\.15](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvdiagnose2p2)) compared to raw data (Figure [8\.14](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvdiagnose2p1)), but still suggest the necessity of further improvement, because the CASE\_5 sample still clusters with the control samples. We haven’t yet accounted for the source of unwanted variation.
#### 8\.3\.9\.3 Removing unwanted variation from the data
`RUVSeq` has three main functions for removing unwanted variation: `RUVg()`, `RUVs()`, and `RUVr()`. Here, we will demonstrate how to use `RUVg` and `RUVs`. `RUVr` will be left as an exercise for the reader.
##### 8\.3\.9\.3\.1 Using RUVg
One way of removing unwanted variation relies on using a set of reference genes that are not expected to change due to the sources of technical variation. One strategy along this line is to use spike\-in genes, which are artificially introduced into the sequencing run (Jiang, Schlesinger, Davis, et al. [2011](#ref-jiang_synthetic_2011)). However, many sequencing datasets don’t have this spike\-in data available. In such cases, an empirical set of genes can be collected from the expression data by doing a differential expression analysis and discovering genes that are unchanged in the given conditions. These unchanged genes are used to clean up the data from systematic shifts in expression due to the unwanted sources of variation. Another strategy is to use a set of house\-keeping genes as negative controls, and use them as a reference to correct the systematic biases in the data. Let’s use a list of \~500 house\-keeping genes compiled here: [https://www.tau.ac.il/\~elieis/HKG/HK\_genes.txt](https://www.tau.ac.il/~elieis/HKG/HK_genes.txt).
```
library(RUVSeq)
#source for house-keeping genes collection:
#https://m.tau.ac.il/~elieis/HKG/HK_genes.txt
HK_genes <- read.table(file = system.file("extdata/rna-seq/HK_genes.txt",
package = 'compGenomRData'),
header = FALSE)
# let's take an intersection of the house-keeping genes with the genes available
# in the count table
house_keeping_genes <- intersect(rownames(set), HK_genes$V1)
```
We will now run `RUVg()` with different numbers of factors of unwanted variation and plot the PCA after removing the unwanted variation. We should be able to see which values of `k`, the number of factors, produce a better separation between the sample groups.
```
# now, we use these genes as the empirical set of genes as input to RUVg.
# we try different values of k and see how the PCA plots look
par(mfrow = c(2, 2))
for(k in 1:4) {
set_g <- RUVg(x = set, cIdx = house_keeping_genes, k = k)
plotPCA(set_g, col=as.numeric(colData$group), cex = 0.9, adj = 0.5,
main = paste0('with RUVg, k = ',k),
ylim = c(-1, 1), xlim = c(-1, 1))
}
```
FIGURE 8\.16: PCA plots on RUVg normalized data with varying number of covariates (k).
Based on the separation of case and control samples in the PCA plots in Figure [8\.16](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvgf1),
we choose k \= 1 and re\-run the `RUVg()` function with the house\-keeping genes to do more diagnostic plots.
```
# choose k = 1
set_g <- RUVg(x = set, cIdx = house_keeping_genes, k = 1)
```
Now let’s do diagnostics: compare the count matrices with or without RUVg processing, comparing RLE plots (Figure [8\.17](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvgf2)) and PCA plots (Figure [8\.18](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvgf3)) to see the effect of RUVg on the normalization and separation of case and control samples.
```
# RLE plots
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group), main = 'without RUVg')
plotRLE(set_g, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group), main = 'with RUVg')
```
FIGURE 8\.17: RLE plots to observe the effect of RUVg.
```
# PCA plots
par(mfrow = c(1,2))
plotPCA(set, col=as.numeric(colData$group), adj = 0.5,
main = 'without RUVg',
ylim = c(-1, 0.5), xlim = c(-0.5, 0.5))
plotPCA(set_g, col=as.numeric(colData$group), adj = 0.5,
main = 'with RUVg',
ylim = c(-1, 0.5), xlim = c(-0.5, 0.5))
```
FIGURE 8\.18: PCA plots to observe the effect of RUVg.
We can observe that using `RUVg()` with house\-keeping genes as a reference has improved the clusters, however it has not yielded an ideal separation. The effect that causes ‘CASE\_5’ to cluster with the control samples probably hasn’t been completely eliminated.
##### 8\.3\.9\.3\.2 Using RUVs
The `RUVs()` function implements another `RUVSeq` strategy, which works better in the presence of replicates, as long as the experimental design is not confounded. Let’s see how it performs with this data. This time we don’t use the house\-keeping genes; rather, we use all genes as input to `RUVs()`. This function estimates the correction factors by assuming that the biological variation between replicates should be constant; the variation observed among replicates is therefore treated as the unwanted variation.
```
# make a table of sample groups from colData
differences <- makeGroups(colData$group)
## scan different numbers of factors of unwanted variation (k = 1 to 4)
## use information from all genes in the expression object
par(mfrow = c(2, 2))
for(k in 1:4) {
set_s <- RUVs(set, unique(rownames(set)),
k=k, differences) #all genes
plotPCA(set_s, col=as.numeric(colData$group),
cex = 0.9, adj = 0.5,
main = paste0('with RUVs, k = ',k),
ylim = c(-1, 1), xlim = c(-0.6, 0.6))
}
```
FIGURE 8\.19: PCA plots on RUVs normalized data with varying number of covariates (k).
Based on the separation of case and control samples in the PCA plots in Figure [8\.19](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvsf1),
we can see that the samples are better separated even at k \= 2 when using `RUVs()`. Here, we re\-run the `RUVs()` function using k \= 2, in order to do more diagnostic plots. We try to pick a value of k that is just large enough to distinguish the samples by the condition of interest. While setting k to higher values could increase the percentage of variation explained by the first principal component to up to 61%, we avoid setting it unnecessarily high, so as not to remove factors that might also correlate with important biological differences between the conditions.
```
# choose k = 2
set_s <- RUVs(set, unique(rownames(set)), k=2, differences) #
```
Now let’s do the diagnostics again: compare the count matrices with and without RUVs processing, using RLE plots (Figure [8\.20](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvsf2)) and PCA plots (Figure [8\.21](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvsf3)), to see the effect of RUVs on the normalization and on the separation of the case and control samples.
```
## compare the initial and processed objects
## RLE plots
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group),
main = 'without RUVs')
plotRLE(set_s, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group),
main = 'with RUVs')
```
FIGURE 8\.20: RLE plots to observe the effect of RUVs.
```
## PCA plots
par(mfrow = c(1,2))
plotPCA(set, col=as.numeric(colData$group),
main = 'without RUVs', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_s, col=as.numeric(colData$group),
main = 'with RUVs', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
```
FIGURE 8\.21: PCA plots to observe the effect of RUVs.
Let’s compare PCA results from RUVs and RUVg with the initial raw counts matrix. We will simply run the `plotPCA()` function on different normalization schemes. The resulting plots are in Figure [8\.22](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvcompare):
```
par(mfrow = c(1,3))
plotPCA(countData, col=as.numeric(colData$group),
main = 'without RUV - raw counts', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_g, col=as.numeric(colData$group),
main = 'with RUVg', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_s, col=as.numeric(colData$group),
main = 'with RUVs', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
```
FIGURE 8\.22: PCA plots to observe the before/after effect of RUV functions.
It looks like `RUVs()` has performed better than `RUVg()` in this case. So, let’s use count data that is processed by `RUVs()` to re\-do the initial heatmap. The resulting heatmap is in Figure [8\.23](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvpost).
```
library(EDASeq)
library(pheatmap)
# extract normalized counts that are cleared from unwanted variation using RUVs
normCountData <- normCounts(set_s)
selectedGenes <- names(sort(apply(normCountData, 1, var),
decreasing = TRUE))[1:500]
pheatmap(normCountData[selectedGenes,],
annotation_col = colData,
show_rownames = FALSE,
cutree_cols = 2,
scale = 'row')
```
FIGURE 8\.23: Clustering samples using the top 500 most variable genes normalized using RUVs (k \= 2\).
As can be observed, the replicates from the different groups cluster much better with each other after processing with `RUVs()`. It is important to note that `RUVs()` uses information from replicates to shift the expression data, so it would not work in a confounded design where the replicates of the case samples and the replicates of the control samples are sequenced in different batches.
#### 8\.3\.9\.4 Re\-run DESeq2 with the computed covariates
Having computed the sources of variation using `RUVs()`, we can actually integrate these variables with `DESeq2` to re\-do the differential expression analysis.
```
library(DESeq2)
#set up DESeqDataSet object
dds <- DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = ~ group)
# filter for low count genes
dds <- dds[rowSums(DESeq2::counts(dds)) > 10]
# insert the covariates W1 and W2 computed using RUVs into DESeqDataSet object
colData(dds) <- cbind(colData(dds),
pData(set_s)[rownames(colData(dds)),
grep('W_[0-9]',
colnames(pData(set_s)))])
# update the design formula for the DESeq analysis (keep the variable of
# interest last!)
design(dds) <- ~ W_1 + W_2 + group
# repeat the analysis
dds <- DESeq(dds)
# extract deseq results
res <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
res <- res[order(res$padj),]
```
### 8\.3\.1 Processing raw data
#### 8\.3\.1\.1 Quality check and read processing
The first step in any experiment that involves high\-throughput short\-read sequencing should be to check the sequencing quality of the reads before starting to do any downstream analysis. The quality of the input sequences holds fundamental importance in the confidence for the biological conclusions drawn from the experiment. We have introduced quality check and processing in Chapter [7](processingReads.html#processingReads), and those tools and workflows also apply in RNA\-seq analysis.
#### 8\.3\.1\.2 Improving the quality
The second step in the RNA\-seq analysis workflow is to improve the quality of the input reads. This step could be regarded as an optional step when the sequencing quality is very good. However, even with the highest\-quality sequencing datasets, this step may still improve the quality of the input sequences. The most common technical artifacts that can be filtered out are the adapter sequences that contaminate the sequenced reads, and the low\-quality bases that are usually found at the ends of the sequences. Commonly used tools in the field (trimmomatic (Bolger, Lohse, and Usadel [2014](#ref-bolger_trimmomatic:_2014)), trimGalore (Andrews [2010](#ref-noauthor_babraham_nodate))) are again not written in R, however there are alternative R libraries for carrying out the same functionality, for instance, QuasR (Gaidatzis, Lerch, Hahne, et al. [2015](#ref-gaidatzis_quasr:_2015)) (see `QuasR::preprocessReads` function) and ShortRead (Morgan, Anders, Lawrence, et al. [2009](#ref-morgan_shortread:_2009)) (see `ShortRead::filterFastq` function). Some of these approaches are introduced in Chapter [7](processingReads.html#processingReads).
The sequencing quality control and read pre\-processing steps can be visited multiple times until achieving a satisfactory level of quality in the sequence data before moving on to the downstream analysis steps.
### 8\.3\.2 Alignment
Once a decent level of quality in the sequences is reached, the expression level of the genes can be quantified by first mapping the sequences to a reference genome, and secondly matching the aligned reads to the gene annotations, in order to count the number of reads mapping to each gene. If the species under study has a well\-annotated transcriptome, the reads can be aligned to the transcript sequences instead of the reference genome. In cases where there is no good quality reference genome or transcriptome, it is possible to de novo assemble the transcriptome from the sequences and then quantify the expression levels of genes/transcripts.
For RNA\-seq read alignments, apart from the availability of reference genomes and annotations, probably the most important factor to consider when choosing an alignment tool is whether the alignment method accounts for the absence of intronic regions in the sequenced reads, while the target genome may contain introns. Therefore, it is important to choose alignment tools that take into account alternative splicing. In the basic setting, a read that originates from a cDNA sequence corresponding to an exon\-exon junction needs to be split into two parts when aligned against the genome. There are various tools that consider this factor, such as STAR (Dobin, Davis, Schlesinger, et al. [2013](#ref-dobin_star:_2013)), Tophat2 (Kim, Pertea, Trapnell, et al. [2013](#ref-kim_tophat2:_2013)), Hisat2 (Kim, Langmead, and Salzberg [2015](#ref-kim_hisat:_2015)), and GSNAP (Wu, Reeder, Lawrence, et al. [2016](#ref-wu_gmap_2016)). Most alignment tools are written in C/C\+\+ because of performance concerns. There are also R libraries that can do short read alignments; these are discussed in Chapter [7](processingReads.html#processingReads).
### 8\.3\.3 Quantification
After the reads are aligned to the target, a SAM/BAM file sorted by coordinates should have been obtained. The BAM file contains all alignment\-related information for all the reads that have been attempted to be aligned to the target sequence. This information consists of, most basically, the genomic coordinates (chromosome, start, end, strand) of where a sequence was matched (if at all) in the target, and the specific insertions/deletions/mismatches that describe the differences between the input and target sequences. These pieces of information are used along with the genomic coordinates of genome annotations such as gene/transcript models in order to count how many reads have been sequenced from a gene/transcript. As simple as it may sound, it is not a trivial task to assign reads to a gene/transcript just by comparing the genomic coordinates of the annotations and the sequences, because of confounding factors such as overlapping gene annotations, overlapping exon annotations from different transcript isoforms of a gene, and overlapping annotations from opposite DNA strands in the absence of a strand\-specific sequencing protocol. Therefore, for read counting, it is important to consider:
1. Strand specificity of the sequencing protocol: Are the reads expected to originate from the forward strand, reverse strand, or unspecific?
2. Counting mode:
\- when counting at the gene\-level: When there are overlapping annotations, which features should the read be assigned to? Tools usually have a parameter that lets the user select a counting mode.
\- when counting at the transcript\-level: When there are multiple isoforms of a gene, which isoform should the read be assigned to? This is usually an algorithmic choice that is not modifiable by the end\-user.
Some tools can couple alignment to quantification (e.g. STAR), while some assume the alignments are already calculated and require BAM files as input. On the other hand, in the presence of good transcriptome annotations, alignment\-free methods (Salmon (Patro, Duggal, Love, et al. [2017](#ref-patro_salmon:_2017)), Kallisto (Bray, Pimentel, Melsted, et al. [2016](#ref-bray_near-optimal_2016)), Sailfish (Patro, Mount, and Kingsford [2014](#ref-patro_sailfish_2014))) can also be used to estimate the expression levels of transcripts/genes. There are also reference\-free quantification methods that can first de novo assemble the transcriptome and estimate the expression levels based on this assembly. Such a strategy can be useful in discovering novel transcripts or may be required in cases when a good reference does not exist. If a reference transcriptome exists but is of low quality, a reference\-based transcriptome assembler such as Cufflinks (Trapnell, Williams, Pertea, et al. [2010](#ref-trapnell_transcript_2010)) can be used to improve the transcriptome. In case there is no available transcriptome annotation, a de novo assembler such as Trinity (Haas, Papanicolaou, Yassour, et al. [2013](#ref-haas_novo_2013)) or Trans\-ABySS (Robertson, Schein, Chiu, et al. [2010](#ref-robertson_novo_2010)) can be used to assemble the transcriptome from scratch.
Within R, quantification can be done using the following functions; an illustrative call to the first of these is sketched after the list:
\- `Rsubread::featureCounts`
\- `QuasR::qCount`
\- `GenomicAlignments::summarizeOverlaps`
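As an illustration, a typical gene\-level call to `Rsubread::featureCounts` might look like the sketch below; the BAM and GTF file names are hypothetical placeholders.
```
library(Rsubread)
#count reads overlapping exons, summarized per gene (hypothetical inputs)
fc <- featureCounts(files = c('sample1.bam', 'sample2.bam'),
                    annot.ext = 'genes.gtf',
                    isGTFAnnotationFile = TRUE,
                    GTF.featureType = 'exon',
                    GTF.attrType = 'gene_id',
                    strandSpecific = 0) # 0 denotes an unstranded protocol
head(fc$counts)
```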
### 8\.3\.4 Within sample normalization of the read counts
The most common application after a gene’s expression is quantified (as the number of reads aligned to the gene) is to compare the gene’s expression in different conditions, for instance, in a case\-control setting (e.g. disease versus normal) or in a time\-series (e.g. along different developmental stages). Making such comparisons helps identify the genes that might be responsible for a disease or an impaired developmental trajectory. However, there are multiple caveats that need to be addressed before making a comparison between the read counts of a gene in different conditions (Maza, Frasse, Senin, et al. [2013](#ref-maza_comparison_2013)).
* Library size (i.e. sequencing depth) varies between samples coming from different lanes of the flow cell of the sequencing machine.
* Longer genes will have a higher number of reads.
* Library composition (i.e. relative size of the studied transcriptome) can be different in two different biological conditions.
* GC content biases across different samples may lead to a biased sampling of genes (Risso, Schwartz, Sherlock, et al. [2011](#ref-risso_gc-content_2011)).
* Read coverage of a transcript can be biased and non\-uniformly distributed along the transcript (Mortazavi, Williams, McCue, et al. [2008](#ref-mortazavi_mapping_2008)).
Therefore these factors need to be taken into account before making comparisons.
The most basic normalization approaches address the sequencing depth bias. Such procedures normalize the read counts per gene by dividing each gene’s read count by a sample\-specific scaling factor and multiplying the result by 10^6\. These normalized values are usually referred to as CPM (counts per million reads):
* Total Counts Normalization (divide counts by the **sum** of all counts)
* Upper Quartile Normalization (divide counts by the **upper quartile** value of the counts)
* Median Normalization (divide counts by the **median** of all counts)
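As a toy illustration, the three variants differ only in the sample\-level statistic used as the divisor. The matrix `m` below contains made\-up values; excluding zero counts from the upper\-quartile and median calculations follows the common convention for these methods.

```
# a hypothetical 4-gene, 2-sample count matrix
m <- matrix(c(10, 100, 1000, 0,
              20, 200, 2000, 5),
            ncol = 2,
            dimnames = list(paste0("gene", 1:4), c("s1", "s2")))
# divide each column by a sample-level statistic, then scale to per-million
norm_by <- function(mat, f) apply(mat, 2, function(x) x / f(x) * 10^6)
norm_by(m, sum)                                  # total counts normalization
norm_by(m, function(x) quantile(x[x > 0], 0.75)) # upper quartile normalization
norm_by(m, function(x) median(x[x > 0]))         # median normalization
```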
Popular metrics that improve upon CPM are RPKM/FPKM (reads/fragments per kilobase per million mapped reads) and TPM (transcripts per million). RPKM is obtained by dividing the CPM value by another factor: the length of the gene in kilobases. FPKM is the analogous metric for paired\-end reads, where fragments (read pairs) rather than individual reads are counted. Thus, RPKM/FPKM methods account for, firstly, the **library size**, and secondly, the **gene lengths**.
TPM also controls for both the library size and the gene lengths; however, with the TPM method, the read counts are first normalized by the gene length (per kilobase), and then the gene\-length\-normalized values are divided by the sum of the gene\-length\-normalized values and multiplied by 10^6\. Thus, the sum of normalized values for TPM will always be equal to 10^6 for each library, while the RPKM/FPKM values of a library do not have a fixed sum. Therefore, it is easier to interpret TPM values than RPKM/FPKM values.
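More explicitly, with \\(X\_{i}\\) the raw read count of gene \\(i\\), \\(L\_{i}\\) its length in base pairs, and \\(N\\) the total number of reads in the library, the two schemes can be written as follows (a standard formulation of these metrics):

\\[
\\textrm{RPKM}\_{i} \= \\frac{X\_{i}}{\\frac{L\_{i}}{10^3} \\cdot \\frac{N}{10^6}}, \\qquad \\textrm{TPM}\_{i} \= \\frac{X\_{i}/L\_{i}}{\\sum \_{j} X\_{j}/L\_{j}} \\times 10^6
\\]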
### 8\.3\.5 Computing different normalization schemes in R
Here we will assume that there is an RNA\-seq count table comprising raw counts, meaning the number of reads counted for each gene has not been exposed to any kind of normalization and consists of integers. The rows of the count table correspond to the genes and the columns represent different samples. Here we will use a subset of the RNA\-seq count table from a colorectal cancer study. We have filtered the original count table to keep only protein\-coding genes (to improve the speed of calculation) and selected five metastasized colorectal cancer samples along with five normal colon samples. There is an additional column `width` that contains the length of the corresponding gene in base pairs. The gene lengths are needed to compute RPKM and TPM values. The original count tables can be obtained from the recount2 database (<https://jhubiostatistics.shinyapps.io/recount/>) using the SRA project code *SRP029880*, and the experimental setup along with other accessory information can be found in the NCBI Trace archive under the same project code ([SRP029880](https://trace.ncbi.nlm.nih.gov/Traces/sra/?study=SRP029880)).
```
#colorectal cancer
counts_file <- system.file("extdata/rna-seq/SRP029880.raw_counts.tsv",
package = "compGenomRData")
coldata_file <- system.file("extdata/rna-seq/SRP029880.colData.tsv",
package = "compGenomRData")
counts <- as.matrix(read.table(counts_file, header = T, sep = '\t'))
```
#### 8\.3\.5\.1 Computing CPM
Let’s do a summary of the counts table. Due to space limitations, the summary for only the first three columns is displayed.
```
summary(counts[,1:3])
```
```
## CASE_1 CASE_2 CASE_3
## Min. : 0 Min. : 0 Min. : 0
## 1st Qu.: 5155 1st Qu.: 6464 1st Qu.: 3972
## Median : 80023 Median : 85064 Median : 64145
## Mean : 295932 Mean : 273099 Mean : 263045
## 3rd Qu.: 252164 3rd Qu.: 245484 3rd Qu.: 210788
## Max. :205067466 Max. :105248041 Max. :222511278
```
To compute the CPM values for each sample (excluding the `width` column):
```
cpm <- apply(subset(counts, select = c(-width)), 2,
function(x) x/sum(as.numeric(x)) * 10^6)
```
Check that the sum of each column after normalization equals 10^6 (the `width` column was excluded above).
```
colSums(cpm)
```
```
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3 CTRL_4 CTRL_5
## 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06
```
#### 8\.3\.5\.2 Computing RPKM
```
# create a vector of gene lengths
geneLengths <- as.vector(subset(counts, select = c(width)))
# compute rpkm
rpkm <- apply(X = subset(counts, select = c(-width)),
MARGIN = 2,
FUN = function(x) {
10^9 * x / geneLengths / sum(as.numeric(x))
})
```
Check the column sums of the RPKM matrix. Notice that the sums differ between samples.
```
colSums(rpkm)
```
```
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3
## 158291.0 153324.2 161775.4 173047.4 172761.4 210032.6 301764.2 241418.3
## CTRL_4 CTRL_5
## 291674.5 252005.7
```
#### 8\.3\.5\.3 Computing TPM
```
#find gene length normalized values
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
```
Check the column sums of `tpm`. Notice that each sample now sums to 10^6\.
```
colSums(tpm)
```
```
## CASE_1 CASE_2 CASE_3 CASE_4 CASE_5 CTRL_1 CTRL_2 CTRL_3 CTRL_4 CTRL_5
## 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06 1e+06
```
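Since TPM is simply RPKM rescaled so that each library sums to 10^6, we can sanity\-check our results by deriving TPM from the `rpkm` matrix computed above:

```
# TPM can equivalently be obtained by rescaling each RPKM column
# so that it sums to one million
tpm_from_rpkm <- apply(rpkm, 2, function(x) x / sum(as.numeric(x)) * 10^6)
# should agree with the tpm matrix up to floating-point error
all.equal(tpm, tpm_from_rpkm)
```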
None of these metrics (CPM, RPKM/FPKM, TPM) account for another important confounding factor when comparing expression levels of genes across samples: the **library composition**, which may also be referred to as the **relative size of the compared transcriptomes**. This factor is not dependent on the sequencing technology; it is rather biological. For instance, when comparing transcriptomes of different tissues, there can be sets of genes in one tissue that consume a big chunk of the reads, while in the other tissue they are not expressed at all. This kind of imbalance in the composition of the compared transcriptomes can lead to wrong conclusions about which genes are actually differentially expressed. This consideration is addressed in two popular R packages, `DESeq2` (Love, Huber, and Anders [2014](#ref-love_moderated_2014)) and `edgeR` (Robinson, McCarthy, and Smyth [2010](#ref-robinson_edger:_2010)), each with a different algorithm. `edgeR` uses a normalization procedure called Trimmed Mean of M\-values (TMM). `DESeq2` implements a median\-of\-ratios procedure: for each gene, the count in a sample is divided by the geometric mean of that gene’s counts across all samples; the median of these ratios within a sample is taken as the sample’s size factor, and the raw read counts of the sample are finally divided by this size factor to obtain the normalized counts.
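To make the median\-of\-ratios idea concrete, here is a minimal sketch of the computation on our raw counts; in practice one would rely on `DESeq2::estimateSizeFactors()` rather than this hand\-rolled version.

```
# a minimal sketch of the median-of-ratios normalization idea
cts <- subset(counts, select = c(-width))
# per-gene geometric mean across samples; genes with any zero count get a
# geometric mean of zero and are excluded below
geoMeans <- exp(rowMeans(log(cts)))
# the per-sample median of the count/geometric-mean ratios is the size factor
sizeFactors <- apply(cts, 2, function(x) {
  ratios <- x / geoMeans
  median(ratios[geoMeans > 0])
})
# divide each sample's raw counts by its size factor
normCounts <- sweep(cts, 2, sizeFactors, "/")
```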
### 8\.3\.6 Exploratory analysis of the read count table
A typical quality control step, in this case interrogating the RNA\-seq experiment design, is to measure the similarity of the samples with each other in terms of the quantified expression level profiles across a set of genes. One important check is whether the samples most similar to any given sample are the biological replicates of that sample. This can be computed using unsupervised clustering techniques such as hierarchical clustering and visualized as a heatmap with dendrograms. Another commonly applied technique is Principal Component Analysis (PCA), a dimensionality reduction method whose results are visualized as a two\-dimensional (or in some cases three\-dimensional) scatter plot. In order to find out more about clustering methods and PCA, please refer to Chapter [4](unsupervisedLearning.html#unsupervisedLearning).
#### 8\.3\.6\.1 Clustering
We can combine clustering and visualization of the clustering results by using heatmap functions that are available in a variety of R libraries. The basic R installation comes with the `stats::heatmap` function. However, there are other libraries available in CRAN (e.g. `pheatmap` (Kolde [2019](#ref-pheatmap))) or Bioconductor (e.g. `ComplexHeatmap` (Z. Gu, Eils, and Schlesner [2016](#ref-gu_complex_2016)[a](#ref-gu_complex_2016))) that come with more flexibility and more appealing visualizations.
Here we demonstrate a heatmap using the `pheatmap` package and the previously calculated `tpm` matrix.
As these matrices can be quite large, both computing the clustering and rendering the heatmaps can take a lot of resources and time. Therefore, a quick and informative way to compare samples is to select a subset of genes that are, for instance, most variable across samples, and use that subset to do the clustering and visualization.
Let’s select the top 100 most variable genes among the samples.
```
#compute the variance of each gene across samples
V <- apply(tpm, 1, var)
#sort the results by variance in decreasing order
#and select the top 100 genes
selectedGenes <- names(V[order(V, decreasing = T)][1:100])
```
Now we can quickly produce a heatmap where samples and genes are clustered (see Figure [8\.1](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:tpmhierClust1) ).
```
library(pheatmap)
pheatmap(tpm[selectedGenes,], scale = 'row', show_rownames = FALSE)
```
FIGURE 8\.1: Clustering and visualization of the topmost variable genes as a heatmap.
We can also overlay some annotation tracks to observe the clusters.
Here it is important to observe whether the replicates of the same condition cluster most closely with each other. Overlaying the heatmap with such annotation, displaying sample groups with distinct colors, helps to quickly see if there are samples that don’t cluster as expected (see Figure [8\.2](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:tpmhierclust2) ).
```
colData <- read.table(coldata_file, header = T, sep = '\t',
stringsAsFactors = TRUE)
pheatmap(tpm[selectedGenes,], scale = 'row',
show_rownames = FALSE,
annotation_col = colData)
```
FIGURE 8\.2: Clustering samples as a heatmap with sample annotations.
#### 8\.3\.6\.2 PCA
Let’s make a PCA plot to see the clustering of replicates as a scatter plot in two dimensions (Figure [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:pca1)).
```
library(stats)
library(ggplot2)
library(ggfortify) # provides the autoplot() method for prcomp objects
#transpose the matrix
M <- t(tpm[selectedGenes,])
# transform the counts to log2 scale
M <- log2(M + 1)
#compute PCA
pcaResults <- prcomp(M)
#plot PCA results making use of ggplot2's autoplot function
autoplot(pcaResults, data = colData, colour = 'group')
```
FIGURE 8\.3: PCA plot of samples using TPM counts.
We should observe here whether the samples from the case group (CASE) and the samples from the control group (CTRL) split into two distinct clusters on the scatter plot of the first two principal components.
We can use the `summary` function to summarize the PCA results to observe the contribution of the principal components in the explained variation.
```
summary(pcaResults)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 24.396 2.50514 2.39327 1.93841 1.79193 1.6357 1.46059
## Proportion of Variance 0.957 0.01009 0.00921 0.00604 0.00516 0.0043 0.00343
## Cumulative Proportion 0.957 0.96706 0.97627 0.98231 0.98747 0.9918 0.99520
## PC8 PC9 PC10
## Standard deviation 1.30902 1.12657 4.616e-15
## Proportion of Variance 0.00276 0.00204 0.000e+00
## Cumulative Proportion 0.99796 1.00000 1.000e+00
```
#### 8\.3\.6\.3 Correlation plots
Another complementary approach to see the reproducibility of the experiments is to compute the correlation scores between each pair of samples and draw a correlation plot.
Let’s first compute pairwise correlation scores between every pair of samples.
```
library(stats)
correlationMatrix <- cor(tpm)
```
Let’s have a look at how the correlation matrix looks (Table [8\.1](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:corrplot2)), showing only the first two case and the first two control samples:
TABLE 8\.1: Correlation scores between samples
| | CASE\_1 | CASE\_2 | CTRL\_1 | CTRL\_2 |
| --- | --- | --- | --- | --- |
| CASE\_1 | 1\.0000000 | 0\.9924606 | 0\.9594011 | 0\.9635760 |
| CASE\_2 | 0\.9924606 | 1\.0000000 | 0\.9725646 | 0\.9793835 |
| CTRL\_1 | 0\.9594011 | 0\.9725646 | 1\.0000000 | 0\.9879862 |
| CTRL\_2 | 0\.9635760 | 0\.9793835 | 0\.9879862 | 1\.0000000 |
We can also draw more visually appealing correlation plots using the `corrplot` package (Figure [8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:corrplot3)).
Using the `addrect` argument, we can split clusters into groups and surround them with rectangles.
By setting the `addCoef.col` argument to ‘white’, we can display the correlation coefficients as numbers in white color.
```
library(corrplot)
corrplot(correlationMatrix, order = 'hclust',
addrect = 2, addCoef.col = 'white',
number.cex = 0.7)
```
FIGURE 8\.4: Correlation plot of samples ordered by hierarchical clustering.
Here pairwise correlation levels are visualized as colored circles: blue indicates positive correlation, while red indicates negative correlation.
We could also plot this correlation matrix as a heatmap (Figure [8\.5](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:corrplot4)). As all the samples have a high pairwise
correlation score, using a heatmap instead of a corrplot helps to see the differences between samples more easily. The
`annotation_col` argument helps to display sample annotations and the `cutree_cols` argument is set to 2 to split the clusters into two groups based on the hierarchical clustering results.
```
library(pheatmap)
# split the clusters into two based on the clustering similarity
pheatmap(correlationMatrix,
annotation_col = colData,
cutree_cols = 2)
```
FIGURE 8\.5: Pairwise correlation of samples displayed as a heatmap.
### 8\.3\.7 Differential expression analysis
Differential expression analysis allows us to test tens of thousands of hypotheses (one test for each gene) against the null hypothesis that the activity of the gene stays the same in two different conditions. There are multiple limiting factors that influence the power of detecting genes that have real changes between two biological conditions. Among these are the limited number of biological replicates, non\-normality of the distribution of the read counts, and higher uncertainty of measurements for lowly expressed genes than highly expressed genes (Love, Huber, and Anders [2014](#ref-love_moderated_2014)). Tools such as `edgeR` and `DESeq2` address these limitations using sophisticated statistical models in order to maximize the amount of knowledge that can be extracted from such noisy datasets. In essence, these models assume that for each gene, the read counts are generated by a negative binomial distribution. This is a popular distribution that is used for modeling count data. This distribution can be specified with a mean parameter, \\(m\\), and a dispersion parameter, \\(\\alpha\\). The dispersion parameter \\(\\alpha\\) is directly related to the variance as the variance of this distribution is formulated as: \\(m\+\\alpha m^{2}\\). Therefore, estimating these parameters is crucial for differential expression tests. The methods used in `edgeR` and `DESeq2` use dispersion estimates from other genes with similar counts to precisely estimate the per\-gene dispersion values. With accurate dispersion parameter estimates, one can estimate the variance more precisely, which in turn
improves the result of the differential expression test. Although the statistical models differ, the process here is similar to the moderated t\-test and qualifies as an empirical Bayes method, which we introduced in Chapter [3](stats.html#stats). There, we calculated gene\-wise variability and shrunk each gene\-wise variability towards the median variability of all genes. In the case of RNA\-seq, the dispersion coefficient \\(\\alpha\\) is shrunk towards the dispersion of other genes with similar read counts.
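The mean\-variance relationship of the negative binomial distribution is easy to verify by simulation; note that in R’s `rnbinom()` parameterization, the `size` argument corresponds to \\(1/\\alpha\\).

```
# simulate negative binomial counts with mean m and dispersion alpha;
# the sample variance should be close to m + alpha * m^2
set.seed(42)
m <- 100
alpha <- 0.25
x <- rnbinom(n = 1e5, mu = m, size = 1 / alpha)
mean(x) # close to 100
var(x)  # close to 100 + 0.25 * 100^2 = 2600
```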
Now let us take a closer look at the `DESeq2` workflow and how it calculates differential expression:
1. The read counts are normalized by computing size factors, which addresses the differences not only in the library sizes, but also the library compositions.
2. For each gene, a dispersion estimate is calculated. The dispersion value computed by `DESeq2` is equal to the squared coefficient of variation (the variance divided by the squared mean).
3. A line is fit across the dispersion estimates of all genes computed in step 2 versus the mean normalized counts of the genes.
4. Dispersion values of each gene are shrunk towards the fitted line in step 3\.
5. A Generalized Linear Model (GLM) is fitted, which considers additional confounding variables related to the experimental design, such as sequencing batches, treatment, temperature, patient’s age, or sequencing technology, and uses the negative binomial distribution to fit the count data.
6. For a given contrast (e.g. treatment type: drug\-A versus untreated), a test for differential expression is carried out against the null hypothesis that the log fold change of the normalized counts of the gene in the given pair of groups is exactly zero.
7. It adjusts p\-values for multiple\-testing.
In order to carry out a differential expression analysis using `DESeq2`, three kinds of inputs are necessary:
1. The **read count table**: This table must be raw read counts as integers that are not processed in any form by a normalization technique. The rows represent features (e.g. genes, transcripts, genomic intervals) and columns represent samples.
2. A **colData** table: This table describes the experimental design.
3. A **design formula**: This formula is needed to describe the variable of interest in the analysis (e.g. treatment status) along with (optionally) other covariates (e.g. batch, temperature, sequencing technology).
Let’s define these inputs:
```
#remove the 'width' column
countData <- as.matrix(subset(counts, select = c(-width)))
#define the experimental setup
colData <- read.table(coldata_file, header = T, sep = '\t',
stringsAsFactors = TRUE)
#define the design formula
designFormula <- "~ group"
```
Now, we are ready to run `DESeq2`.
```
library(DESeq2)
library(stats)
#create a DESeq dataset object from the count matrix and the colData
dds <- DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = as.formula(designFormula))
#print dds object to see the contents
print(dds)
```
```
## class: DESeqDataSet
## dim: 19719 10
## metadata(1): version
## assays(1): counts
## rownames(19719): TSPAN6 TNMD ... MYOCOS HSFX3
## rowData names(0):
## colnames(10): CASE_1 CASE_2 ... CTRL_4 CTRL_5
## colData names(2): source_name group
```
The `DESeqDataSet` object contains all the information about the experimental setup, the read counts, and the design formulas. Certain functions can be used to access this information separately: `rownames(dds)` shows which features are used in the study (e.g. genes), `colnames(dds)` displays the studied samples, `counts(dds)` displays the count table, and `colData(dds)` displays the experimental setup.
Remove genes that have almost no information in any of the given samples.
```
#For each gene, we count the total number of reads for that gene in all samples
#and keep only the genes that have more than 1 read in total.
dds <- dds[ rowSums(DESeq2::counts(dds)) > 1, ]
```
Now, we can use the `DESeq()` function of `DESeq2`, which is a wrapper function that implements estimation of size factors to normalize the counts, estimation of dispersion values, and fitting of a generalized linear model based on the experimental design formula. This function returns a `DESeqDataSet` object, which is an updated version of the `dds` variable that we pass to the function as input.
```
dds <- DESeq(dds)
```
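For reference, `DESeq()` is by default equivalent to running its component steps individually, which can be useful when finer control over each step is needed:

```
# the DESeq() wrapper runs the following steps in order
# (shown commented out, as DESeq() has already performed them above):
# dds <- estimateSizeFactors(dds) # normalization (median-of-ratios)
# dds <- estimateDispersions(dds) # dispersion estimation and shrinkage
# dds <- nbinomWaldTest(dds)      # GLM fitting and Wald tests
```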
Now, we can compare and contrast the samples based on different variables of interest. In this case, we currently have only one variable, which is the `group` variable that determines if a sample belongs to the CASE group or the CTRL group.
```
#compute the contrast for the 'group' variable where 'CTRL'
#samples are used as the control group.
DEresults = results(dds, contrast = c("group", 'CASE', 'CTRL'))
#sort results by increasing p-value
DEresults <- DEresults[order(DEresults$pvalue),]
```
Thus we have obtained a table containing the differential expression status of case samples compared to the control samples.
It is important to note that the order of the elements provided in the `contrast` argument determines which group of samples is used as the control. This impacts the way the results are interpreted: for instance, if a gene is found up\-regulated (has a positive log2 fold change), the up\-regulation status is only relative to the factor that is provided as the control. In this case, we used samples from the “CTRL” group as the control and contrasted the samples from the “CASE” group against them. Thus genes with a positive log2 fold change are called up\-regulated in the case samples with respect to the control, while genes with a negative log2 fold change are down\-regulated in the case samples. Whether this deregulation is statistically significant must be assessed using the adjusted p\-values.
Let’s have a look at the contents of the `DEresults` table.
```
#shows a summary of the results
print(DEresults)
```
```
## log2 fold change (MLE): group CASE vs CTRL
## Wald test p-value: group CASE vs CTRL
## DataFrame with 19097 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## CYP2E1 4829889 9.36024 0.215223 43.4909 0.00000e+00
## FCGBP 10349993 -7.57579 0.186433 -40.6355 0.00000e+00
## ASGR2 426422 8.01830 0.216207 37.0863 4.67898e-301
## GCKR 100183 7.82841 0.233376 33.5442 1.09479e-246
## APOA5 438054 10.20248 0.312503 32.6477 8.64906e-234
## ... ... ... ... ... ...
## CCDC195 20.4981 -0.215607 2.89255 -0.0745386 NA
## SPEM3 23.6370 -22.154765 3.02785 -7.3170030 NA
## AC022167.5 21.8451 -2.056240 2.89545 -0.7101618 NA
## BX276092.9 29.9636 0.407326 2.89048 0.1409199 NA
## ETDC 22.5675 -1.795274 2.89421 -0.6202983 NA
## padj
## <numeric>
## CYP2E1 0.00000e+00
## FCGBP 0.00000e+00
## ASGR2 2.87741e-297
## GCKR 5.04945e-243
## APOA5 3.19133e-230
## ... ...
## CCDC195 NA
## SPEM3 NA
## AC022167.5 NA
## BX276092.9 NA
## ETDC NA
```
The first three lines in this output show the contrast and the statistical test that were used to compute these results, along with the dimensions of the resulting table (number of columns and rows). Below these lines is the actual table with 6 columns: `baseMean` represents the average normalized expression of the gene across all considered samples. `log2FoldChange` represents the base\-2 logarithm of the fold change of the normalized expression of the gene in the given contrast. `lfcSE` represents the standard error of log2 fold change estimate, and `stat` is the statistic calculated in the contrast which is translated into a `pvalue` and adjusted for multiple testing in the `padj` column. To find out about the importance of adjusting for multiple testing, see Chapter [3](stats.html#stats).
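For a quick overview, we can count how many genes pass the thresholds that we will also use later in this chapter (adjusted p\-value below 0\.1 and more than a 2\-fold change); this snippet is just a convenience summary.

```
# number of genes with FDR < 0.1 and an absolute log2 fold change above 1
sum(DEresults$padj < 0.1 & abs(DEresults$log2FoldChange) > 1,
    na.rm = TRUE)
```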
#### 8\.3\.7\.1 Diagnostic plots
At this point, before proceeding to do any downstream analysis and jumping to conclusions about the biological insights that are reachable with the experimental data at hand, it is important to do some more diagnostic tests to improve our confidence about the quality of the data and the experimental setup.
##### 8\.3\.7\.1\.1 MA plot
An MA plot is useful to observe if the data normalization worked well (Figure [8\.6](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DEmaplot)). The MA plot is a scatter plot where the x\-axis denotes the average of normalized counts across samples and the y\-axis denotes the log fold change in the given contrast. Most points are expected to be on the horizontal 0 line (most genes are not expected to be differentially expressed).
```
library(DESeq2)
DESeq2::plotMA(object = dds, ylim = c(-5, 5))
```
FIGURE 8\.6: MA plot of differential expression results.
##### 8\.3\.7\.1\.2 P\-value distribution
It is also important to observe the distribution of raw p\-values (Figure [8\.7](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DEpvaldist)). We expect to see a peak around low p\-values and a uniform distribution for p\-values above 0\.1\. Otherwise, the adjustment for multiple testing does not work and the results are not meaningful.
```
library(ggplot2)
ggplot(data = as.data.frame(DEresults), aes(x = pvalue)) +
geom_histogram(bins = 100)
```
FIGURE 8\.7: P\-value distribution of genes before adjusting for multiple testing.
##### 8\.3\.7\.1\.3 PCA plot
A final diagnostic is to check the biological reproducibility of the sample replicates in a PCA plot or a heatmap. To plot the PCA results, we need to extract the normalized counts from the DESeqDataSet object. It is possible to color the points in the scatter plot by the variable of interest, which helps to see if the replicates cluster well (Figure [8\.8](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DEpca)).
```
library(DESeq2)
# extract normalized counts from the DESeqDataSet object
countsNormalized <- DESeq2::counts(dds, normalized = TRUE)
# select top 500 most variable genes
selectedGenes <- names(sort(apply(countsNormalized, 1, var),
decreasing = TRUE)[1:500])
plotPCA(countsNormalized[selectedGenes,],
col = as.numeric(colData$group), adj = 0.5,
xlim = c(-0.5, 0.5), ylim = c(-0.5, 0.6))
```
FIGURE 8\.8: Principal component analysis plot based on the top 500 most variable genes.
Alternatively, the normalized counts can be transformed using the `DESeq2::rlog` function and `DESeq2::plotPCA()` can be readily used to plot the PCA results (Figure [8\.9](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DErldnorm)).
```
rld <- rlog(dds)
DESeq2::plotPCA(rld, ntop = 500, intgroup = 'group') +
ylim(-50, 50) + theme_bw()
```
FIGURE 8\.9: PCA plot of top 500 most variable genes.
##### 8\.3\.7\.1\.4 Relative Log Expression (RLE) plot
A plot similar to the MA plot is the RLE (Relative Log Expression) plot, which is useful in finding out if the data at hand needs normalization (Gandolfo and Speed [2018](#ref-gandolfo_rle_2018)). Sometimes, even datasets normalized using the methods explained above may need further normalization due to unforeseen sources of variation that might stem from the library preparation, the person who carries out the experiment, the date of sequencing, the temperature changes in the laboratory at the time of library preparation, and so on. The RLE plot is a quick diagnostic that can be applied on the raw or normalized count matrices to see if further processing is required.
Let’s do RLE plots on the raw counts and normalized counts using the `EDASeq` package (Risso, Schwartz, Sherlock, et al. [2011](#ref-risso_gc-content_2011)) (see Figure [8\.10](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:DErleplot)).
```
library(EDASeq)
par(mfrow = c(1, 2))
plotRLE(countData, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group),
main = 'Raw Counts')
plotRLE(DESeq2::counts(dds, normalized = TRUE),
outline=FALSE, ylim=c(-4, 4),
col = as.numeric(colData$group),
main = 'Normalized Counts')
```
FIGURE 8\.10: Relative log expression plots based on raw and normalized count matrices.
Here the RLE plot comprises boxplots, where each boxplot represents the distribution of the relative log expression values of the genes in the corresponding sample. Each gene’s expression is divided by the median expression value of that gene across all samples, and this ratio is then log\-transformed, which gives the relative log expression value for a single gene. The RLE values for all the genes of a sample are visualized as a boxplot.
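As a sketch of what `plotRLE()` computes under the hood, the RLE values can also be produced manually; a pseudocount of 1 is added here (an assumption of this sketch) to avoid taking the log of zero.

```
# manual computation of relative log expression values:
# divide each gene (row) by its median across samples, then log2-transform
rleValues <- log2((countData + 1) / apply(countData + 1, 1, median))
# one boxplot per sample, colored by group
boxplot(rleValues, outline = FALSE, las = 2,
        col = as.numeric(colData$group),
        main = 'Manual RLE, raw counts')
```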
Ideally the boxplots are centered around the horizontal zero line and are as tightly distributed as possible (Risso, Ngai, Speed, et al. [2014](#ref-risso_normalization_2014)). From the plots that we have made for the raw and normalized count data, we can observe how the normalized dataset has improved upon the raw count data for all the samples. However, in some cases, it is important to visualize RLE plots in combination with other diagnostic plots such as PCA plots, heatmaps, and correlation plots to see if there is more unwanted variation in the data, which can be further accounted for using packages such as `RUVSeq` (Risso, Ngai, Speed, et al. [2014](#ref-risso_normalization_2014)) and `sva` (Leek, Johnson, Parker, et al. [2012](#ref-leek_sva_2012)). We will cover details about the `RUVSeq` package to account for unwanted sources of noise in RNA\-seq datasets in later sections.
### 8\.3\.8 Functional enrichment analysis
#### 8\.3\.8\.1 GO term analysis
In a typical differential expression analysis, thousands of genes are found differentially expressed between two groups of samples. While prior knowledge of the functions of individual genes can give some clues about what kind of cellular processes have been affected, e.g. by a drug treatment, manually going through a list of thousands of genes would be very cumbersome and not very informative in the end. Therefore, a common approach to this problem is enrichment analysis of functional terms that are associated with the given set of differentially expressed genes more often than expected by chance. The functional terms are usually associated with multiple genes, so genes can be grouped into sets by shared functional terms. However, it is important to have an agreed\-upon controlled vocabulary for the terms used to describe the functions of genes; otherwise, it would be impossible to exchange scientific results globally. That’s why initiatives such as the Gene Ontology Consortium have collated a list of Gene Ontology (GO) terms for each gene. GO term analysis is probably the most common analysis applied after a differential expression analysis, and it helps to quickly find out systematic changes that can describe differences between groups of samples.
In R, one of the simplest ways to do functional enrichment analysis for a set of genes is via the `gProfileR` package.
Let’s select the genes that are significantly differentially expressed between the case and control samples.
Let’s extract genes that have an adjusted p\-value below 0\.1 and that show more than a 2\-fold change (either up or down) in the case samples compared to the controls. We will then feed this gene set into the `gprofiler` function. The top 10 detected GO terms are displayed in Table [8\.2](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:GOanalysistable).
```
library(DESeq2)
library(gProfileR)
library(knitr)
# extract differential expression results
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
#remove genes with NA values
DE <- DEresults[!is.na(DEresults$padj),]
#select genes with adjusted p-values below 0.1
DE <- DE[DE$padj < 0.1,]
#select genes with absolute log2 fold change above 1 (two-fold change)
DE <- DE[abs(DE$log2FoldChange) > 1,]
#get the list of genes of interest
genesOfInterest <- rownames(DE)
#calculate enriched GO terms
goResults <- gprofiler(query = genesOfInterest,
organism = 'hsapiens',
src_filter = 'GO',
hier_filtering = 'moderate')
```
TABLE 8\.2: Top GO terms sorted by p\-value.
| | p.value | term.size | precision | domain | term.name |
| --- | --- | --- | --- | --- | --- |
| 64 | 0 | 2740 | 0\.223 | CC | plasma membrane part |
| 23 | 0 | 1609 | 0\.136 | BP | ion transport |
| 16 | 0 | 3656 | 0\.258 | BP | regulation of biological quality |
| 30 | 0 | 385 | 0\.042 | BP | extracellular structure organization |
| 34 | 0 | 7414 | 0\.452 | BP | multicellular organismal process |
| 78 | 0 | 1069 | 0\.090 | MF | transmembrane transporter activity |
| 47 | 0 | 1073 | 0\.090 | BP | organic acid metabolic process |
| 5 | 0 | 975 | 0\.083 | BP | response to drug |
| 18 | 0 | 1351 | 0\.107 | BP | biological adhesion |
| 31 | 0 | 4760 | 0\.302 | BP | system development |
#### 8\.3\.8\.2 Gene set enrichment analysis
A gene set is a collection of genes with some common property. This shared property among a set of genes could be a GO term, a common biological pathway, a shared interaction partner, or any biologically relevant commonality that is meaningful in the context of the pursued experiment. Gene set enrichment analysis (GSEA) is a valuable exploratory analysis tool that can associate systematic changes to a high\-level function rather than individual genes. Analysis of coordinated changes of expression levels of gene sets can provide complementary benefits on top of per\-gene\-based differential expression analyses. For instance, consider a gene set belonging to a biological pathway where each member of the pathway displays a slight deregulation in a disease sample compared to a normal sample. In such a case, individual genes might not be picked up by the per\-gene\-based differential expression analysis. Thus, the GO/Pathway enrichment on the differentially expressed list of genes would not show an enrichment of this pathway. However, the additive effect of slight changes of the genes could amount to a large effect at the level of the gene set, thus the pathway could be detected as a significant pathway that could explain the mechanistic problems in the disease sample.
We use the Bioconductor package `gage` (Luo, Friedman, Shedden, et al. [2009](#ref-luo_gage:_2009)) to demonstrate how to do GSEA using normalized expression data of the samples as input. Here we are using only two gene sets: one derived from the top GO term discovered in the previous GO analysis, and one that we compile by randomly selecting a list of genes. However, annotated gene sets can be obtained from databases such as MSigDB (Subramanian, Tamayo, Mootha, et al. [2005](#ref-subramanian_gene_2005)), which compiles gene sets from a variety of resources such as KEGG (Kanehisa, Sato, Kawashima, et al. [2016](#ref-kanehisa_kegg_2016)) and REACTOME (Antonio Fabregat, Jupe, Matthews, et al. [2018](#ref-fabregat_reactome_2018)).
```
#Let's define the first gene set as the list of genes from one of the
#significant GO terms found in the GO analysis.
#order GO results by p-value
goResults <- goResults[order(goResults$p.value),]
#restrict to terms that have fewer than 100 genes overlapping with the query
go <- goResults[goResults$overlap.size < 100,]
# use the top term from this table to create a gene set
geneSet1 <- unlist(strsplit(go[1,]$intersection, ','))
#Define another gene set by randomly selecting 25 genes from the counts table
#get normalized counts from the DESeq2 results
normalizedCounts <- DESeq2::counts(dds, normalized = TRUE)
geneSet2 <- sample(rownames(normalizedCounts), 25)
geneSets <- list('top_GO_term' = geneSet1,
                 'random_set' = geneSet2)
```
Using the defined gene sets, we’d like to do a group comparison between the case samples with respect to the control samples.
```
library(gage)
#use the normalized counts to carry out a GSEA.
gseaResults <- gage(exprs = log2(normalizedCounts+1),
ref = match(rownames(colData[colData$group == 'CTRL',]),
colnames(normalizedCounts)),
samp = match(rownames(colData[colData$group == 'CASE',]),
colnames(normalizedCounts)),
gsets = geneSets, compare = 'as.group')
```
We can observe if there is a significant up\-regulation or down\-regulation of the gene set in the case group compared to the controls by accessing `gseaResults$greater` as in Table [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1) or `gseaResults$less` as in Table [8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost2).
TABLE 8\.3: Up\-regulation statistics
| | p.geomean | stat.mean | p.val | q.val | set.size | exp1 |
| --- | --- | --- | --- | --- | --- | --- |
| top\_GO\_term | 0\.0000 | 7\.1994 | 0\.0000 | 0\.0000 | 32 | 0\.0000 |
| random\_set | 0\.5832 | \-0\.2113 | 0\.5832 | 0\.5832 | 25 | 0\.5832 |
TABLE 8\.4: Down\-regulation statistics
| | p.geomean | stat.mean | p.val | q.val | set.size | exp1 |
| --- | --- | --- | --- | --- | --- | --- |
| random\_set | 0\.4168 | \-0\.2113 | 0\.4168 | 0\.8336 | 25 | 0\.4168 |
| top\_GO\_term | 1\.0000 | 7\.1994 | 1\.0000 | 1\.0000 | 32 | 1\.0000 |
We can see that the random gene set shows no significant up\- or down\-regulation (Tables [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1) and ([8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost2)), while the gene set we defined using the top GO term shows a significant up\-regulation (adjusted p\-value \< 0\.0007\) ([8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1)). It is worthwhile to visualize these systematic changes in a heatmap as in Figure [8\.11](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:gseaPost3).
```
library(pheatmap)
# get the expression data for the gene set of interest
M <- normalizedCounts[rownames(normalizedCounts) %in% geneSet1, ]
# log transform the counts for visualization scaling by row helps visualizing
# relative change of expression of a gene in multiple conditions
pheatmap(log2(M+1),
annotation_col = colData,
show_rownames = TRUE,
fontsize_row = 8,
scale = 'row',
cutree_cols = 2,
cutree_rows = 2)
```
FIGURE 8\.11: Heatmap of expression value from the genes with the top GO term.
We can see that almost all genes from this gene set display an increased level of expression in the case samples
compared to the controls.
#### 8\.3\.8\.1 GO term analysis
In a typical differential expression analysis, thousands of genes are found differentially expressed between two groups of samples. While prior knowledge of the functions of individual genes can give some clues about what kind of cellular processes have been affected, e.g. by a drug treatment, manually going through a list of thousands of genes would be very cumbersome and not very informative in the end. A common approach to this problem is therefore an enrichment analysis of functional terms that appear associated with the given set of differentially expressed genes more often than expected by chance. The functional terms are usually associated with multiple genes, so genes can be grouped into sets by shared functional terms. However, it is important to have an agreed\-upon controlled vocabulary for the terms used to describe the functions of genes; otherwise, it would be impossible to exchange scientific results globally. That’s why initiatives such as the Gene Ontology Consortium have collated a list of Gene Ontology (GO) terms for each gene. GO term analysis is probably the most common analysis applied after a differential expression analysis, and it helps to quickly identify systematic changes that describe the differences between groups of samples.
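The statistics behind such enrichment tests are commonly based on the hypergeometric distribution: given a universe of genes, how surprising is the observed overlap between the query gene list and the genes annotated with a term? The following minimal sketch, using made\-up numbers rather than values from this dataset, computes such a p\-value with base R:
```
# hypothetical numbers for illustration (not from this dataset):
# a universe of 20000 genes, a GO term annotated to 400 of them,
# a query list of 500 differentially expressed genes, and an
# observed overlap of 40 genes between the two
universe <- 20000
termSize <- 400
querySize <- 500
overlap <- 40
# probability of observing an overlap at least this large by chance
phyper(overlap - 1, termSize, universe - termSize,
       querySize, lower.tail = FALSE)
```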
In R, one of the simplest ways to do functional enrichment analysis for a set of genes is via the `gProfileR` package.
Let’s select the genes that are significantly differentially expressed between the case and control samples.
Let’s extract genes that have an adjusted p\-value below 0\.1 and that show at least a 2\-fold change (in either direction) in the case samples compared to the controls. We will then feed this gene set into the `gProfileR` function. The top 10 detected GO terms are displayed in Table [8\.2](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:GOanalysistable).
```
library(DESeq2)
library(gProfileR)
library(knitr)
# extract differential expression results
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
#remove genes with NA values
DE <- DEresults[!is.na(DEresults$padj),]
#select genes with adjusted p-values below 0.1
DE <- DE[DE$padj < 0.1,]
#select genes with absolute log2 fold change above 1 (two-fold change)
DE <- DE[abs(DE$log2FoldChange) > 1,]
#get the list of genes of interest
genesOfInterest <- rownames(DE)
#calculate enriched GO terms
goResults <- gprofiler(query = genesOfInterest,
organism = 'hsapiens',
src_filter = 'GO',
hier_filtering = 'moderate')
```
TABLE 8\.2: Top GO terms sorted by p\-value.
| | p.value | term.size | precision | domain | term.name |
| --- | --- | --- | --- | --- | --- |
| 64 | 0 | 2740 | 0\.223 | CC | plasma membrane part |
| 23 | 0 | 1609 | 0\.136 | BP | ion transport |
| 16 | 0 | 3656 | 0\.258 | BP | regulation of biological quality |
| 30 | 0 | 385 | 0\.042 | BP | extracellular structure organization |
| 34 | 0 | 7414 | 0\.452 | BP | multicellular organismal process |
| 78 | 0 | 1069 | 0\.090 | MF | transmembrane transporter activity |
| 47 | 0 | 1073 | 0\.090 | BP | organic acid metabolic process |
| 5 | 0 | 975 | 0\.083 | BP | response to drug |
| 18 | 0 | 1351 | 0\.107 | BP | biological adhesion |
| 31 | 0 | 4760 | 0\.302 | BP | system development |
#### 8\.3\.8\.2 Gene set enrichment analysis
A gene set is a collection of genes with some common property. This shared property could be a GO term, a common biological pathway, a shared interaction partner, or any biologically relevant commonality that is meaningful in the context of the pursued experiment. Gene set enrichment analysis (GSEA) is a valuable exploratory analysis tool that can associate systematic changes with a high\-level function rather than with individual genes. Analysis of coordinated changes in the expression levels of gene sets can provide complementary benefits on top of per\-gene\-based differential expression analyses. For instance, consider a gene set belonging to a biological pathway where each member of the pathway displays a slight deregulation in a disease sample compared to a normal sample. In such a case, the individual genes might not be picked up by the per\-gene\-based differential expression analysis, so a GO/pathway enrichment on the list of differentially expressed genes would not show an enrichment of this pathway. However, the additive effect of the slight changes could amount to a large effect at the level of the gene set, so the pathway may still be detected as significant and help explain the mechanistic problems in the disease sample.
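As a toy illustration of this additive effect, consider the following sketch with simulated data (our own construction, not from this study): every gene in a 50\-gene set is shifted up by only 0\.3 standard deviations in the cases, so few genes pass a per\-gene test, while a single test on the set\-level averages often detects the coordinated shift:
```
set.seed(42)
# simulate a set of 50 genes measured in 5 case and 5 control samples;
# every gene is shifted up by a subtle 0.3 standard deviations in cases
case <- matrix(rnorm(50 * 5, mean = 0.3), nrow = 50)
ctrl <- matrix(rnorm(50 * 5, mean = 0), nrow = 50)
# per-gene t-tests: most genes fail to reach significance
perGeneP <- sapply(1:50, function(i) t.test(case[i, ], ctrl[i, ])$p.value)
sum(perGeneP < 0.05)
# one test on the per-sample averages over the whole set
t.test(colMeans(case), colMeans(ctrl))$p.value
```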
We use the Bioconductor package `gage` (Luo, Friedman, Shedden, et al. [2009](#ref-luo_gage:_2009)) to demonstrate how to do GSEA using the normalized expression data of the samples as input. Here we use only two gene sets: one derived from the top GO term discovered in the previous GO analysis, and one that we compile by randomly selecting a list of genes. In practice, annotated gene sets can be obtained from databases such as MSigDB (Subramanian, Tamayo, Mootha, et al. [2005](#ref-subramanian_gene_2005)), which compiles gene sets from a variety of resources such as KEGG (Kanehisa, Sato, Kawashima, et al. [2016](#ref-kanehisa_kegg_2016)) and REACTOME (Antonio Fabregat, Jupe, Matthews, et al. [2018](#ref-fabregat_reactome_2018)).
```
#Let's define the first gene set as the list of genes from one of the
#significant GO terms found in the GO analysis. First, order GO results by p-value
goResults <- goResults[order(goResults$p.value),]
#restrict to terms that have fewer than 100 genes overlapping with the query
go <- goResults[goResults$overlap.size < 100,]
# use the top term from this table to create a gene set
geneSet1 <- unlist(strsplit(go[1,]$intersection, ','))
#Define another gene set by randomly selecting 25 genes from the counts
#table. First, get the normalized counts from the DESeq2 object
normalizedCounts <- DESeq2::counts(dds, normalized = TRUE)
geneSet2 <- sample(rownames(normalizedCounts), 25)
geneSets <- list('top_GO_term' = geneSet1,
'random_set' = geneSet2)
```
Using the defined gene sets, we’d like to compare the case samples against the control samples as a group.
```
library(gage)
#use the normalized counts to carry out a GSEA.
gseaResults <- gage(exprs = log2(normalizedCounts+1),
ref = match(rownames(colData[colData$group == 'CTRL',]),
colnames(normalizedCounts)),
samp = match(rownames(colData[colData$group == 'CASE',]),
colnames(normalizedCounts)),
gsets = geneSets, compare = 'as.group')
```
We can check whether there is a significant up\- or down\-regulation of the gene sets in the case group compared to the controls by accessing `gseaResults$greater`, as in Table [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1), or `gseaResults$less`, as in Table [8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost2).
TABLE 8\.3: Up\-regulation statistics
| | p.geomean | stat.mean | p.val | q.val | set.size | exp1 |
| --- | --- | --- | --- | --- | --- | --- |
| top\_GO\_term | 0\.0000 | 7\.1994 | 0\.0000 | 0\.0000 | 32 | 0\.0000 |
| random\_set | 0\.5832 | \-0\.2113 | 0\.5832 | 0\.5832 | 25 | 0\.5832 |
TABLE 8\.4: Down\-regulation statistics
| | p.geomean | stat.mean | p.val | q.val | set.size | exp1 |
| --- | --- | --- | --- | --- | --- | --- |
| random\_set | 0\.4168 | \-0\.2113 | 0\.4168 | 0\.8336 | 25 | 0\.4168 |
| top\_GO\_term | 1\.0000 | 7\.1994 | 1\.0000 | 1\.0000 | 32 | 1\.0000 |
We can see that the random gene set shows no significant up\- or down\-regulation (Tables [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1) and [8\.4](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost2)), while the gene set we defined using the top GO term shows a significant up\-regulation (adjusted p\-value \< 0\.0007, Table [8\.3](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#tab:gseaPost1)). It is worthwhile to visualize these systematic changes in a heatmap, as in Figure [8\.11](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:gseaPost3).
```
library(pheatmap)
# get the expression data for the gene set of interest
M <- normalizedCounts[rownames(normalizedCounts) %in% geneSet1, ]
# log transform the counts for visualization; scaling by row helps visualize
# the relative change of expression of a gene across conditions
pheatmap(log2(M+1),
annotation_col = colData,
show_rownames = TRUE,
fontsize_row = 8,
scale = 'row',
cutree_cols = 2,
cutree_rows = 2)
```
FIGURE 8\.11: Heatmap of expression values of the genes in the top GO term.
We can see that almost all genes from this gene set display an increased level of expression in the case samples
compared to the controls.
### 8\.3\.9 Accounting for additional sources of variation
When doing a differential expression analysis in a case\-control setting, the variable of interest, i.e. the variable that explains the separation of the case samples from the controls, is usually the treatment, genotypic differences, a certain phenotype, etc. However, in reality, depending on how the experiment and the sequencing were designed, there may be additional factors that contribute to the variation between the compared samples. Sometimes, such variables are known, for instance the date of sequencing for each sample (batch information), or the temperature under which samples were kept. Such variables are not necessarily biological but rather technical; however, they still impact the measurements obtained from an RNA\-seq experiment and can introduce systematic shifts in those measurements. Here, we will demonstrate two scenarios: first, how to account for such variables using DESeq2 when the possible sources of variation are known; second, how to account for them when all we have is a count table, yet the variable of interest explains only a small proportion of the differences between case and control samples.
#### 8\.3\.9\.1 Accounting for covariates using DESeq2
For demonstration purposes, we will use a subset of the count table obtained for a heart disease study, where there are RNA\-seq samples from subjects with normal and failing hearts. We again use a subset of the samples, focusing on 6 case and 6 control samples, and we only consider protein\-coding genes (for speed concerns).
Let’s import count and colData for this experiment.
```
counts_file <- system.file('extdata/rna-seq/SRP021193.raw_counts.tsv',
package = 'compGenomRData')
colData_file <- system.file('extdata/rna-seq/SRP021193.colData.tsv',
package = 'compGenomRData')
counts <- read.table(counts_file)
colData <- read.table(colData_file, header = T, sep = '\t',
stringsAsFactors = TRUE)
```
Let’s take a look at how the samples cluster by calculating the TPM counts, displayed as a heatmap in Figure [8\.12](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:batcheffects2).
```
library(pheatmap)
#find gene length normalized values
geneLengths <- counts$width
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
selectedGenes <- names(sort(apply(tpm, 1, var),
decreasing = T)[1:100])
pheatmap(tpm[selectedGenes,],
scale = 'row',
annotation_col = colData,
show_rownames = FALSE)
```
FIGURE 8\.12: Visualizing batch effects in an experiment.
Here we can see from the clusters that the dominating variable is the ‘Library Selection’ variable rather than the ‘diagnosis’ variable, which indicates the state of the organ from which the sample was taken. Case and control samples are mixed in both of the two major clusters. Ideally, however, we’d like to see a separation of the case and control samples regardless of the additional covariates. When testing for differential gene expression between conditions, such confounding variables can be accounted for using `DESeq2`. Below is a demonstration of how we instruct `DESeq2` to account for the ‘library selection’ variable:
```
library(DESeq2)
# remove the 'width' column from the counts matrix
countData <- as.matrix(subset(counts, select = c(-width)))
# set up a DESeqDataSet object
dds <- DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = ~ LibrarySelection + group)
```
When constructing the design formula, it is very important to pay attention to the order of the variables: the variable of interest goes last, and we can add as many covariates as needed before it. Please refer to the `DESeq2` vignette if you’d like to learn more about how to construct design formulas.
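As a small illustration of this convention (the `batch` covariate below is hypothetical and not part of this dataset), covariates precede the variable of interest; it can also help to set the reference level of the group factor explicitly, so that fold changes are reported relative to the controls:
```
# a hypothetical design with an additional (made-up) 'batch' covariate
# would be written as: design = ~ batch + LibrarySelection + group
# setting the reference level explicitly makes log2 fold changes
# be reported as CASE vs CTRL by default
dds$group <- relevel(dds$group, ref = 'CTRL')
```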
Now, we can run the differential expression analysis as has been demonstrated previously.
```
# run DESeq
dds <- DESeq(dds)
# extract results
DEresults <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
```
#### 8\.3\.9\.2 Accounting for estimated covariates using RUVSeq
In cases where the sources of potential variation are not known, it is worthwhile to use tools such as `RUVSeq` or `sva` that can estimate the potential sources of variation and clean the count table of them. Later on, the estimated covariates can be integrated into DESeq2’s design formula.
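As a minimal sketch of the `sva` route (assuming a `countData` matrix and a `colData` table with a `group` column, as used in this section), one could estimate surrogate variables as follows and then append them to `colData` and the design formula:
```
library(sva)
# full model with the variable of interest; null model with intercept only
mod  <- model.matrix(~ group, data = colData)
mod0 <- model.matrix(~ 1, data = colData)
# drop genes with very low counts before estimating surrogate variables
dat <- countData[rowMeans(countData) > 1, ]
svseq <- svaseq(dat, mod, mod0)
# the estimated covariates are in svseq$sv and could enter a DESeq2
# design formula such as ~ SV1 + group
```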
Let’s see how to utilize the `RUVSeq` package to first diagnose the problem and then solve it. Here, for demonstration purposes, we’ll use a count table from a lung carcinoma study in which a transcription factor (Ets homologous factor \- EHF) is overexpressed and compared to control samples with baseline EHF expression. Again, we only consider protein\-coding genes and use only five case and five control samples. The original data can be found in the `recount2` database with the accession ‘SRP049988’.
```
counts_file <- system.file('extdata/rna-seq/SRP049988.raw_counts.tsv',
package = 'compGenomRData')
colData_file <- system.file('extdata/rna-seq/SRP049988.colData.tsv',
package = 'compGenomRData')
counts <- read.table(counts_file)
colData <- read.table(colData_file, header = T,
sep = '\t', stringsAsFactors = TRUE)
# simplify condition descriptions
colData$source_name <- ifelse(colData$group == 'CASE',
'EHF_overexpression', 'Empty_Vector')
```
Let’s start by making a heatmap of the samples using the TPM counts (see Figure [8\.13](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvdiagnose1)).
```
#find gene length normalized values
geneLengths <- counts$width
rpk <- apply( subset(counts, select = c(-width)), 2,
function(x) x/(geneLengths/1000))
#normalize by the sample size using rpk values
tpm <- apply(rpk, 2, function(x) x / sum(as.numeric(x)) * 10^6)
selectedGenes <- names(sort(apply(tpm, 1, var),
decreasing = T)[1:100])
pheatmap(tpm[selectedGenes,],
scale = 'row',
annotation_col = colData,
cutree_cols = 2,
show_rownames = FALSE)
```
FIGURE 8\.13: Diagnostic heatmap to observe sample clustering.
We can see that the overall clusters look fine, except that one of the case samples (CASE\_5\) clusters more closely with the control samples than the other case samples. This mis\-clustering could be a result of some batch effect, or any other technical preparation steps. However, the `colData` object doesn’t contain any variables that we can use to pinpoint the exact cause of this. So, let’s use `RUVSeq` to estimate potential covariates to see if the clustering results can be improved.
First, we set up the experiment:
```
library(EDASeq)
# remove 'width' column from counts
countData <- as.matrix(subset(counts, select = c(-width)))
# create a seqExpressionSet object using EDASeq package
set <- newSeqExpressionSet(counts = countData,
phenoData = colData)
```
Next, let’s make diagnostic RLE and PCA plots on the raw count table.
```
# make an RLE plot and a PCA plot on raw count data and color samples by group
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4), col=as.numeric(colData$group))
plotPCA(set, col = as.numeric(colData$group), adj = 0.5,
ylim = c(-0.7, 0.5), xlim = c(-0.5, 0.5))
```
FIGURE 8\.14: Diagnostic RLE and PCA plots based on raw count table.
```
## make RLE and PCA plots on TPM matrix
par(mfrow = c(1,2))
plotRLE(tpm, outline=FALSE, ylim=c(-4, 4), col=as.numeric(colData$group))
plotPCA(tpm, col=as.numeric(colData$group), adj = 0.5,
ylim = c(-0.3, 1), xlim = c(-0.5, 0.5))
```
FIGURE 8\.15: Diagnostic RLE and PCA plots based on TPM normalized count table.
Both RLE and PCA plots look better on normalized data (Figure [8\.15](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvdiagnose2p2)) compared to raw data (Figure [8\.14](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvdiagnose2p1)), but they still suggest the need for further improvement, because the CASE\_5 sample still clusters with the control samples; we haven’t yet accounted for the source of the unwanted variation.
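As an aside, an RLE (relative log expression) plot is essentially a per\-sample boxplot of each gene’s log expression relative to that gene’s median across all samples. A minimal manual sketch (assuming the `countData` matrix from above) would be:
```
# relative log expression computed by hand: subtract each gene's
# median log count (across samples) from its log counts
logCounts <- log2(countData + 1)
rle <- logCounts - apply(logCounts, 1, median)
boxplot(rle, outline = FALSE, las = 2, ylab = 'relative log expression')
```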
#### 8\.3\.9\.3 Removing unwanted variation from the data
`RUVSeq` has three main functions for removing unwanted variation: `RUVg()`, `RUVs()`, and `RUVr()`. Here, we will demonstrate how to use `RUVg` and `RUVs`. `RUVr` will be left as an exercise for the reader.
##### 8\.3\.9\.3\.1 Using RUVg
One way of removing unwanted variation relies on a set of reference genes that are not expected to be affected by the sources of technical variation. One strategy along this line is to use spike\-in genes, which are artificially introduced into the sequencing run (Jiang, Schlesinger, Davis, et al. [2011](#ref-jiang_synthetic_2011)). However, many sequencing datasets don’t have such spike\-in data available. In such cases, an empirical set of control genes can be collected from the expression data by doing a differential expression analysis and identifying genes that are unchanged in the given conditions. These unchanged genes are used to clean the data of systematic shifts in expression due to the unwanted sources of variation. Another strategy is to use a set of house\-keeping genes as negative controls, and use them as a reference to correct the systematic biases in the data. Let’s use a list of \~500 house\-keeping genes compiled here: [https://www.tau.ac.il/\~elieis/HKG/HK\_genes.txt](https://www.tau.ac.il/~elieis/HKG/HK_genes.txt).
```
library(RUVSeq)
#source for house-keeping genes collection:
#https://m.tau.ac.il/~elieis/HKG/HK_genes.txt
HK_genes <- read.table(file = system.file("extdata/rna-seq/HK_genes.txt",
package = 'compGenomRData'),
header = FALSE)
# let's take an intersection of the house-keeping genes with the genes available
# in the count table
house_keeping_genes <- intersect(rownames(set), HK_genes$V1)
```
We will now run `RUVg()` with different numbers of factors of unwanted variation and plot the PCA after removing the unwanted variation. We should be able to see which values of `k` (the number of factors) produce a better separation between the sample groups.
```
# now, we use these genes as the empirical set of genes as input to RUVg.
# we try different values of k and see how the PCA plots look
par(mfrow = c(2, 2))
for(k in 1:4) {
set_g <- RUVg(x = set, cIdx = house_keeping_genes, k = k)
plotPCA(set_g, col=as.numeric(colData$group), cex = 0.9, adj = 0.5,
main = paste0('with RUVg, k = ',k),
ylim = c(-1, 1), xlim = c(-1, 1))
}
```
FIGURE 8\.16: PCA plots on RUVg normalized data with varying number of covariates (k).
Based on the separation of case and control samples in the PCA plots in Figure [8\.16](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvgf1),
we choose k \= 1 and re\-run the `RUVg()` function with the house\-keeping genes to do more diagnostic plots.
```
# choose k = 1
set_g <- RUVg(x = set, cIdx = house_keeping_genes, k = 1)
```
Now let’s do the diagnostics: compare the count matrices with and without RUVg processing, using RLE plots (Figure [8\.17](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvgf2)) and PCA plots (Figure [8\.18](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvgf3)), to see the effect of RUVg on the normalization and the separation of case and control samples.
```
# RLE plots
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group), main = 'without RUVg')
plotRLE(set_g, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group), main = 'with RUVg')
```
FIGURE 8\.17: RLE plots to observe the effect of RUVg.
```
# PCA plots
par(mfrow = c(1,2))
plotPCA(set, col=as.numeric(colData$group), adj = 0.5,
main = 'without RUVg',
ylim = c(-1, 0.5), xlim = c(-0.5, 0.5))
plotPCA(set_g, col=as.numeric(colData$group), adj = 0.5,
main = 'with RUVg',
ylim = c(-1, 0.5), xlim = c(-0.5, 0.5))
```
FIGURE 8\.18: PCA plots to observe the effect of RUVg.
We can observe that using `RUVg()` with house\-keeping genes as the reference has improved the clusters; however, it has not yielded an ideal separation. The effect that causes ‘CASE\_5’ to cluster with the control samples has probably not been completely eliminated.
##### 8\.3\.9\.3\.2 Using RUVs
There is another strategy in `RUVSeq` that works better in the presence of replicates and in the absence of a confounded experimental design: the `RUVs()` function. Let’s see how that performs with this data. This time we don’t use the house\-keeping genes; rather, we use all genes as input to `RUVs()`. This function estimates the correction factors by assuming that the replicates should have constant biological variation; any variation observed among the replicates is treated as unwanted variation.
```
# make a table of sample groups from colData
differences <- makeGroups(colData$group)
## try different numbers of factors of unwanted variation (k = 1 to 4)
## use information from all genes in the expression object
par(mfrow = c(2, 2))
for(k in 1:4) {
set_s <- RUVs(set, unique(rownames(set)),
k=k, differences) #all genes
plotPCA(set_s, col=as.numeric(colData$group),
cex = 0.9, adj = 0.5,
main = paste0('with RUVs, k = ',k),
ylim = c(-1, 1), xlim = c(-0.6, 0.6))
}
```
FIGURE 8\.19: PCA plots on RUVs normalized data with varying number of covariates (k).
Based on the separation of case and control samples in the PCA plots in Figure [8\.19](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvsf1),
we can see that the samples are better separated even at k \= 2 when using `RUVs()`. Here, we re\-run the `RUVs()` function with k \= 2, in order to do more diagnostic plots. We try to pick a value of k that is just large enough to distinguish the samples by the condition of interest. While setting k to higher values could increase the percentage of variation explained by the first principal component to up to 61%, we avoid setting it unnecessarily high, so as not to remove factors that might also correlate with important biological differences between conditions.
```
# choose k = 2
set_s <- RUVs(set, unique(rownames(set)), k = 2, differences)
```
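Rather than reading the explained variance off the plot, it can also be inspected numerically; the following small sketch (our own addition, not part of the original workflow) uses base R’s `prcomp()` on the RUVs\-normalized counts:
```
# PCA on log-transformed RUVs-normalized counts (samples in rows)
pca <- prcomp(t(log2(normCounts(set_s) + 1)))
# proportion of variance explained by the first two components
summary(pca)$importance['Proportion of Variance', 1:2]
```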
Now let’s do the diagnostics again: compare the count matrices with and without RUVs processing, using RLE plots (Figure [8\.20](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvsf2)) and PCA plots (Figure [8\.21](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvsf3)), to see the effect of RUVs on the normalization and the separation of case and control samples.
```
## compare the initial and processed objects
## RLE plots
par(mfrow = c(1,2))
plotRLE(set, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group),
main = 'without RUVs')
plotRLE(set_s, outline=FALSE, ylim=c(-4, 4),
col=as.numeric(colData$group),
main = 'with RUVs')
```
FIGURE 8\.20: RLE plots to observe the effect of RUVs.
```
## PCA plots
par(mfrow = c(1,2))
plotPCA(set, col=as.numeric(colData$group),
main = 'without RUVs', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_s, col=as.numeric(colData$group),
main = 'with RUVs', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
```
FIGURE 8\.21: PCA plots to observe the effect of RUVs.
Let’s compare PCA results from RUVs and RUVg with the initial raw counts matrix. We will simply run the `plotPCA()` function on different normalization schemes. The resulting plots are in Figure [8\.22](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvcompare):
```
par(mfrow = c(1,3))
plotPCA(countData, col=as.numeric(colData$group),
main = 'without RUV - raw counts', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_g, col=as.numeric(colData$group),
main = 'with RUVg', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
plotPCA(set_s, col=as.numeric(colData$group),
main = 'with RUVs', adj = 0.5,
ylim = c(-0.75, 0.75), xlim = c(-0.75, 0.75))
```
FIGURE 8\.22: PCA plots to observe the before/after effect of RUV functions.
It looks like `RUVs()` has performed better than `RUVg()` in this case. So, let’s use count data that is processed by `RUVs()` to re\-do the initial heatmap. The resulting heatmap is in Figure [8\.23](gene-expression-analysis-using-high-throughput-sequencing-technologies.html#fig:ruvpost).
```
library(EDASeq)
library(pheatmap)
# extract normalized counts that are cleared from unwanted variation using RUVs
normCountData <- normCounts(set_s)
selectedGenes <- names(sort(apply(normCountData, 1, var),
decreasing = TRUE))[1:500]
pheatmap(normCountData[selectedGenes,],
annotation_col = colData,
show_rownames = FALSE,
cutree_cols = 2,
scale = 'row')
```
FIGURE 8\.23: Clustering samples using the top 500 most variable genes normalized using RUVs (k \= 2\).
As can be observed, the replicates from different groups cluster much better with each other after processing with `RUVs()`. It is important to note that `RUVs()` uses information from the replicates to shift the expression data, so it would not work in a confounded design where the replicates of the case samples and the replicates of the control samples are sequenced in different batches.
#### 8\.3\.9\.4 Re\-run DESeq2 with the computed covariates
Having computed the sources of variation using `RUVs()`, we can integrate these covariates into `DESeq2` to re\-do the differential expression analysis.
```
library(DESeq2)
#set up DESeqDataSet object
dds <- DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = ~ group)
# filter for low count genes
dds <- dds[rowSums(DESeq2::counts(dds)) > 10]
# insert the covariates W1 and W2 computed using RUVs into DESeqDataSet object
colData(dds) <- cbind(colData(dds),
pData(set_s)[rownames(colData(dds)),
grep('W_[0-9]',
colnames(pData(set_s)))])
# update the design formula for the DESeq analysis (save the variable of
# interest to the last!)
design(dds) <- ~ W_1 + W_2 + group
# repeat the analysis
dds <- DESeq(dds)
# extract deseq results
res <- results(dds, contrast = c('group', 'CASE', 'CTRL'))
res <- res[order(res$padj),]
```
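As a quick sanity check (the 0\.1 cutoff below is merely illustrative), one can count how many genes pass the FDR threshold after accounting for the covariates:
```
# number of genes with adjusted p-value below an illustrative 0.1 cutoff
sum(res$padj < 0.1, na.rm = TRUE)
```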
8\.5 Exercises
--------------
### 8\.5\.1 Exploring the count tables
Here, import an example count table and do some exploration of the expression data.
```
counts_file <- system.file("extdata/rna-seq/SRP029880.raw_counts.tsv",
package = "compGenomRData")
coldata_file <- system.file("extdata/rna-seq/SRP029880.colData.tsv",
package = "compGenomRData")
```
1. Normalize the counts using the TPM approach. \[Difficulty: **Beginner**]
2. Plot a heatmap of the top 500 most variable genes. Compare with the heatmap obtained using the 100 most variable genes. \[Difficulty: **Beginner**]
3. Re\-do the heatmaps setting the `scale` argument to `none`, and `column`. Compare the results with `scale = 'row'`. \[Difficulty: **Beginner**]
4. Draw a correlation plot for the samples depicting the sample differences as ‘ellipses’, drawing only the upper end of the matrix, and order samples by hierarchical clustering results based on `average` linkage clustering method. \[Difficulty: **Beginner**]
5. How else could the count matrix be subsetted to obtain quick and accurate clusters? Try selecting the top 100 genes that have the highest total expression in all samples and re\-draw the cluster heatmaps and PCA plots. \[Difficulty: **Intermediate**]
6. Add an additional column to the annotation data.frame object to annotate the samples and use the updated annotation data.frame to plot the heatmaps. (Hint: Assign different batch values to CASE and CTRL samples). Make a PCA plot and color samples by the added variable (e.g. batch). \[Difficulty: **Intermediate**]
7. Try making the heatmaps using all the genes in the count table, rather than sub\-selecting. \[Difficulty: **Advanced**]
8. Use the [`Rtsne` package](https://cran.r-project.org/web/packages/Rtsne/Rtsne.pdf) to draw a t\-SNE plot of the expression values. Color the points by sample group. Compare the results with the PCA plots. \[Difficulty: **Advanced**]
### 8\.5\.2 Differential expression analysis
Firstly, carry out a differential expression analysis starting from raw counts.
Use the following datasets:
```
counts_file <- system.file("extdata/rna-seq/SRP029880.raw_counts.tsv",
package = "compGenomRData")
coldata_file <- system.file("extdata/rna-seq/SRP029880.colData.tsv",
package = "compGenomRData")
```
* Import the read counts and colData tables.
* Set up a DESeqDataSet object.
* Filter out genes with low counts.
* Run DESeq2 contrasting the `CASE` samples with the `CONTROL` samples.
Now, you are ready to do the following exercises:
1. Make a volcano plot using the differential expression analysis results. (Hint: x\-axis denotes the log2FoldChange and the y\-axis represents the \-log10(pvalue)). \[Difficulty: **Beginner**]
2. Use `DESeq2::plotDispEsts` to make a dispersion plot and find out the meaning of this plot. (Hint: Type `?DESeq2::plotDispEsts`) \[Difficulty: **Beginner**]
3. Explore `lfcThreshold` argument of the `DESeq2::results` function. What is its default value? What does it mean to change the default value to, for instance, `1`? \[Difficulty: **Intermediate**]
4. What is independent filtering? What happens if we don’t use it? Google `independent filtering statquest` and watch the online video about independent filtering. \[Difficulty: **Intermediate**]
5. Re\-do the differential expression analysis using the `edgeR` package. Find out how much DESeq2 and edgeR agree on the list of differentially expressed genes. \[Difficulty: **Advanced**]
6. Use the `compcodeR` package to run the differential expression analysis using at least three different tools and compare and contrast the results following the `compcodeR` vignette. \[Difficulty: **Advanced**]
### 8\.5\.3 Functional enrichment analysis
1. Re\-run gProfileR, this time using pathway annotations such as KEGG, REACTOME, and protein complex databases such as CORUM, in addition to the GO terms. Sort the resulting tables by columns `precision` and/or `recall`. How do the top GO terms change when sorted for `precision`, `recall`, or `p.value`? \[Difficulty: **Beginner**]
2. Repeat the gene set enrichment analysis by trying different options for the `compare` argument of the `GAGE::gage` function. How do the results differ? \[Difficulty: **Beginner**]
3. Make a scatter plot of GO term sizes and obtained p\-values by setting the `gProfiler::gprofiler` argument `significant = FALSE`. Is there a correlation of term sizes and p\-values? (Hint: Take \-log10 of p\-values). If so, how can this bias be mitigated? \[Difficulty: **Intermediate**]
4. Do a gene\-set enrichment analysis using gene sets from top 10 GO terms. \[Difficulty: **Intermediate**]
5. What are the other available R packages that can carry out gene set enrichment analysis for RNA\-seq datasets? \[Difficulty: **Intermediate**]
6. Use the topGO package (<https://bioconductor.org/packages/release/bioc/html/topGO.html>) to re\-do the GO term analysis. Compare and contrast the results with what has been obtained using the `gProfileR` package. Which tool is faster, `gProfileR` or topGO? Why? \[Difficulty: **Advanced**]
7. Given a gene set annotated for human, how can it be utilized to work on *C. elegans* data? (Hint: See `biomaRt::getLDS`). \[Difficulty: **Advanced**]
8. Import curated pathway gene sets with Entrez identifiers from the [MSIGDB database](http://software.broadinstitute.org/gsea/msigdb/collections.jsp) and re\-do the GSEA for all curated gene sets. \[Difficulty: **Advanced**]
### 8\.5\.4 Removing unwanted variation from the expression data
For the exercises below, use the datasets at:
```
counts_file <- system.file('extdata/rna-seq/SRP049988.raw_counts.tsv',
package = 'compGenomRData')
colData_file <- system.file('extdata/rna-seq/SRP049988.colData.tsv',
package = 'compGenomRData')
```
1. Run RUVSeq using multiple values of `k` from 1 to 10 and compare and contrast the PCA plots obtained from the normalized counts of each RUVSeq run. \[Difficulty: **Beginner**]
2. Re\-run RUVSeq using the `RUVr()` function. Compare PCA plots from `RUVs`, `RUVg` and `RUVr` using the same `k` values and find out which one performs the best. \[Difficulty: **Intermediate**]
3. Do the necessary diagnostic plots using the differential expression results from the EHF count table. \[Difficulty: **Intermediate**]
4. Use the `sva` package to discover sources of unwanted variation and re\-do the differential expression analysis using variables from the output of `sva` and compare the results with `DESeq2` results using `RUVSeq` corrected normalization counts. \[Difficulty: **Advanced**]
9\.5 ChIP quality control
-------------------------
While the goal of read quality assessment is to check whether the sequencing
produced a high enough number of high\-quality reads,
the goal of ChIP quality control is to ascertain whether the chromatin immunoprecipitation
enrichment was successful.
This is a crucial step in the ChIP\-seq analysis because it can help us
identify low\-quality ChIP samples, and give information about which experimental
steps went wrong.
There are four steps in ChIP quality control:
1. Sample correlation clustering: Clustering of the pair\-wise correlations between
genome\-wide signal profiles.
2. Data visualization in a genomic browser.
3. Average fragment length determination: Determining whether the ChIP was enriched for fragments of a certain length.
4. Visualization of GC bias. Here we will plot the ChIP enrichment versus the
average GC content in the corresponding genomic bin.
### 9\.5\.1 The data
Here we will familiarize ourselves with the datasets that will be used in the
chapter.
Experimental data was downloaded from the public ENCODE (ENCODE Project Consortium [2012](#ref-ENCODE_Project_Consortium2012-wf))
database of ChIP\-seq experiments.
The experiments were performed on a lymphoblastoid cell line, GM12878, and mapped
to the GRCh38 (hg38\) version of the human genome, using the standard ENCODE
ChIP\-seq pipeline. In this chapter, due to compute time considerations, we have taken a subset of the data which corresponds to the human chromosome 21 (chr21\).
The data sets are located in the `compGenomRData` package.
The location of the data sets can be accessed using the `system.file()` command,
in the following way:
```
data_path = system.file('extdata/chip-seq',package='compGenomRData')
```
The available datasets can be listed using the `list.files()` function:
```
chip_files = list.files(data_path, full.names=TRUE)
```
The dataset consists of the following ChIP experiments:
1. **Transcription factors**: CTCF, SMC3, ZNF143, PolII
(RNA polymerase 2\)
2. **Histone modifications**: H3K4me3, H3K36me3, H3K27ac, H3K27me3
3. Various input samples
### 9\.5\.2 Sample clustering
Clustering is an ordering procedure which groups samples by similarity;
the more similar two samples are, the closer together they are placed.
The details of clustering methodologies are described in Chapter [4](unsupervisedLearning.html#unsupervisedLearning).
Clustering of ChIP signal profiles is used for two purposes:
The first one is to ascertain whether there is concordance between
biological replicates; biological replicates should show greater similarity
than ChIPs of different proteins. The second purpose is to see whether our experiments conform to known prior knowledge. For example, we would expect to see greater similarity between proteins
which belong to the same protein complex.
To quantify the ChIP signal we will firstly construct 1\-kilobase\-wide tiling
windows over the genome, and subsequently count the number of reads
in each window, for each experiment. We will then normalize the counts, to
account for the different total number of reads in each experiment, and finally
calculate the correlation between all pairs of samples.
Although this procedure is a crude way of quantifying the data, it provides sufficient
information to ascertain the data quality.
Using the `GenomeInfoDb` package we will first fetch the chromosome lengths corresponding
to the hg38 version of the human genome, and keep only the entry for human
chromosome 21\.
```
# load the chromosome info package
library(GenomeInfoDb)
# fetch the chromosome lengths for the human genome
hg_chrs = getChromInfoFromUCSC('hg38')
# find the length of chromosome 21
hg_chrs = subset(hg_chrs, grepl('chr21$',chrom))
```
The `tileGenome()` function from the `GenomicRanges` package constructs equally sized
windows over the genome of interest.
The function takes two arguments:
1. A vector of chromosome lengths
2. Window size
Firstly, we convert the chromosome lengths *data.frame* into a *named vector*.
```
# downloaded hg_chrs is a data.frame object,
# we need to convert the data.frame into a named vector
seqlengths = with(hg_chrs, setNames(size, chrom))
```
Then we construct the windows.
```
# load the genomic ranges package
library(GenomicRanges)
# tileGenome function returns a list of GRanges of a given width,
# spanning the whole chromosome
tilling_window = tileGenome(seqlengths, tilewidth=1000)
# unlist converts the list to one GRanges object
tilling_window = unlist(tilling_window)
```
```
## GRanges object with 46710 ranges and 0 metadata columns:
## seqnames ranges strand
## <Rle> <IRanges> <Rle>
## [1] chr21 1-1000 *
## [2] chr21 1001-2000 *
## [3] chr21 2001-3000 *
## [4] chr21 3001-4000 *
## [5] chr21 4001-5000 *
## ... ... ... ...
## [46706] chr21 46704985-46705984 *
## [46707] chr21 46705985-46706984 *
## [46708] chr21 46706985-46707984 *
## [46709] chr21 46707985-46708984 *
## [46710] chr21 46708985-46709983 *
## -------
## seqinfo: 1 sequence from an unspecified genome
```
We will use the `summarizeOverlaps()` function from the `GenomicAlignments` package
to count the number of reads in each genomic window.
The function will do the counting automatically for all our experiments.
The `summarizeOverlaps()` function returns a `SummarizedExperiment` object.
The object contains the counts, genomic ranges which were used for the quantification,
and the sample descriptions.
```
# load GenomicAlignments
library(GenomicAlignments)
# fetch bam files from the data folder
bam_files = list.files(
path = data_path,
full.names = TRUE,
pattern = 'bam$'
)
# use summarizeOverlaps to count the reads
so = summarizeOverlaps(tilling_window, bam_files)
# extract the counts from the SummarizedExperiment
counts = assays(so)[[1]]
```
Different ChIP experiments were sequenced to different depths; each experiment
contains a different number of reads. To remove the effect of the experimental
depth on the quantification, the samples need to be normalized.
The standard normalization procedure for ChIP data is to divide the
counts in each tiling window by the total number of sequenced reads, and
multiply by a constant factor (to avoid extremely small numbers).
This normalization procedure is called **cpm** \- counts per million. For example, a window with 50 reads in a library of 10 million reads has a cpm of \\(50 \\times 10^{6}/10^{7} \= 5\\).
\\\[
CPM \= counts \\times \\frac{10^{6}}{\\text{total number of reads}}
\\]
```
# calculate the cpm from the counts matrix
# the following command works because
# R calculates everything by columns
cpm = t(t(counts)*(1000000/colSums(counts)))
```
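The transpose trick works because R recycles the `colSums(counts)` vector along the columns of the transposed matrix. An equivalent and arguably more explicit formulation (a sketch performing the same computation) uses the base R `sweep()` function:
```
# divide each column by its total read count, then scale to cpm
cpm = sweep(counts, 2, colSums(counts), FUN = '/') * 1000000
```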
We remove all tiles which do not have overlapping reads.
Tiles with 0 counts do not provide any additional discriminatory power; rather,
they introduce artificial similarity between the samples (i.e. samples with
only a handful of bound regions will share many tiles with \\(0\\) counts,
even though they need not share any enriched tiles).
```
# remove all tiles which do not contain reads
cpm = cpm[rowSums(cpm) > 0,]
```
We use the `sub()` function to shorten the column names of the cpm matrix.
```
# change the formatting of the column names
# remove the .chr21.bam suffix
colnames(cpm) = sub('.chr21.bam','', colnames(cpm))
# remove the GM12878_hg38 prefix
colnames(cpm) = sub('GM12878_hg38_','',colnames(cpm))
```
Finally, we calculate the pairwise Pearson correlation coefficient using the
`cor()` function.
The function takes as input a region\-by\-sample count matrix, and returns
a sample\-by\-sample matrix, where each entry contains the correlation coefficient
between two samples.
```
# calculates the pearson correlation coefficient between the samples
correlation_matrix = cor(cpm, method='pearson')
```
The `Heatmap()` function from the `ComplexHeatmap` (Z. Gu, Eils, and Schlesner [2016](#ref-Gu_2016)[b](#ref-Gu_2016)) package is used to visualize
the correlation coefficient.
The function automatically performs hierarchical clustering \- it groups the
samples which have the highest pairwise correlation.
The diagonal represents the correlation of each sample with itself.
```
# load ComplexHeatmap
library(ComplexHeatmap)
# load the circlize package, and define
# the color palette which will be used in the heatmap
library(circlize)
heatmap_col = circlize::colorRamp2(
breaks = c(-1,0,1),
colors = c('blue','white','red')
)
# plot the heatmap using the Heatmap function
Heatmap(
matrix = correlation_matrix,
col = heatmap_col
)
```
FIGURE 9\.2: Heatmap showing ChIP\-seq sample similarity using the Pearson correlation coefficient.
In Figure [9\.2](chip-quality-control.html#fig:sample-clustering-complex-heatmap) we can see a
perfect example of why quality control is important.
**CTCF** is a zinc finger protein which co\-localizes with the Cohesin complex.
**SMC3** is a subunit of the Cohesin complex, and we would therefore expect to
see that the **SMC3** signal profile has high correlation with the **CTCF** signal profile.
This is true for the second biological replicate of **SMC3**, while the first
replicate (SMC3\_r1\) clusters with the input samples. This indicates that the
sample likely has low enrichment.
We can see that the ChIP and Input samples form separate clusters. This implies
that the ChIP samples have an enrichment of fragments.
Additionally, we see that the biological replicates of other experiments
cluster together.
### 9\.5\.3 Visualization in the genome browser
One of the first steps in any ChIP\-seq analysis should be looking at the
data. By looking at the data we get an intuition about the quality of the
experiment, and start seeing preliminary correlations between the samples, which
we can use to guide our analysis.
This can be achieved either by plotting signal profiles around
regions of interest, or by loading data into a genome browser
(such as IGV, or UCSC genome browsers).
Genome browsers are standalone applications which represent the genome
as a one\-dimensional (1D) coordinate system. The browsers enable
simultaneous visualization and comparison of multiple types of annotations and experimental data.
Genome browsers can visualize most of the commonly used genomic data formats:
BAM, BED, wig, and bigWig.
The easiest way to access our data would be to load the .bam files into the browser. This will show us the sequence and position of every mapped read. If we want to view multiple samples in parallel, loading every mapped read can be restrictive. It takes up a lot of computational resources, and the amount of information
makes the visual comparison hard to do.
We would like to convert our data so that we get a compressed visualization,
which would show us the main properties of our samples, namely, the quality and
the location of the enrichment.
This is achieved by summarizing the read enrichment into a signal profile \-
the whole experiment is converted into a numeric vector \- a coverage vector.
The vector contains information on how many reads overlap each position
in the genome.
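As a tiny illustration of what a coverage vector contains (a toy sketch, not the actual data), consider the coverage of two overlapping ranges:
```
# toy example: coverage of two overlapping 4-bp ranges
library(GenomicRanges)
toy = GRanges('chr1', IRanges(start = c(1, 3), width = 4))
coverage(toy)
```
```
## RleList of length 1
## $chr1
## integer-Rle of length 6 with 3 runs
##   Lengths: 2 2 2
##   Values : 1 2 1
```
Positions 1\-2 are covered by one range, positions 3\-4 by both, and positions 5\-6 by one again; the `Rle` encoding stores these values compactly as runs.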
We will proceed as follows: Firstly, we will import a **.bam** file into **R**. Then we will calculate the signal profile (construct the coverage vector), and finally, we export the vector as a **.bigWig** file.
First we select one of the ChIP samples.
```
# list the bam files in the directory
# the '$' anchor in the regular expression matches the end of the
# file name, so bam.bai index files are omitted
bam_files = list.files(
path = data_path,
full.names = TRUE,
pattern = 'bam$'
)
# select the first bam file
chip_file = bam_files[1]
```
We will use the `readGAlignments()` function from the `GenomicAlignments`
package to load the reads into **R**, and then the `granges()` function
to convert them into a `GRanges` object.
```
# load the genomic alignments package
library(GenomicAlignments)
# read the ChIP reads into R
reads = readGAlignments(chip_file)
# the reads need to be converted to a granges object
reads = granges(reads)
```
Because DNA fragments are being sequenced from their ends (both the 3’ and 5’ end),
the read enrichment does not correspond to the exact location of the bound protein.
Rather, the reads tend to form clusters of enrichment upstream and downstream of the true binding location.
To correct for this, we use a small hack. Before we create the signal profiles,
we will extend the reads towards their **3’** end. The reads are extended to
form fragments of 200 base pairs. This is an empirical estimate, which
corresponds to the average fragment size of the Illumina sample preparation kit.
The exact average fragment size will differ from 200 base pairs, but as long as the
deviation is not large (i.e. not more than 200 base pairs),
it will not affect the visual properties of our samples.
Read extension is done using the `resize()` function. The function
takes two arguments:
1. `width`: resulting fragment width
2. `fix`: which position of the fragment should not be changed (if `fix` is set to start,
the reads will be extended towards the **3’** end. If `fix` is set to end, they will
be extended towards the **5’** end)
```
# extends the reads towards the 3' end
reads = resize(reads, width=200, fix='start')
# keeps only chromosome 21
reads = keepSeqlevels(reads, 'chr21', pruning.mode='coarse')
```
Conversion of reads into coverage vectors is done with the `coverage()`
function.
In addition to the reads, the function takes the `width` argument, which corresponds to the chromosome sizes.
For this purpose we can use the previously created `seqlengths` variable.
The `coverage()` function converts the reads into a compressed `Rle` object. We have introduced these workflows in Chapter [6](genomicIntervals.html#genomicIntervals).
```
# convert the reads into a signal profile
cov = coverage(reads, width = seqlengths)
```
```
## RleList of length 1
## $chr21
## integer-Rle of length 46709983 with 199419 runs
## Lengths: 5038228 200 63546 20 ... 200 1203 200 27856
## Values : 0 1 0 1 ... 1 0 1 0
```
The name of the output file is created by changing the file suffix from **.bam**
to **.bigWig**.
```
# change the file extension from .bam to .bigWig
output_file = sub('.bam','.bigWig', chip_file)
```
Now we can use the `export.bw()` function from the `rtracklayer` package to
write the bigWig file.
```
# load the rtracklayer package
library(rtracklayer)
# export the bigWig output file
export.bw(cov, output_file)
```
#### 9\.5\.3\.1 Visualization of track data using Gviz
We can create genome browser\-like visualizations using the `Gviz` package,
which was introduced in Chapter [6](genomicIntervals.html#genomicIntervals).
`Gviz` is a tool which enables extensively customized visualization of
genomics experiments. The basic usage principle is to define tracks, where each track can represent
genomic annotation, or a signal profile; subsequently we define the order
of the tracks and plot them.
Here we will define two tracks, a genome axis, which will show the position
along the human chromosome 21; and a signal track from our CTCF experiment.
```
library(Gviz)
# define the genome axis track
axis = GenomeAxisTrack(
range = GRanges('chr21', IRanges(1, width=seqlengths))
)
# convert the signal into genomic ranges and define the signal track
gcov = as(cov, 'GRanges')
dtrack = DataTrack(gcov, name = "CTCF", type='l')
# define the track ordering
track_list = list(axis,dtrack)
```
Tracks are plotted with the `plotTracks()` function. The `sizes` argument must have the same length as the track list, and defines the
relative size of each track.
Figure [9\.3](chip-quality-control.html#fig:genome-browser-gviz-show) shows the output of the
`plotTracks()` function.
```
# plot the list of browser tracks
# sizes argument defines the relative sizes of tracks
# background title defines the color for the track labels
plotTracks(
trackList = track_list,
sizes = c(.1,1),
background.title = "black"
)
```
FIGURE 9\.3: ChIP\-seq signal visualized as a browser track using Gviz.
### 9\.5\.4 Plus and minus strand cross\-correlation
Cross\-correlation between plus and minus strands is a method
which quantifies whether the DNA library was enriched for fragments of
a certain length.
Similarity between the plus and minus strands is defined as the correlation of
the signal profiles for the reads that map to the **\+** and the **\-** strands.
The distribution of reads is shown in Figure [9\.4](chip-quality-control.html#fig:Figure-BrowserScreenshot).
FIGURE 9\.4: Browser screenshot of aligned reads for one ChIP and one control sample. ChIP samples have an asymmetric distribution of reads; reads mapping to the \+ strand are located on the left side of the peak, while the reads mapping to the \- strand are found on the right side of the peak.
Due to the sequencing properties, reads which correspond to
the **5’** fragment ends will map to the opposite strand from the reads
coming from the **3’** ends. Most often (depending on the sequencing protocol)
the reads from the **5’** fragment ends map to the **\+** strand,
while the reads from the **3’** ends map to the **\-** strand.
We calculate the cross\-correlation by shifting the signal on the **\+** strand,
by a pre\-defined amount (i.e. shift by 1 \- 400 nucleotides), and calculating,
for each shift, the correlation between the **\+**, and the **\-** strands.
Subsequently we plot the correlation versus shift, and locate the maximum value.
The maximum value should correspond to the average DNA fragment length which
was present in the library. This value tells us whether the ChIP enriched for
fragments of certain length (i.e. whether the ChIP was successful).
Due to the size of genomic data, it might be computationally prohibitive to
calculate the Pearson correlation between whole genome (or even whole chromosome)
signal profiles.
To get around this problem, we will resort to a trick; we will disregard the dynamic
range of the signal profiles, and only keep the information of which
genomic bases contained the ends of the fragments.
This is done by calculating the coverage vector of the read starting position (separately
for each strand), and converting the coverage vector into a Boolean vector.
The Boolean vector contains the information of which genomic positions
contained the DNA fragment ends.
Similarity between two Boolean vectors can be promptly computed using the Jaccard index.
The Jaccard index is defined as the size of the intersection of two sets,
divided by the size of their union, as shown in Figure [9\.5](chip-quality-control.html#fig:FigureJaccardSimilarity).
FIGURE 9\.5: Jaccard similarity is defined as the ratio of the intersection and union of two sets.
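To make the definition concrete, here is a toy computation (a sketch, not part of the analysis) of the Jaccard index for two short Boolean vectors, mirroring the `jaccard()` helper defined below:
```
# toy example: Jaccard index of two Boolean vectors
x = c(TRUE, TRUE, FALSE, FALSE, TRUE)
y = c(TRUE, FALSE, FALSE, TRUE, TRUE)
# the intersection has 2 elements (positions 1 and 5),
# the union has 4 elements (positions 1, 2, 4 and 5)
sum(x & y) / sum(x | y)
```
```
## [1] 0.5
```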
Firstly, we load the reads for one of the CTCF ChIP experiments.
Then we create signal profiles, separately for reads on the **\+** and **\-**
strands.
Unlike before, we do not extend the reads to the average expected fragment
length (200 base pairs); we keep only the starting position of each read.
```
# load the reads
reads = readGAlignments(chip_file)
reads = granges(reads)
# keep only the starting position of each read
reads = resize(reads, width=1, fix='start')
reads = keepSeqlevels(reads, 'chr21', pruning.mode='coarse')
```
Now we can calculate the coverage vector of the read starting position.
The coverage vector is then automatically converted into a Boolean vector by
asking which genomic positions have \\(coverage \> 0\\).
```
# calculate the coverage profile for plus and minus strand
reads = split(reads, strand(reads))
# coverage(x, width = seqlengths)[[1]] > 0
# calculates the coverage and converts
# the coverage vector into a boolean
cov = lapply(reads, function(x){
coverage(x, width = seqlengths)[[1]] > 0
})
cov = lapply(cov, as.vector)
```
We will now shift the coverage vector from the plus strand by \\(1\\) to \\(400\\) base pairs, and for each shift we will calculate the Jaccard index between the vectors
on the plus and minus strand.
```
# defines the shift range
wsize = 1:400
# defines the jaccard similarity
jaccard = function(x,y)sum((x & y)) / sum((x | y))
# shifts the + vector by 1 - 400 nucleotides and
# calculates the Jaccard similarity for each shift
cc = shiftApply(
SHIFT = wsize,
X = cov[['+']],
Y = cov[['-']],
FUN = jaccard
)
# converts the results into a data frame
cc = data.frame(fragment_size = wsize, cross_correlation = cc)
```
We can finally plot the shift in base pairs versus the correlation coefficient:
```
library(ggplot2)
ggplot(data = cc, aes(fragment_size, cross_correlation)) +
geom_point() +
geom_vline(xintercept = which.max(cc$cross_correlation),
size=2, color='red', linetype=2) +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('Shift in base pairs') +
ylab('Jaccard similarity')
```
FIGURE 9\.6: Jaccard similarity between the ChIP\-seq signal on the \+ and \- strands, as a function of the shift. The peak of the distribution designates the average fragment size.
In Figure [9\.6](chip-quality-control.html#fig:correlation-plot), the shift in base pairs
which corresponds to the maximum value of the correlation coefficient
gives us an approximation of the expected average DNA fragment length.
Because the maximum is not at a shift of 0, and the curve does not decrease
monotonically, we can conclude that there was substantial enrichment of fragments of a certain length in the ChIP sample.
### 9\.5\.5 GC bias quantification
The PCR amplification procedure can cause a significant bias in the ChIP
experiments. The bias can be influenced by the DNA fragment size distribution,
sequence composition, hexamer distribution of PCR primers, and the number of cycles used
for the amplification.
One way to determine whether some of the samples have significantly
different sequence composition is to look at whether regions with
differing GC composition were equally enriched in all experiments.
We will do the following: Firstly we will calculate the GC content of each
of the tiling windows, and then we will compare the GC content with the corresponding
cpm (counts per million reads) value, for each tile.
```
# fetches the chromosome lengths and constructs the tiles
library(GenomeInfoDb)
library(GenomicRanges)
hg_chrs = getChromInfoFromUCSC('hg38')
hg_chrs = subset(hg_chrs, grepl('chr21$',chrom))
seqlengths = with(hg_chrs, setNames(size, chrom))
# tileGenome produces a list per chromosome
# unlist combines the elements of the list
# into one GRanges object
tilling_window = unlist(tileGenome(
seqlengths = seqlengths,
tilewidth = 1000
))
```
We will extract the sequence information from the `BSgenome.Hsapiens.UCSC.hg38`
package. `BSgenome` packages are generic Bioconductor containers for genomic sequences.
Sequences are extracted from the `BSgenome` container using the `getSeq()` function.
The `getSeq()` function takes as input the genome object, and the ranges with the
regions of interest; in our case, the tiling windows.
The function returns a `DNAStringSet` object.
```
# loads the human genome sequence
library(BSgenome.Hsapiens.UCSC.hg38)
# extracts the sequence from the human genome
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38, tilling_window)
```
To calculate the GC content, we will use the `oligonucleotideFrequency()` function on the
`DNAString` object. By setting the width parameter to 2 we will
calculate the **dinucleotide** frequency.
Each row in the resulting table will contain the number of all possible
dinucleotides observed in each tiling window.
Because we have tiling windows of the same length, we do not
necessarily need to normalize the counts by the window length.
If the windows have different lengths (e.g. at ChIP\-seq peaks), then normalization is a prerequisite.
```
# calculates the frequency of all possible dimers
# in our sequence set
nuc = oligonucleotideFrequency(seq, width = 2)
# converts the matrix into a data.frame
nuc = as.data.frame(nuc)
# divide by the window length to get frequencies, and round
nuc = round(nuc/1000,3)
```
Now we can combine the GC frequency with the cpm values.
We will convert the cpm values to the log10 scale. To avoid
taking \\(log(0)\\), we add a pseudocount of 1 to the cpm values.
```
# counts the number of reads per tiling window
# for each experiment
so = summarizeOverlaps(tilling_window, bam_files)
# converts the raw counts to cpm values
counts = assays(so)[[1]]
cpm = t(t(counts)*(1000000/colSums(counts)))
# because the cpm scale has a large dynamic range
# we transform it using the log function
cpm_log = log10(cpm+1)
```
Combine the cpm values with the GC content,
```
gc = cbind(data.frame(cpm_log), GC = nuc['GC'])
```
and plot the results.
```
ggplot(
data = gc,
aes(
x = GC,
y = GM12878_hg38_CTCF_r1.chr21.bam
)) +
geom_point(size=2, alpha=.3) +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('GC content in one kilobase windows') +
ylab('log10( cpm + 1 )') +
ggtitle('CTCF Replicate 1')
```
FIGURE 9\.7: GC content abundance in a ChIP\-seq experiment.
Figure [9\.7](chip-quality-control.html#fig:gc-plot) visualizes the CPM versus GC content, and
gives us two important pieces of information.
Firstly, it shows whether there was a specific amplification of regions
with extremely high or extremely low GC content. This would be a strong indication
that either the PCR or the size selection procedure was not successfully
executed.
The second piece of information comes from comparing the plots
corresponding to multiple experiments. If different ChIP samples have
highly diverging enrichment of different regions, then
some of the samples were affected by unknown batch effects. Such effects
need to be taken into account in downstream analysis.
Firstly, we will reshape the `data.frame` into a long format using the `pivot_longer()`
function from the `tidyr` package.
```
# load the tidyr package
library(tidyr)
# pivot_longer converts a wide data.frame into a long data.frame,
# which is the format used by the ggplot package
gcd = pivot_longer(
data = gc,
cols = -GC,
names_to = 'experiment',
values_to = 'cpm'
)
# we select the ChIP files corresponding to the ctcf experiment
gcd = subset(gcd, grepl('CTCF', experiment))
# remove the chr21 suffix
gcd$experiment = sub('chr21.','',gcd$experiment)
```
We can now visualize the relationship using a scatter plot.
Figure [9\.8](chip-quality-control.html#fig:gc-tidy-plot) compares the GC content dependency on the CPM between
the first and the second CTCF replicates. In this case, the replicates look similar.
```
ggplot(data = gcd, aes(GC, log10(cpm+1))) +
geom_point(size=2, alpha=.05) +
theme_bw() +
facet_wrap(~experiment, nrow=1)+
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('GC content in one kilobase windows') +
ylab('log10( cpm + 1 )') +
ggtitle('CTCF Replicates 1 and 2')
```
FIGURE 9\.8: Comparison of GC content and signal abundance between two CTCF biological replicates.
### 9\.5\.6 Sequence read genomic distribution
The fourth way to look at the ChIP quality control is to visualize
the genomic distribution of reads in different functional genomic regions.
If the ChIP samples have the same distribution of reads as the Input samples,
this implies a lack of specific enrichment. Additionally, if we have
prior knowledge of where our proteins should be located, we can use
the visualization to judge how well the genomic distributions conform to our priors.
For example, the trimethylation of histone H3 on lysine 36 \- **H3K36me3** is associated
with elongating polymerase and productive transcription. If we performed a
successful ChIP experiment with an anti\-**H3K36me3** antibody, we would expect most of the reads
to fall within gene bodies (introns and exons).
#### 9\.5\.6\.1 Hierarchical annotation of genomic features
Overlapping genomic features (a transcription start site of one
gene might be in an intron of another gene) will cause an ambiguity during
the read annotation. If a read overlaps more than one functional category, we are not
certain which category it should be assigned to.
To solve the problem of multiple assignments, we need to construct a set of annotation rules.
A heuristic solution is to organize the genomic annotation into a
hierarchy which will imply prioritization.
We can then look, for each read, which functional categories it overlaps, and
if it is within multiple categories, we assign the read to the topmost category.
As an example, let’s say that we have 4 genomic categories: 1\) TSS (transcription start sites), 2\) exon, 3\) intron, and 4\) intergenic with the following hierarchy: **TSS \-\> exon \-\> intron \-\> intergenic**. This means that if a read overlaps a TSS and an intron, it will be annotated as TSS. This approach is shown in Figure
[9\.9](chip-quality-control.html#fig:Figure-Hierarchical-Annotation).
FIGURE 9\.9: Principle of hierarchical annotation. The region of interest is annotated as the topmost ranked category that it overlaps. In this case, our region overlaps a TSS, an exon, and an intergenic region. Because the TSS has the topmost rank, it is annotated as a TSS.
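As a small sketch of this principle (toy ranges, not the actual annotation), the hierarchy can be implemented by ordering overlap hits by their position in a priority\-ordered `GRangesList` and keeping the first hit per read:
```
# toy sketch of hierarchical annotation; the list elements are
# ordered by priority (tss before intron)
library(GenomicRanges)
categories = GRangesList(
    tss    = GRanges('chr1', IRanges(90, 110)),
    intron = GRanges('chr1', IRanges(50, 200))
)
read = GRanges('chr1', IRanges(100, 120))
hits = as.data.frame(findOverlaps(read, categories))
# order hits by category priority, keep one category per read
hits = hits[order(hits$subjectHits),]
hits = hits[!duplicated(hits$queryHits),]
names(categories)[hits$subjectHits]
```
```
## [1] "tss"
```
The `annotateReads()` function defined later in this section applies the same ordering and deduplication strategy to the real data.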
Now we will construct the set of functional genomic regions, and annotate
the reads.
#### 9\.5\.6\.2 Finding annotations
There are multiple sources of genomic annotation. **UCSC**,
**GenBank**, and **Ensembl** databases represent stable resources,
from which the annotation can be easily obtained.
`AnnotationHub` is a Bioconductor\-based online resource which contains a large number of experiments from various
sources. We will use the `AnnotationHub` to download the location of
genes corresponding to the **hg38** genome. The hub is accessed in the following way:
```
# load the AnnotationHub package
library(AnnotationHub)
# connect to the hub object
hub = AnnotationHub()
```
The `hub` variable contains the programming interface towards the online database. We can use the `query()` function to find out the ID of the
“ENSEMBL” gene annotation.
```
# query the hub for the human annotation
AnnotationHub::query(
x = hub,
pattern = c('ENSEMBL','Homo','GRCh38','chr','gtf')
)
```
```
## AnnotationHub with 32 records
## # snapshotDate(): 2020-04-27
## # $dataprovider: Ensembl
## # $species: Homo sapiens
## # $rdataclass: GRanges
## # additional mcols(): taxonomyid, genome, description,
## # coordinate_1_based, maintainer, rdatadateadded, preparerclass, tags,
## # rdatapath, sourceurl, sourcetype
## # retrieve records with, e.g., 'object[["AH50842"]]'
##
## title
## AH50842 | Homo_sapiens.GRCh38.84.chr.gtf
## AH50843 | Homo_sapiens.GRCh38.84.chr_patch_hapl_scaff.gtf
## AH51012 | Homo_sapiens.GRCh38.85.chr.gtf
## AH51013 | Homo_sapiens.GRCh38.85.chr_patch_hapl_scaff.gtf
## AH51953 | Homo_sapiens.GRCh38.86.chr.gtf
## ... ...
## AH75392 | Homo_sapiens.GRCh38.98.chr_patch_hapl_scaff.gtf
## AH79159 | Homo_sapiens.GRCh38.99.chr.gtf
## AH79160 | Homo_sapiens.GRCh38.99.chr_patch_hapl_scaff.gtf
## AH80075 | Homo_sapiens.GRCh38.100.chr.gtf
## AH80076 | Homo_sapiens.GRCh38.100.chr_patch_hapl_scaff.gtf
```
We are interested in the version **GRCh38\.92**, which is available under **AH61126**.
To download the data from the hub, we use the `[[` operator on the
hub API.
We will download the annotation in the **GTF** format, into a `GRanges` object.
```
# retrieve the human gene annotation
gtf = hub[['AH61126']]
```
```
## GRanges object with 6 ranges and 3 metadata columns:
## seqnames ranges strand | source type score
## <Rle> <IRanges> <Rle> | <factor> <factor> <numeric>
## [1] 1 11869-14409 + | havana gene NA
## [2] 1 11869-14409 + | havana transcript NA
## [3] 1 11869-12227 + | havana exon NA
## [4] 1 12613-12721 + | havana exon NA
## [5] 1 13221-14409 + | havana exon NA
## [6] 1 12010-13670 + | havana transcript NA
## -------
## seqinfo: 25 sequences (1 circular) from GRCh38 genome
```
By default the ENSEMBL project labels chromosomes using short identifiers (i.e. 1, 2, 3, …, X),
without the **chr** prefix.
We therefore need to append the prefix to the chromosome names (seqlevels).
`pruning.mode = 'coarse'` designates that the chromosome names will be replaced
in the gtf object.
```
# extract ensembl chromosome names
ensembl_seqlevels = seqlevels(gtf)
# paste the chr prefix to the chromosome names
ucsc_seqlevels = paste0('chr', ensembl_seqlevels)
# replace ensembl with ucsc chromosome names
seqlevels(gtf, pruning.mode='coarse') = ucsc_seqlevels
```
And finally we subset only regions which correspond to chromosome 21\.
```
# keep only chromosome 21
gtf = gtf[seqnames(gtf) == 'chr21']
```
#### 9\.5\.6\.3 Constructing genomic annotation
Once we have downloaded the annotation we can define the functional hierarchy.
We will use the previously mentioned ordering: **TSS \-\> exon \-\> intron \-\> intergenic**, with **TSS** having the highest priority and the intergenic regions having the lowest priority.
```
# construct a GRangesList with human annotation
annotation_list = GRangesList(
# the promoters() function extends the gene ranges around the TSS
# by the given upstream and downstream amounts
tss = promoters(
x = subset(gtf, type=='gene'),
upstream = 1000,
downstream = 1000),
exon = subset(gtf, type=='exon'),
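    # note: the whole gene body acts as a proxy for introns here;
    # reads in exons are captured by the higher-priority 'exon'
    # category, leaving the remaining gene-body reads as intronic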
intron = subset(gtf, type=='gene')
)
```
#### 9\.5\.6\.4 Annotating reads
To annotate the reads we will define a function that takes as input a
**.bam** file, and an annotation list, and returns the frequency of
reads in each genomic category.
We will then loop over all of the **.bam**
files to annotate each experiment.
The `annotateReads()` function works in the following way:
1. Load the **.bam** file.
2. Find overlaps between the reads and the annotation categories.
3. Arrange the annotated reads based on the hierarchy, and remove duplicated assignments.
4. Count the number of reads in each category.
The crucial step to understand here is ordering the overlaps by the annotation hierarchy (`order()`) and removing duplicated assignments (`!duplicated()`), so that each read keeps only its topmost category.
```
annotateReads = function(bam_file, annotation_list){
library(dplyr)
message(basename(bam_file))
# load the reads into R
bam = readGAlignments(bam_file)
# find overlaps between reads and annotation
result = as.data.frame(
findOverlaps(bam, annotation_list)
)
# appends to the annotation index the corresponding
# annotation name
annotation_name = names(annotation_list)[result$subjectHits]
result$annotation = annotation_name
# order the overlaps based on the hierarchy
result = result[order(result$subjectHits),]
# select only one category per read
result = subset(result, !duplicated(queryHits))
# group the result data frame by the corresponding category
result = group_by(.data=result, annotation)
# count the number of reads in each category
result = summarise(.data = result, counts = length(annotation))
# classify all reads which are outside of
# the annotation as intergenic
result = rbind(
result,
data.frame(
annotation = 'intergenic',
counts = length(bam) - sum(result$counts)
)
)
# calculate the frequency
result$frequency = with(result, round(counts/sum(counts),2))
# append the experiment name
result$experiment = basename(bam_file)
return(result)
}
```
We execute the annotation function on all files.
```
# list all bam files in the folder
bam_files = list.files(data_path, full.names=TRUE, pattern='bam$')
# calculate the read distribution for every file
annot_reads_list = lapply(bam_files, function(x){
annotateReads(
bam_file = x,
annotation_list = annotation_list
)
})
```
First, we combine the results into one data frame, and
reformat the experiment names.
```
# collapse the per-file read distributions into one data.frame
annot_reads_df = dplyr::bind_rows(annot_reads_list)
# format the experiment names
experiment_name = annot_reads_df$experiment
experiment_name = sub('.chr21.bam','', experiment_name)
experiment_name = sub('GM12878_hg38_','',experiment_name)
annot_reads_df$experiment = experiment_name
```
And plot the results.
```
ggplot(data = annot_reads_df,
aes(
x = experiment,
y = frequency,
fill = annotation
)) +
geom_bar(stat='identity') +
theme_bw() +
scale_fill_brewer(palette='Set2') +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5),
axis.text.x = element_text(angle = 90, hjust = 1)) +
xlab('Sample') +
ylab('Percentage of reads') +
ggtitle('Percentage of reads in annotation')
```
FIGURE 9\.10: Read distribution in genomic functional annotation categories.
Figure [9\.10](chip-quality-control.html#fig:read-annotation-plot) shows a slight increase of **H3K36me3** on the exons
and introns, and **H3K4me3** on the **TSS**. Interestingly, both replicates of the **ZNF143**
transcription factor show increased read abundance around the TSS.
### 9\.5\.1 The data
Here we will familiarize ourselves with the datasets that will be used in the
chapter.
Experimental data was downloaded from the public ENCODE (ENCODE Project Consortium [2012](#ref-ENCODE_Project_Consortium2012-wf))
database of ChIP\-seq experiments.
The experiments were performed on a lymphoblastoid cell line, GM12878, and mapped
to the GRCh38 (hg38\) version of the human genome, using the standard ENCODE
ChIP\-seq pipeline. In this chapter, due to compute time considerations, we have taken a subset of the data which corresponds to the human chromosome 21 (chr21\).
The data sets are located in the `compGenomRData` package.
The location of the data sets can be accessed using the `system.file()` command,
in the following way:
```
data_path = system.file('extdata/chip-seq',package='compGenomRData')
```
The available datasets can be listed using the `list.files()` function:
```
chip_files = list.files(data_path, full.names=TRUE)
```
The dataset consists of the following ChIP experiments:
1. **Transcription factors**: CTCF, SMC3, ZNF143, PolII
(RNA polymerase 2\)
2. **Histone modifications**: H3K4me3, H3K36me3, H3k27ac, H3k27me3
3. Various input samples
### 9\.5\.2 Sample clustering
Clustering is an ordering procedure which groups samples by similarity;
the more similar samples are grouped closer to one another.
The details of clustering methodologies are described in Chapter [4](unsupervisedLearning.html#unsupervisedLearning).
Clustering of ChIP signal profiles is used for two purposes:
The first one is to ascertain whether there is concordance between
biological replicates; biological replicates should show greater similarity
than ChIP of different proteins. The second function is to see whether our experiments conform to known prior knowledge. For example, we would expect to see greater similarity between proteins
which belong to the same protein complex.
To quantify the ChIP signal we will firstly construct 1\-kilobase\-wide tilling
windows over the genome, and subsequently count the number of reads
in each window, for each experiment. We will then normalize the counts, to
account for a different total number of reads in each experiment, and finally
calculate the correlation between all pairs of samples.
Although this procedure represents a crude way of data quantification, it provides sufficient
information to ascertain the data quality.
Using the `GenomeInfoDb` we will first fetch the chromosome lengths corresponding
to the hg38 version of the human genome, and filter the length for human
chromosome 21\.
```
# load the chromosome info package
library(GenomeInfoDb)
# fetch the chromosome lengths for the human genome
hg_chrs = getChromInfoFromUCSC('hg38')
# find the length of chromosome 21
hg_chrs = subset(hg_chrs, grepl('chr21$',chrom))
```
The `tileGenome()` function from the `GenomicRanges` package constructs equally sized
windows over the genome of interest.
The function takes two arguments:
1. A vector of chromosome lengths
2. Window size
Firstly, we convert the chromosome lengths *data.frame* into a *named vector*.
```
# downloaded hg_chrs is a data.frame object,
# we need to convert the data.frame into a named vector
seqlengths = with(hg_chrs, setNames(size, chrom))
```
Then we construct the windows.
```
# load the genomic ranges package
library(GenomicRanges)
# tileGenome function returns a list of GRanges of a given width,
# spanning the whole chromosome
tilling_window = tileGenome(seqlengths, tilewidth=1000)
# unlist converts the list to one GRanges object
tilling_window = unlist(tilling_window)
```
```
## GRanges object with 46710 ranges and 0 metadata columns:
## seqnames ranges strand
## <Rle> <IRanges> <Rle>
## [1] chr21 1-1000 *
## [2] chr21 1001-2000 *
## [3] chr21 2001-3000 *
## [4] chr21 3001-4000 *
## [5] chr21 4001-5000 *
## ... ... ... ...
## [46706] chr21 46704985-46705984 *
## [46707] chr21 46705985-46706984 *
## [46708] chr21 46706985-46707984 *
## [46709] chr21 46707985-46708984 *
## [46710] chr21 46708985-46709983 *
## -------
## seqinfo: 1 sequence from an unspecified genome
```
We will use the `summarizeOverlaps()` function from the `GenomicAlignments` package
to count the number of reads in each genomic window.
The function will do the counting automatically for all our experiments.
The `summarizeOverlaps()` function returns a `SummarizedExperiment` object.
The object contains the counts, genomic ranges which were used for the quantification,
and the sample descriptions.
```
# load GenomicAlignments
library(GenomicAlignments)
# fetch bam files from the data folder
bam_files = list.files(
path = data_path,
full.names = TRUE,
pattern = 'bam$'
)
# use summarizeOverlaps to count the reads
so = summarizeOverlaps(tilling_window, bam_files)
# extract the counts from the SummarizedExperiment
counts = assays(so)[[1]]
```
Different ChIP experiments were sequenced to different depths; each experiment
contains a different number of reads. To remove the effect of the experimental
depth on the quantification, the samples need to be normalized.
The standard normalization procedure, for ChIP data, is to divide the
counts in each tilling window by the total number of sequenced reads, and
multiply it by a constant factor (to avoid extremely small numbers).
This normalization procedure is called the **cpm** \- counts per million.
\\\[
CPM \= counts \* (10^{6} / total\\\>number\\\>of\\\>reads)
\\]
```
# calculate the cpm from the counts matrix
# the following command works because
# R calculates everything by columns
cpm = t(t(counts)*(1000000/colSums(counts)))
```
We remove all tiles which do not have overlapping reads.
Tiles with 0 counts do not provide any additional discriminatory power, rather,
they introduce artificial similarity between the samples (i.e. samples with
only a handful of bound regions will have a lot of tiles with \\(0\\) counts, while
they do not have to have any overlapping enriched tiles).
```
# remove all tiles which do not contain reads
cpm = cpm[rowSums(cpm) > 0,]
```
We use the `sub()` function to shorten the column names of the cpm matrix.
```
# change the formatting of the column names
# remove the .chr21.bam suffix
colnames(cpm) = sub('.chr21.bam','', colnames(cpm))
# remove the GM12878_hg38 prefix
colnames(cpm) = sub('GM12878_hg38_','',colnames(cpm))
```
Finally, we calculate the pairwise Pearson correlation coefficient using the
`cor()` function.
The function takes as input a region\-by\-sample count matrix, and returns
a sample X sample matrix, where each field contains the correlation coefficient
between two samples.
```
# calculates the pearson correlation coefficient between the samples
correlation_matrix = cor(cpm, method='pearson')
```
The `Heatmap()` function from the `ComplexHeatmap` (Z. Gu, Eils, and Schlesner [2016](#ref-Gu_2016)[b](#ref-Gu_2016)) package is used to visualize
the correlation coefficient.
The function automatically performs hierarchical clustering \- it groups the
samples which have the highest pairwise correlation.
The diagonal represents the correlation of each sample with itself.
```
# load ComplexHeatmap
library(ComplexHeatmap)
# load the circlize package, and define
# the color palette which will be used in the heatmap
library(circlize)
heatmap_col = circlize::colorRamp2(
breaks = c(-1,0,1),
colors = c('blue','white','red')
)
# plot the heatmap using the Heatmap function
Heatmap(
matrix = correlation_matrix,
col = heatmap_col
)
```
FIGURE 9\.2: Heatmap showing ChIP\-seq sample similarity using the Pearson correlation coefficient.
In Figure [9\.2](chip-quality-control.html#fig:sample-clustering-complex-heatmap) we can see a
perfect example of why quality control is important.
**CTCF** is a zinc finger protein which co\-localizes with the Cohesin complex.
**SMC3** is a sub unit of the Cohesin complex, and we would therefore expect to
see that the **SMC3** signal profile has high correlation with the **CTCF** signal profile.
This is true for the second biological replicate of **SMC3**, while the first
replicate (SMC3\_r1\) clusters with the input samples. This indicates that the
sample likely has low enrichment.
We can see that the ChIP and Input samples form separate clusters. This implies
that the ChIP samples have an enrichment of fragments.
Additionally, we see that the biological replicates of other experiments
cluster together.
### 9\.5\.3 Visualization in the genome browser
One of the first steps in any ChIP\-seq analysis should be looking at the
data. By looking at the data we get an intuition about the quality of the
experiment, and start seeing preliminary correlations between the samples, which
we can use to guide our analysis.
This can be achieved either by plotting signal profiles around
regions of interest, or by loading data into a genome browser
(such as IGV, or UCSC genome browsers).
Genome browsers are standalone applications which represent the genome
as a one\-dimensional (1D) coordinate system. The browsers enable
simultaneous visualization and comparison of multiple types of annotations and experimental data.
Genome browsers can visualize most of the commonly used genomic data formats:
BAM, BED, wig, and bigWig.
The easiest way to access our data would be to load the .bam files into the browser. This will show us the sequence and position of every mapped read. If we want to view multiple samples in parallel, however, loading every mapped read becomes impractical: it takes up a lot of computational resources, and the sheer amount of information makes visual comparison difficult.
We would like to convert our data so that we get a compressed visualization,
which would show us the main properties of our samples, namely, the quality and
the location of the enrichment.
This is achieved by summarizing the read enrichment into a signal profile \-
the whole experiment is converted into a numeric vector \- a coverage vector.
The vector contains information on how many reads overlap each position
in the genome.
We will proceed as follows: Firstly, we will import a **.bam** file into **R**. Then we will calculate the signal profile (construct the coverage vector), and finally, we export the vector as a **.bigWig** file.
First we select one of the ChIP samples.
```
# list the bam files in the directory
# the '$' sign tells the pattern recognizer to omit bam.bai files
bam_files = list.files(
path = data_path,
full.names = TRUE,
pattern = 'bam$'
)
# select the first bam file
chip_file = bam_files[1]
```
We will use the `readGAlignments()` function from the `GenomicAlignments`
package to load the reads into **R**, and then the `GRanges()` function
to convert them into a `GRanges` object.
```
# load the genomic alignments package
library(GenomicAlignments)
# read the ChIP reads into R
reads = readGAlignments(chip_file)
# the reads need to be converted to a granges object
reads = granges(reads)
```
Because DNA fragments are sequenced from their ends (both the 3’ and 5’ ends),
the read enrichment does not correspond to the exact location of the bound protein.
Rather, reads tend to form clusters of enrichment upstream and downstream of the true binding location.
To correct for this, we use a small hack. Before we create the signal profiles,
we will extend the reads towards their **3’** end. The reads are extended to
form fragments of 200 base pairs. This is an empirical estimate, which
corresponds to the average fragment size of the Illumina sample preparation kit.
The exact average fragment size will differ from 200 base pairs, but if the
deviation is not large (i.e. if it does not exceed a couple of hundred base pairs),
it will not affect the visual properties of our samples.
Read extension is done using the `resize()` function. The function
takes two arguments:
1. `width`: resulting fragment width
2. `fix`: which position of the fragment should not be changed (if `fix` is set to start,
the reads will be extended towards the **3’** end. If `fix` is set to end, they will
be extended towards the **5’** end)
```
# extends the reads towards the 3' end
reads = resize(reads, width=200, fix='start')
# keeps only chromosome 21
reads = keepSeqlevels(reads, 'chr21', pruning.mode='coarse')
```
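It is worth noting that `resize()` is strand\-aware: `fix='start'` anchors the 5’ end of each read, so reads on both strands are extended towards their 3’ end. A toy example with made\-up coordinates illustrates this:
```
# two toy reads on opposite strands
toy = GRanges(c('chr21:1000-1050:+', 'chr21:1000-1050:-'))
resize(toy, width=200, fix='start')
# the + read becomes 1000-1199 (extended to the right),
# the - read becomes 851-1050 (extended to the left)
```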
Conversion of reads into coverage vectors is done with the `coverage()`
function.
Besides the reads, we supply the `width` argument, which corresponds to the chromosome sizes.
For this purpose we can use the previously created `seqlengths` variable.
The `coverage()` function converts the reads into a compressed `Rle` object. We have introduced these workflows in Chapter [6](genomicIntervals.html#genomicIntervals).
```
# convert the reads into a signal profile
cov = coverage(reads, width = seqlengths)
```
```
## RleList of length 1
## $chr21
## integer-Rle of length 46709983 with 199419 runs
## Lengths: 5038228 200 63546 20 ... 200 1203 200 27856
## Values : 0 1 0 1 ... 1 0 1 0
```
The name of the output file is created by changing the file suffix from **.bam**
to **.bigWig**.
```
# change the file extension from .bam to .bigWig
output_file = sub('.bam','.bigWig', chip_file)
```
Now we can use the `export.bw()` function from the rtracklayer package to
write the bigWig file.
```
# load the rtracklayer package
library(rtracklayer)
# export the bigWig output file
export.bw(cov, output_file)
```
#### 9\.5\.3\.1 Visualization of track data using Gviz
We can create genome browser\-like visualizations using the `Gviz` package,
which was introduced in Chapter [6](genomicIntervals.html#genomicIntervals).
`Gviz` is a tool which enables extensively customized visualization of
genomics experiments. The basic usage principle is to define tracks, where each track can represent
genomic annotation or a signal profile; subsequently we define the order
of the tracks and plot them.
Here we will define two tracks, a genome axis, which will show the position
along the human chromosome 21; and a signal track from our CTCF experiment.
```
library(Gviz)
# define the genome axis track
axis = GenomeAxisTrack(
range = GRanges('chr21', IRanges(1, width=seqlengths))
)
# convert the signal into genomic ranges and define the signal track
gcov = as(cov, 'GRanges')
dtrack = DataTrack(gcov, name = "CTCF", type='l')
# define the track ordering
track_list = list(axis,dtrack)
```
Tracks are plotted with the `plotTracks()` function. The `sizes` argument must have the same length as the `track_list`, and defines the
relative size of each track.
Figure [9\.3](chip-quality-control.html#fig:genome-browser-gviz-show) shows the output of the
`plotTracks()` function.
```
# plot the list of browser tracks
# sizes argument defines the relative sizes of tracks
# background title defines the color for the track labels
plotTracks(
trackList = track_list,
sizes = c(.1,1),
background.title = "black"
)
```
FIGURE 9\.3: ChIP\-seq signal visualized as a browser track using Gviz.
### 9\.5\.4 Plus and minus strand cross\-correlation
Cross\-correlation between plus and minus strands is a method
which quantifies whether the DNA library was enriched for fragments of
a certain length.
Similarity between the plus and minus strands is defined as the correlation of
the signal profiles for the reads that map to the **\+** and the **\-** strands.
The distribution of reads is shown in Figure [9\.4](chip-quality-control.html#fig:Figure-BrowserScreenshot).
FIGURE 9\.4: Browser screenshot of aligned reads for one ChIP and one control sample. ChIP samples have an asymmetric distribution of reads; reads mapping to the \+ strand are located on the left side of the peak, while the reads mapping to the \- strand are found on the right side of the peak.
Due to the sequencing properties, reads which correspond to
the **5’** fragment ends will map to the opposite strand from the reads
coming from the **3’** ends. Most often (depending on the sequencing protocol)
the reads from the **5’** fragment ends map to the **\+** strand,
while the reads from the **3’** ends map to the **\-** strand.
We calculate the cross\-correlation by shifting the signal on the **\+** strand,
by a pre\-defined amount (i.e. shift by 1 \- 400 nucleotides), and calculating,
for each shift, the correlation between the **\+**, and the **\-** strands.
Subsequently we plot the correlation versus shift, and locate the maximum value.
The maximum value should correspond to the average DNA fragment length which
was present in the library. This value tells us whether the ChIP enriched for
fragments of certain length (i.e. whether the ChIP was successful).
Due to the size of genomic data, it might be computationally prohibitive to
calculate the Pearson correlation between whole genome (or even whole chromosome)
signal profiles.
To get around this problem, we will resort to a trick; we will disregard the dynamic
range of the signal profiles, and only keep the information of which
genomic bases contained the ends of the fragments.
This is done by calculating the coverage vector of the read starting position (separately
for each strand), and converting the coverage vector into a Boolean vector.
The Boolean vector contains the information of which genomic positions
contained the DNA fragment ends.
Similarity between two Boolean vectors can be quickly computed using the Jaccard index.
The Jaccard index is defined as the size of the intersection of two sets,
divided by the size of their union, as shown in Figure [9\.5](chip-quality-control.html#fig:FigureJaccardSimilarity).
FIGURE 9\.5: Jaccard similarity is defined as the ratio of the intersection and union of two sets.
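A toy example makes the definition concrete. For Boolean vectors, the intersection is the set of positions where both vectors are `TRUE`, and the union is the set of positions where at least one is `TRUE`:
```
x = c(TRUE, TRUE, FALSE, FALSE)
y = c(TRUE, FALSE, TRUE, FALSE)
# one shared TRUE out of three positions covered by either vector
sum(x & y) / sum(x | y)
```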
Firstly, we load the reads for one of the CTCF ChIP experiments.
Then we create signal profiles, separately for reads on the **\+** and **\-**
strands.
Unlike before, we do not extend the reads to the average expected fragment
length (200 base pairs); we keep only the starting position of each read.
```
# load the reads
reads = readGAlignments(chip_file)
reads = granges(reads)
# keep only the starting position of each read
reads = resize(reads, width=1, fix='start')
reads = keepSeqlevels(reads, 'chr21', pruning.mode='coarse')
```
Now we can calculate the coverage vector of the read starting position.
The coverage vector is then automatically converted into a Boolean vector by
asking which genomic positions have \\(coverage \> 0\\).
```
# calculate the coverage profile for plus and minus strand
reads = split(reads, strand(reads))
# coverage(x, width = seqlengths)[[1]] > 0
# calculates the coverage and converts
# the coverage vector into a boolean
cov = lapply(reads, function(x){
coverage(x, width = seqlengths)[[1]] > 0
})
cov = lapply(cov, as.vector)
```
We will now shift the coverage vector from the plus strand by \\(1\\) to \\(400\\) base pairs, and for each shift we will calculate the Jaccard index between the vectors
on the plus and minus strands.
```
# defines the shift range
wsize = 1:400
# defines the jaccard similarity
jaccard = function(x,y)sum((x & y)) / sum((x | y))
# shifts the + vector by 1 - 400 nucleotides and
# calculates the Jaccard similarity for each shift
cc = shiftApply(
SHIFT = wsize,
X = cov[['+']],
Y = cov[['-']],
FUN = jaccard
)
# converts the results into a data frame
cc = data.frame(fragment_size = wsize, cross_correlation = cc)
```
We can finally plot the shift in base pairs versus the correlation coefficient:
```
library(ggplot2)
ggplot(data = cc, aes(fragment_size, cross_correlation)) +
geom_point() +
geom_vline(xintercept = which.max(cc$cross_correlation),
size=2, color='red', linetype=2) +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('Shift in base pairs') +
ylab('Jaccard similarity')
```
FIGURE 9\.6: The similarity between the ChIP\-seq signal on the \+ and \- strands, as a function of the shift. The peak of the distribution designates the fragment size.
Figure [9\.6](chip-quality-control.html#fig:correlation-plot) shows the Jaccard similarity as a function of the shift in base pairs;
the shift which corresponds to the maximum similarity
gives us an approximation of the expected average DNA fragment length.
Because the curve has a clear maximum, rather than peaking at a shift of 0 or decreasing monotonically, we can conclude
that there was substantial enrichment of fragments of a certain length in the ChIP samples.
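The fragment size estimate can also be read off programmatically from the `cc` data frame:
```
# the shift with the maximum Jaccard similarity approximates
# the average fragment length
cc$fragment_size[which.max(cc$cross_correlation)]
```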
### 9\.5\.5 GC bias quantification
The PCR amplification procedure can cause a significant bias in the ChIP
experiments. The bias can be influenced by the DNA fragment size distribution,
sequence composition, hexamer distribution of PCR primers, and the number of cycles used
for the amplification.
One way to determine whether some of the samples have significantly
different sequence composition is to look at whether regions with
differing GC composition were equally enriched in all experiments.
We will do the following: First, we will calculate the GC content of each
of the tiling windows, and then we will compare the GC content with the corresponding
cpm (counts per million reads) value for each tile.
```
# fetches the chromosome lengths and constructs the tiles
library(GenomeInfoDb)
library(GenomicRanges)
hg_chrs = getChromInfoFromUCSC('hg38')
hg_chrs = subset(hg_chrs, grepl('chr21$',chrom))
seqlengths = with(hg_chrs, setNames(size, chrom))
# tileGenome produces a list per chromosome
# unlist combines the elements of the list
# into one GRanges object
tilling_window = unlist(tileGenome(
seqlengths = seqlengths,
tilewidth = 1000
))
```
We will extract the sequence information from the `BSgenome.Hsapiens.UCSC.hg38`
package. `BSgenome` objects are generic Bioconductor containers for genomic sequences.
Sequences are extracted from the `BSgenome` container using the `getSeq()` function.
The `getSeq()` function takes as input the genome object, and the ranges with the
regions of interest; in our case, the tiling windows.
The function returns a `DNAStringSet` object.
```
# loads the human genome sequence
library(BSgenome.Hsapiens.UCSC.hg38)
# extracts the sequence from the human genome
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38, tilling_window)
```
To calculate the GC content, we will use the `oligonucleotideFrequency()` function on the
`DNAString` object. By setting the width parameter to 2 we will
calculate the **dinucleotide** frequency.
Each row in the resulting table will contain the counts of all possible
dinucleotides observed in the corresponding tiling window.
Because all of our tiling windows have the same length, we do not
strictly need to normalize the counts by the window length.
If the windows had different lengths (e.g. if we used ChIP\-seq peaks instead), normalization would be a prerequisite.
```
# calculates the frequency of all possible dimers
# in our sequence set
nuc = oligonucleotideFrequency(seq, width = 2)
# converts the matrix into a data.frame
nuc = as.data.frame(nuc)
# divide by the window size (1000 bp) to get frequencies,
# and round to three decimals
nuc = round(nuc/1000,3)
```
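Note that the `GC` column used below holds the frequency of the GC **dinucleotide**. If we instead wanted the overall G\+C content (the fraction of G or C bases), a short alternative would be the `letterFrequency()` function from the `Biostrings` package (loaded together with `BSgenome`):
```
# fraction of G or C bases in each tiling window
gc_frac = letterFrequency(seq, letters='GC', as.prob=TRUE)
head(gc_frac)
```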
Now we can combine the GC frequency with the cpm values.
We will convert the cpm values to the log10 scale. To avoid
taking \\(log(0)\\), we add a pseudocount of 1 to the cpm values.
```
# counts the number of reads per tilling window
# for each experiment
so = summarizeOverlaps(tilling_window, bam_files)
# converts the raw counts to cpm values
counts = assays(so)[[1]]
cpm = t(t(counts)*(1000000/colSums(counts)))
# because the cpm scale has a large dynamic range
# we transform it using the log function
cpm_log = log10(cpm+1)
```
Combine the cpm values with the GC content,
```
gc = cbind(data.frame(cpm_log), GC = nuc['GC'])
```
and plot the results.
```
ggplot(
data = gc,
aes(
x = GC,
y = GM12878_hg38_CTCF_r1.chr21.bam
)) +
geom_point(size=2, alpha=.3) +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('GC content in one kilobase windows') +
ylab('log10( cpm + 1 )') +
ggtitle('CTCF Replicate 1')
```
FIGURE 9\.7: GC content abundance in a ChIP\-seq experiment.
Figure [9\.7](chip-quality-control.html#fig:gc-plot) visualizes the CPM versus GC content, and
gives us two important pieces of information.
Firstly, it shows whether there was a specific amplification of regions
with extremely high or extremely low GC content. This would be a strong indication
that either the PCR or the size selection procedure were not successfully
executed.
The second piece of information comes by comparison of plots
corresponding to multiple experiments. If different ChIP\-samples have
highly diverging enrichment of different ChIP regions, then
some of the samples were affected by unknown batch effects. Such effects
need to be taken into account in downstream analysis.
First, we will reshape the `data.frame` into a long format using the `pivot_longer()`
function from the `tidyr` package.
```
# load the tidyr package
library(tidyr)
# pivot_longer converts a wide data.frame into a long data.frame,
# which is the format used by the ggplot package
gcd = pivot_longer(
data = gc,
cols = -GC,
names_to = 'experiment',
values_to = 'cpm'
)
# we select the ChIP files corresponding to the ctcf experiment
gcd = subset(gcd, grepl('CTCF', experiment))
# remove the chr21 suffix
gcd$experiment = sub('chr21.','',gcd$experiment)
```
We can now visualize the relationship using a scatter plot.
Figure [9\.8](chip-quality-control.html#fig:gc-tidy-plot) compares the dependency of CPM on GC content between
the first and the second CTCF replicates. In this case, the replicates look similar.
```
ggplot(data = gcd, aes(GC, log10(cpm+1))) +
geom_point(size=2, alpha=.05) +
theme_bw() +
facet_wrap(~experiment, nrow=1)+
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('GC content in one kilobase windows') +
ylab('log10( cpm + 1 )') +
ggtitle('CTCF Replicates 1 and 2')
```
FIGURE 9\.8: Comparison of GC content and signal abundance between two CTCF biological replicates
### 9\.5\.6 Sequence read genomic distribution
A fourth way to assess ChIP quality is to visualize
the genomic distribution of reads in different functional genomic regions.
If the ChIP samples have the same distribution of reads as the Input samples,
this implies a lack of specific enrichment. Additionally, if we have
prior knowledge of where our proteins should be located, we can use
the visualization to judge how well the genomic distributions conform to our priors.
For example, the trimethylation of histone H3 on lysine 36 \- **H3K36me3** is associated
with elongating polymerase and productive transcription. If we performed a
successful ChIP experiment with an anti\-**H3K36me3** antibody, we would expect most of the reads
to fall within gene bodies (introns and exons).
#### 9\.5\.6\.1 Hierarchical annotation of genomic features
Overlapping genomic features (a transcription start site of one
gene might be in an intron of another gene) will cause an ambiguity during
the read annotation. If a read overlaps more than one functional category, we are not
certain which category it should be assigned to.
To solve the problem of multiple assignments, we need to construct a set of annotation rules.
A heuristic solution is to organize the genomic annotation into a
hierarchy which will imply prioritization.
We can then look, for each read, which functional categories it overlaps, and
if it is within multiple categories, we assign the read to the topmost category.
As an example, let’s say that we have 4 genomic categories: 1\) TSS (transcription start sites), 2\) exon, 3\) intron, and 4\) intergenic with the following hierarchy: **TSS \-\> exon \-\> intron \-\> intergenic**. This means that if a read overlaps a TSS and an intron, it will be annotated as TSS. This approach is shown in Figure
[9\.9](chip-quality-control.html#fig:Figure-Hierarchical-Annotation).
FIGURE 9\.9: Principle of hierarchical annotation. The region of interest is annotated as the topmost ranked category that it overlaps. In this case, our region overlaps a TSS, an exon, and an intergenic region. Because the TSS has the topmost rank, it is annotated as a TSS.
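The prioritization logic can be sketched with a toy example. Because `findOverlaps()` indexes the categories of a `GRangesList` in their given order, sorting the hits by `subjectHits` and keeping the first hit per read implements the hierarchy (the coordinates here are illustrative):
```
library(GenomicRanges)
# one toy read, and two overlapping annotation categories,
# ordered by priority
toy_read = GRanges('chr21:100-200')
toy_annot = GRangesList(
    tss    = GRanges('chr21:150-250'),
    intron = GRanges('chr21:50-300')
)
hits = as.data.frame(findOverlaps(toy_read, toy_annot))
hits = hits[order(hits$subjectHits), ]
# keep the topmost category per read - here, 'tss'
hits[!duplicated(hits$queryHits), ]
```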
Now we will construct the set of functional genomic regions, and annotate
the reads.
#### 9\.5\.6\.2 Finding annotations
There are multiple sources of genomic annotation. **UCSC**,
**Genbank**, and **Ensembl** databases represent stable resources,
from which the annotation can be easily obtained.
`AnnotationHub` is a Bioconductor\-based online resource which contains a large number of experiments from various
sources. We will use the `AnnotationHub` to download the location of
genes corresponding to the **hg38** genome. The hub is accessed in the following way:
```
# load the AnnotationHub package
library(AnnotationHub)
# connect to the hub object
hub = AnnotationHub()
```
The `hub` variable contains the programming interface towards the online database. We can use the `query()` function to find out the ID of the
“ENSEMBL” gene annotation.
```
# query the hub for the human annotation
AnnotationHub::query(
x = hub,
pattern = c('ENSEMBL','Homo','GRCh38','chr','gtf')
)
```
```
## AnnotationHub with 32 records
## # snapshotDate(): 2020-04-27
## # $dataprovider: Ensembl
## # $species: Homo sapiens
## # $rdataclass: GRanges
## # additional mcols(): taxonomyid, genome, description,
## # coordinate_1_based, maintainer, rdatadateadded, preparerclass, tags,
## # rdatapath, sourceurl, sourcetype
## # retrieve records with, e.g., 'object[["AH50842"]]'
##
## title
## AH50842 | Homo_sapiens.GRCh38.84.chr.gtf
## AH50843 | Homo_sapiens.GRCh38.84.chr_patch_hapl_scaff.gtf
## AH51012 | Homo_sapiens.GRCh38.85.chr.gtf
## AH51013 | Homo_sapiens.GRCh38.85.chr_patch_hapl_scaff.gtf
## AH51953 | Homo_sapiens.GRCh38.86.chr.gtf
## ... ...
## AH75392 | Homo_sapiens.GRCh38.98.chr_patch_hapl_scaff.gtf
## AH79159 | Homo_sapiens.GRCh38.99.chr.gtf
## AH79160 | Homo_sapiens.GRCh38.99.chr_patch_hapl_scaff.gtf
## AH80075 | Homo_sapiens.GRCh38.100.chr.gtf
## AH80076 | Homo_sapiens.GRCh38.100.chr_patch_hapl_scaff.gtf
```
We are interested in the version **GRCh38\.92**, which is available under **AH61126**.
To download the data from the hub, we use the `[[` operator on the
hub API.
We will download the annotation in the **GTF** format, into a `GRanges` object.
```
# retrieve the human gene annotation
gtf = hub[['AH61126']]
```
```
## GRanges object with 6 ranges and 3 metadata columns:
## seqnames ranges strand | source type score
## <Rle> <IRanges> <Rle> | <factor> <factor> <numeric>
## [1] 1 11869-14409 + | havana gene NA
## [2] 1 11869-14409 + | havana transcript NA
## [3] 1 11869-12227 + | havana exon NA
## [4] 1 12613-12721 + | havana exon NA
## [5] 1 13221-14409 + | havana exon NA
## [6] 1 12010-13670 + | havana transcript NA
## -------
## seqinfo: 25 sequences (1 circular) from GRCh38 genome
```
By default the ENSEMBL project labels chromosomes using numeric identifiers (i.e. 1,2,3 … X),
without the **chr** prefix.
We therefore need to append the prefix to the chromosome names (seqlevels).
`pruning.mode = 'coarse'` designates that the chromosome names will be replaced
in the gtf object.
```
# extract ensembl chromosome names
ensembl_seqlevels = seqlevels(gtf)
# paste the chr prefix to the chromosome names
ucsc_seqlevels = paste0('chr', ensembl_seqlevels)
# replace ensembl with ucsc chromosome names
seqlevels(gtf, pruning.mode='coarse') = ucsc_seqlevels
```
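As an aside, the `GenomeInfoDb` package offers a one\-line alternative to this manual renaming; `seqlevelsStyle()` maps seqlevels between naming conventions (shown commented out, since we have already renamed the seqlevels above):
```
# library(GenomeInfoDb)
# seqlevelsStyle(gtf) = 'UCSC'
```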
And finally we subset only regions which correspond to chromosome 21\.
```
# keep only chromosome 21
gtf = gtf[seqnames(gtf) == 'chr21']
```
#### 9\.5\.6\.3 Constructing genomic annotation
Once we have downloaded the annotation we can define the functional hierarchy.
We will use the previously mentioned ordering: **TSS \-\> exon \-\> intron \-\> intergenic**, with **TSS** having the highest priority and the intergenic regions having the lowest priority.
```
# construct a GRangesList with human annotation
annotation_list = GRangesList(
    # the promoters function extends the gtf around the TSS
    # by the given upstream and downstream amounts
    tss = promoters(
        x = subset(gtf, type=='gene'),
        upstream = 1000,
        downstream = 1000),
    exon = subset(gtf, type=='exon'),
    # whole gene bodies serve as a proxy for introns; exonic reads
    # are captured first by the higher-ranked exon category
    intron = subset(gtf, type=='gene')
)
```
#### 9\.5\.6\.4 Annotating reads
To annotate the reads we will define a function that takes as input a
**.bam** file, and an annotation list, and returns the frequency of
reads in each genomic category.
We will then loop over all of the **.bam**
files to annotate each experiment.
The `annotateReads()` function works in the following way:
1. Load the **.bam** file.
2. Find overlaps between the reads and the annotation categories.
3. Arrange the annotated reads based on the hierarchy, and remove duplicated assignments.
4. Count the number of reads in each category.
The crucial step to understand here is ordering the overlaps by the annotation hierarchy, and then using `subset()` with `!duplicated()` to keep only one annotated category (the topmost) per read.
```
annotateReads = function(bam_file, annotation_list){
library(dplyr)
message(basename(bam_file))
# load the reads into R
bam = readGAlignments(bam_file)
# find overlaps between reads and annotation
result = as.data.frame(
findOverlaps(bam, annotation_list)
)
# appends to the annotation index the corresponding
# annotation name
annotation_name = names(annotation_list)[result$subjectHits]
result$annotation = annotation_name
# order the overlaps based on the hierarchy
result = result[order(result$subjectHits),]
# select only one category per read
result = subset(result, !duplicated(queryHits))
# group the result data frame by the annotation category
result = group_by(.data=result, annotation)
# count the number of reads in each category
result = summarise(.data = result, counts = length(annotation))
# classify all reads which are outside of
# the annotation as intergenic
result = rbind(
result,
data.frame(
annotation = 'intergenic',
counts = length(bam) - sum(result$counts)
)
)
# calculate the frequency
result$frequency = with(result, round(counts/sum(counts),2))
# append the experiment name
result$experiment = basename(bam_file)
return(result)
}
```
We execute the annotation function on all files.
```
# list all bam files in the folder
bam_files = list.files(data_path, full.names=TRUE, pattern='bam$')
# calculate the read distribution for every file
annot_reads_list = lapply(bam_files, function(x){
annotateReads(
bam_file = x,
annotation_list = annotation_list
)
})
```
First, we combine the results into one data frame, and
reformat the experiment names.
```
# collapse the per-file read distributions into one data.frame
annot_reads_df = dplyr::bind_rows(annot_reads_list)
# format the experiment names
experiment_name = annot_reads_df$experiment
experiment_name = sub('.chr21.bam','', experiment_name)
experiment_name = sub('GM12878_hg38_','',experiment_name)
annot_reads_df$experiment = experiment_name
```
And plot the results.
```
ggplot(data = annot_reads_df,
aes(
x = experiment,
y = frequency,
fill = annotation
)) +
geom_bar(stat='identity') +
theme_bw() +
scale_fill_brewer(palette='Set2') +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5),
axis.text.x = element_text(angle = 90, hjust = 1)) +
xlab('Sample') +
ylab('Percentage of reads') +
ggtitle('Percentage of reads in annotation')
```
FIGURE 9\.10: Read distribution in genomic functional annotation categories.
Figure [9\.10](chip-quality-control.html#fig:read-annotation-plot) shows a slight increase of **H3K36me3** on the exons
and introns, and **H3K4me3** on the **TSS**. Interestingly, both replicates of the **ZNF143**
transcription factor show increased read abundance around the TSS.
9\.6 Peak calling
-----------------
After we are convinced that the data is of sufficient quality, we can
proceed with the downstream analysis.
One of the first steps in the ChIP\-seq analysis is peak calling.
Peak calling is a statistical procedure, which uses coverage properties
of ChIP and Input samples to find regions which are enriched due to
protein binding.
The procedure requires mapped reads, and outputs a set of regions, which
represent the putative binding locations. Each region is usually associated
with a significance score which is an indicator of enrichment.
For peak calling we will use the `normR` Bioconductor package.
`normR` uses a binomial mixture model, and performs simultaneous
normalization and peak finding. Due to the nature of the model, it is
quite flexible and can be used for different types of ChIP experiments.
One of the caveats of `normR` is that it does not inherently support
multiple biological replicates for the same biological sample.
Therefore, the peak calling procedure needs to be done on each replicate
separately, and the peaks need to be combined in post\-processing, as in the sketch below.
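A minimal sketch of such post\-processing, using hypothetical peak sets with toy coordinates for two replicates, keeps only the peaks supported by both replicates:
```
library(GenomicRanges)
# hypothetical peak sets from two replicates (toy coordinates)
peaks_rep1 = GRanges('chr21', IRanges(c(100, 500, 900), width=50))
peaks_rep2 = GRanges('chr21', IRanges(c(110, 905), width=50))
# keep replicate-1 peaks which overlap a replicate-2 peak
reproducible = subsetByOverlaps(peaks_rep1, peaks_rep2)
```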
### 9\.6\.1 Types of ChIP\-seq experiments
Based on the binding properties of ChIP\-ped proteins, ChIP\-seq
signal profiles can be divided into three classes:
1. **Sharp** (point signal): A signal profile which is localized to specific
short genomic regions (up to a couple of hundred base pairs).
It is usually obtained from transcription factors, or highly localized posttranslational histone modifications
(H3K4me3, which is found on gene promoters).
2. **Broad** (wide signal): The signal covers broad genomic domains spanning up to several kilobases.
It is usually produced by disperse histone modifications (H3K36me3, located
on gene bodies, or H3K27me3, which is deposited by the Polycomb complex in large genomic regions).
3. **Mixed**: The signal consists of a mixture of sharp and broad regions.
It is produced by proteins which have dynamic behavior. Most often these are ChIP experiments
of RNA Polymerase 2\.
Different types of ChIP experiments usually require specialized analysis tools. Some peak callers are developed to specifically detect narrow peaks (Zhang, Liu, Meyer, et al. [2008](#ref-zhang_2008); Xu, Handoko, Wei, et al. [2010](#ref-xu_2010); Shao, Zhang, Yuan, et al. [2012](#ref-shao_2012)), while others
detect enrichment in diffuse broad regions (Zang, Schones, Zeng, et al. [2009](#ref-zang_2009); Micsinai, Parisi, Strino, et al. [2012](#ref-micsinai_2012); Beck, Brandl, Boelen, et al. [2012](#ref-beck_2012); Song and Smith [2011](#ref-song_2011); Xing, Mo, Liao, et al. [2012](#ref-xing_2012)),
or mixed (Polymerase 2\) signals (Han, Tian, Pécot, et al. [2012](#ref-han_2012)).
Recent developments in peak calling methods (such as `normR`) can however accommodate
multiple types of ChIP experiments (Rashid, Giresi, Ibrahim, et al. [2011](#ref-rashid_2011)).
The choice of the algorithm will largely depend on the type of the wanted
results, and the peculiarities of the experimental design and execution (Laajala, Raghav, Tuomela, et al. [2009](#ref-laajala_2009); Wilbanks and Facciotti [2010](#ref-wilbanks_2010)).
If you are not certain what kind of signal profile to expect from a ChIP\-seq
experiment, the best solution is to visualize the data. We will now use the data from **H3K4me3** (Sharp), **H3K36me3** (Broad), and **POL2** (Mixed)
ChIP experiments to show the differences in the signal profiles. We will use the bigWig files to visualize the signal profiles around a
highly expressed human gene from chromosome 21\. This will give us an indication
of how the profiles for different types of ChIP experiments differ. First we select the files of interest:
```
# set names for chip-seq bigWig files
chip_files = list(
H3K4me3 = 'GM12878_hg38_H3K4me3.chr21.bw',
H3K36me3 = 'GM12878_hg38_H3K36me3.chr21.bw',
POL2 = 'GM12878_hg38_POLR2A.chr21.bw'
)
# get full paths to the files
chip_files = lapply(chip_files, function(x){
file.path(data_path, x)
})
```
Next we import the coverage profiles into **R**:
```
# load rtracklayer
library(rtracklayer)
# import the ChIP bigWig files
chip_profiles = lapply(chip_files, rtracklayer::import.bw)
```
We fetch the reference annotation for human chromosome 21\.
```
library(AnnotationHub)
hub = AnnotationHub()
gtf = hub[['AH61126']]
# select only chromosome 21
seqlevels(gtf, pruning.mode='coarse') = '21'
# extract chromosome names
ensembl_seqlevels = seqlevels(gtf)
# paste the chr prefix to the chromosome names
ucsc_seqlevels = paste0('chr', ensembl_seqlevels)
# replace ensembl with ucsc chromosome names
seqlevels(gtf, pruning.mode='coarse') = ucsc_seqlevels
```
To enable `Gviz` to work with genomic annotation we will convert the `GRanges`
object into a transcript database using the following function:
```
# load the GenomicFeatures object
library(GenomicFeatures)
# convert the gtf annotation into a transcript database
txdb = makeTxDbFromGRanges(gtf)
```
And convert the transcript database into a `Gviz` track.
```
# define the gene track object
gene_track = GeneRegionTrack(txdb, chr='chr21', genome='hg38')
```
Once we have downloaded the annotation, and imported the signal profiles into **R** we are ready to visualize the data.
We will again use the `Gviz` library. We first define the coordinate system: an ideogram track, which will show the position of our current viewpoint on the chromosome, and a genome axis track, which will show the exact coordinates.
```
# load Gviz package
library(Gviz)
# fetches the chromosome length information
hg_chrs = getChromInfoFromUCSC('hg38')
hg_chrs = subset(hg_chrs, (grepl('chr21$',chrom)))
# convert data.frame to named vector
seqlengths = with(hg_chrs, setNames(size, chrom))
# constructs the ideogram track
chr_track = IdeogramTrack(
chromosome = 'chr21',
genome = 'hg38'
)
# constructs the coordinate system
axis = GenomeAxisTrack(
range = GRanges('chr21', IRanges(1, width=seqlengths))
)
```
We use a loop to convert the signal profiles into a `DataTrack` object.
```
# use a lapply on the imported bw files to create the track objects
# we loop over experiment names, and select the corresponding object
# within the function
data_tracks = lapply(names(chip_profiles), function(exp_name){
# chip_profiles[[exp_name]] - selects the
# proper experiment using the exp_name
DataTrack(
range = chip_profiles[[exp_name]],
name = exp_name,
# type of the track
type = 'h',
# line width parameter
lwd = 5
)
})
```
We are finally ready to create the genome screenshot.
We will focus on an extended region around the URB1 gene.
```
# select the start coordinate for the URB1 gene
start = min(start(subset(gtf, gene_name == 'URB1')))
# select the end coordinate for the URB1 gene
end = max(end(subset(gtf, gene_name == 'URB1')))
# plot the signal profiles around the URB1 gene
plotTracks(
trackList = c(chr_track, axis, gene_track, data_tracks),
# relative track sizes
sizes = c(1,1,1,1,1,1),
# background color
background.title = "black",
# controls visualization of gene sets
collapseTranscripts = "longest",
transcriptAnnotation = "symbol",
# coordinates to visualize
from = start - 5000,
to = end + 5000
)
```
FIGURE 9\.11: ChIP\-seq signal around the URB1 gene.
Figure [9\.11](peak-calling.html#fig:chip-type-plot-gviz) shows the signal profiles around the URB1 gene. The H3K4me3 signal profile contains a strong narrow peak at the transcription start site. H3K36me3 shows strong enrichment in the gene body, while the POL2 ChIP shows a mixed profile, with a strong peak at the TSS and an enrichment over the gene body.
### 9\.6\.2 Peak calling: Sharp peaks
We will now use the `normR` (Helmuth, Li, Arrigoni, et al. [2016](#ref-helmuth_2016)) package for peak calling in sharp and broad peak experiments.
Select the input files. Since `normR` does not support the usage of biological
replicates, we will showcase the peak calling on one of the CTCF samples.
```
# full path to the ChIP data file
chip_file = file.path(data_path, 'GM12878_hg38_CTCF_r1.chr21.bam')
# full path to the Control data file
control_file = file.path(data_path, 'GM12878_hg38_Input_r5.chr21.bam')
```
To understand the dynamic range of enrichment, we will create a scatter plot
showing the strength of the signal in the CTCF and Input samples.
Let us first count the reads in 1\-kb windows, and normalize them to counts per
million sequenced reads.
```
# as previously done, we calculate the cpm for each experiment
library(GenomicRanges)
library(GenomicAlignments)
# select the chromosome
hg_chrs = getChromInfoFromUCSC('hg38')
hg_chrs = subset(hg_chrs, grepl('chr21$',chrom))
seqlengths = with(hg_chrs, setNames(size, chrom))
# define the windows
tilling_window = unlist(tileGenome(seqlengths, tilewidth=1000))
# count the reads
counts = summarizeOverlaps(
features = tilling_window,
reads = c(chip_file, control_file)
)
# normalize read counts
counts = assays(counts)[[1]]
cpm = t(t(counts)*(1000000/colSums(counts)))
```
We can now plot the ChIP versus Input signal:
```
library(ggplot2)
# convert the matrix into a data.frame for ggplot
cpm = data.frame(cpm)
ggplot(
data = cpm,
aes(
x = GM12878_hg38_Input_r5.chr21.bam,
y = GM12878_hg38_CTCF_r1.chr21.bam)
) +
geom_point() +
geom_abline(slope = 1) +
theme_bw() +
scale_fill_brewer(palette='Set2') +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5),
axis.text.x = element_text(angle = 90, hjust = 1)) +
xlab('Input CPM') +
ylab('CTCF CPM') +
ggtitle('ChIP versus Input')
```
FIGURE 9\.12: Comparison of CPM values between ChIP and Input experiments. Good ChIP experiments should always show enrichment.
Regions above the diagonal, in Figure [9\.12](peak-calling.html#fig:peak-calling-sharp-plot), show
higher enrichment in the ChIP samples, while the regions below the diagonal
show higher enrichment in the Input samples.
Let us now perform peak calling. `normR` usage is deceptively simple; we need to provide the locations of the ChIP and Control read files, and the genome version, to the `enrichR()` function. The function will automatically create tiling windows (250bp by default), count the number of reads in each window, and fit a mixture of binomial distributions.
```
library(normr)
# peak calling using chip and control
ctcf_fit = enrichR(
# ChIP file
treatment = chip_file,
# control file
control = control_file,
# genome version
genome = "hg38",
# print intermediary steps during the analysis
verbose = FALSE)
```
With the summary function we can take a look at the results:
```
summary(ctcf_fit)
```
```
## NormRFit-class object
##
## Type: 'enrichR'
## Number of Regions: 12353090
## Number of Components: 2
## Theta* (naive bg): 0.137
## Background component B: 1
##
## +++ Results of fit +++
## Mixture Proportions:
## Background Class 1
## 97.72% 2.28%
## Theta:
## Background Class 1
## 0.103 0.695
##
## Bayesian Information Criterion: 539882
##
## +++ Results of binomial test +++
## T-Filter threshold: 4
## Number of Regions filtered out: 12267164
## Significantly different from background B based on q-values:
## TOTAL:
## *** ** * . n.s.
## Bins 0 627 120 195 87 84897
## % 0.000 0.711 0.847 1.068 1.166 96.209
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 'n.s.'
```
The summary function shows that most of the regions of chromosome 21 correspond
to the background: \\(97\.72%\\). In total we have \\(1029\=(627\+120\+195\+87\)\\) significantly enriched regions.
We will now extract the regions into a `GRanges` object.
The `getRanges()` function extracts the regions from the model. Using the
`getQvalue()`, and `getEnrichment()` function we assign to our regions
the statistical significance and calculated enrichment.
In order to identify only highly significant regions,
we keep only ranges where the false discovery rate (q value) is below \\(0\.01\\).
```
# extracts the ranges
ctcf_peaks = getRanges(ctcf_fit)
# annotates the ranges with the supporting p value
ctcf_peaks$qvalue = getQvalues(ctcf_fit)
# annotates the ranges with the calculated enrichment
ctcf_peaks$enrichment = getEnrichment(ctcf_fit)
# selects the ranges which correspond to the enriched class
ctcf_peaks = subset(ctcf_peaks, !is.na(component))
# filter by a stringent q value threshold
ctcf_peaks = subset(ctcf_peaks, qvalue < 0.01)
# order the peaks based on the q value
ctcf_peaks = ctcf_peaks[order(ctcf_peaks$qvalue)]
```
```
## GRanges object with 724 ranges and 3 metadata columns:
## seqnames ranges strand | component qvalue enrichment
## <Rle> <IRanges> <Rle> | <integer> <numeric> <numeric>
## [1] chr21 43939251-43939500 * | 1 4.69881e-140 1.37891
## [2] chr21 43646751-43647000 * | 1 2.52006e-137 1.42361
## [3] chr21 43810751-43811000 * | 1 1.86404e-121 1.30519
## [4] chr21 43939001-43939250 * | 1 2.10822e-121 1.19820
## [5] chr21 37712251-37712500 * | 1 6.35711e-118 1.70989
## ... ... ... ... . ... ... ...
## [720] chr21 38172001-38172250 * | 1 0.00867374 0.951189
## [721] chr21 38806001-38806250 * | 1 0.00867374 0.951189
## [722] chr21 42009501-42009750 * | 1 0.00867374 0.656253
## [723] chr21 46153001-46153250 * | 1 0.00867374 0.951189
## [724] chr21 46294751-46295000 * | 1 0.00867374 0.722822
## -------
## seqinfo: 24 sequences from an unspecified genome
```
After stringent q value filtering we are left with \\(724\\) peaks. For ease of downstream analysis, we will limit the sequence levels to
chromosome 21\.
```
seqlevels(ctcf_peaks, pruning.mode='coarse') = 'chr21'
```
Let’s export the peaks into a .txt file which we can use downstream in the analysis.
```
# write the peak locations into a txt table
write.table(ctcf_peaks, file.path(data_path, 'CTCF_peaks.txt'),
row.names=F, col.names=T, quote=F, sep='\t')
```
We can now repeat the CTCF versus Input plot, and label the significantly enriched peaks. Using `countOverlaps()` we mark which of our 1\-kb regions contained significant peaks.
```
# find enriched tilling windows
enriched_regions = countOverlaps(tilling_window, ctcf_peaks) > 0
```
```
library(ggplot2)
cpm$enriched_regions = enriched_regions
ggplot(
data = cpm,
aes(
x = GM12878_hg38_Input_r5.chr21.bam,
y = GM12878_hg38_CTCF_r1.chr21.bam,
color = enriched_regions
)) +
geom_point() +
geom_abline(slope = 1) +
theme_bw() +
scale_fill_brewer(palette='Set2') +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5),
axis.text.x = element_text(angle = 90, hjust = 1)) +
xlab('Input CPM') +
ylab('CTCF CPM') +
ggtitle('ChIP versus Input') +
scale_color_manual(values=c('gray','red'))
```
FIGURE 9\.13: Comparison of signal between ChIP and input samples. Red labeled dots correspond to called peaks.
Figure [9\.13](peak-calling.html#fig:peak-calling-sharp-peak-calling-plot) shows that `normR`
identified all of the regions above the diagonal as statistically significant.
It has, however, also labeled a significant number of regions below the diagonal.
Because of its sophisticated statistical model,
`normR` has greater sensitivity, and these peaks might really be enriched regions;
it is worth investigating their nature. This is left as an exercise
to the reader.
We can now create a genome browser screenshot around a peak region.
This will show us what kind of signal properties have contributed to the peak calling.
We would expect to see a strong, bell\-shaped, enrichment in the ChIP sample, and
uniform noise in the Input sample.
Let us now visualize the signal around the most enriched peak. The following function takes as input a **.bam** file, and loads the bam into R.
It extends the reads to a size of 200 bp, and creates the coverage vector.
```
# calculate the coverage for one bam file
calculateCoverage = function(
bam_file,
extend = 200
){
# load reads into R
reads = readGAlignments(bam_file)
# convert reads into a GRanges object
reads = granges(reads)
# resize the reads to 200bp
reads = resize(reads, width=extend, fix='start')
# get the coverage vector
cov = coverage(reads)
# normalize the coverage vector to the sequencing depth
cov = round(cov * (1000000/length(reads)),2)
# convert the coverage to a GRanges object
cov = as(cov, 'GRanges')
# keep only chromosome 21
seqlevels(cov, pruning.mode='coarse') = 'chr21'
return(cov)
}
```
Let’s apply the function to the ChIP and input samples.
```
# calculate coverage for the ChIP file
ctcf_cov = calculateCoverage(chip_file)
# calculate coverage for the control file
cont_cov = calculateCoverage(control_file)
```
Using `Gviz`, we will construct the layered tracks.
First, we layout the genome coordinates:
```
# load Gviz and get the chromosome coordinates
library(Gviz)
chr_track = IdeogramTrack('chr21', 'hg38')
axis = GenomeAxisTrack(
range = GRanges('chr21', IRanges(1, width=seqlengths))
)
```
Then, the peak locations:
```
# peaks track
peaks_track = AnnotationTrack(ctcf_peaks, name = "CTCF Peaks")
```
And finally, the signal files:
```
chip_track = DataTrack(
range = ctcf_cov,
name = "CTCF",
type = 'h',
lwd = 3
)
cont_track = DataTrack(
range = cont_cov,
name = "Input",
type = 'h',
lwd=3
)
```
```
plotTracks(
trackList = list(chr_track, axis, peaks_track, chip_track, cont_track),
sizes = c(.2,.5,.5,1,1),
background.title = "black",
from = start(ctcf_peaks)[1] - 1000,
to = end(ctcf_peaks)[1] + 1000
)
```
FIGURE 9\.14: ChIP and Input signal profile around the peak centers.
In Figure [9\.14](peak-calling.html#fig:peak-calling-signal-profile-plot), the ChIP sample looks as expected.
Although the Input sample shows an enrichment,
it is important to compare the scales on both samples. The normalized ChIP signal goes up
to \\(2500\\), while the maximum value in the input sample is only \\(60\\).
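The two maxima can be checked directly; the `score` column of the coerced `GRanges` objects holds the normalized coverage values:
```
# maximum normalized coverage in the ChIP and Input samples
max(ctcf_cov$score)
max(cont_cov$score)
```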
### 9\.6\.3 Peak calling: Broad regions
We will now use `normR` to call peaks for the H3K36me3 histone modification,
which is associated with gene bodies of expressed genes. We define the ChIP and Input files:
```
# fetch the ChIP-file for H3K36me3
chip_file = file.path(data_path, 'GM12878_hg38_H3K36me3.chr21.bam')
# fetch the corresponding input file
control_file = file.path(data_path, 'GM12878_hg38_Input_r5.chr21.bam')
```
Because H3K36me3 spans broad domains, it is necessary to increase the
tiling window size which will be used for counting.
Using the `countConfigSingleEnd()` function, we will set the tiling window size
to 5000 base pairs.
```
library(normr)
# define the window width for the counting
countConfiguration = countConfigSingleEnd(binsize = 5000)
```
```
# find broad peaks using enrichR
h3k36_fit = enrichR(
# ChIP file
treatment = chip_file,
# control file
control = control_file,
# genome version
genome = "hg38",
verbose = FALSE,
# window size for counting
countConfig = countConfiguration)
```
```
summary(h3k36_fit)
```
```
## NormRFit-class object
##
## Type: 'enrichR'
## Number of Regions: 617665
## Number of Components: 2
## Theta* (naive bg): 0.197
## Background component B: 1
##
## +++ Results of fit +++
## Mixture Proportions:
## Background Class 1
## 85.4% 14.6%
## Theta:
## Background Class 1
## 0.138 0.442
##
## Bayesian Information Criterion: 741525
##
## +++ Results of binomial test +++
## T-Filter threshold: 5
## Number of Regions filtered out: 610736
## Significantly different from background B based on q-values:
## TOTAL:
## *** ** * . n.s.
## Bins 0 1005 314 381 237 4992
## % 0.00 9.18 12.04 15.52 17.68 45.58
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 'n.s.'
```
The summary function shows that we get \\(1937\=(1005\+314\+381\+237\)\\) significantly enriched regions. We will extract the enriched regions, and plot them in the same way as we did for
CTCF.
```
# get the locations of broad peaks
h3k36_peaks = getRanges(h3k36_fit)
# extract the qvalue and enrichment
h3k36_peaks$qvalue = getQvalues(h3k36_fit)
h3k36_peaks$enrichment = getEnrichment(h3k36_fit)
# select proper peaks
h3k36_peaks = subset(h3k36_peaks, !is.na(component))
h3k36_peaks = subset(h3k36_peaks, qvalue < 0.01)
h3k36_peaks = h3k36_peaks[order(h3k36_peaks$qvalue)]
# collapse nearby enriched regions
h3k36_peaks = reduce(h3k36_peaks)
```
```
# construct the data tracks for the H3K36me3 and Input files
h3k36_cov = calculateCoverage(chip_file)
data_tracks = list(
h3k36 = DataTrack(h3k36_cov, name = 'h3k36_cov', type='h', lwd=3),
input = DataTrack(cont_cov, name = 'Input', type='h', lwd=3)
)
```
```
# define the window for the visualization
start = min(start(h3k36_peaks[2])) - 25000
end = max(end(h3k36_peaks[2])) + 25000
# create the peak track
peak_track = AnnotationTrack(reduce(h3k36_peaks), name='H3K36me3')
# plots the enriched region
plotTracks(
trackList = c(chr_track, axis, gene_track, peak_track, data_tracks),
sizes = c(.5,.5,.5,.1,1,1),
background.title = "black",
collapseTranscripts = "longest",
transcriptAnnotation = "symbol",
from = start,
to = end
)
```
FIGURE 9\.15: Visualization of H3K36me3 ChIP signal on a called broad peak.
Figure [9\.15](peak-calling.html#fig:peak-calling-broad-gviz) shows a highly enriched H3K36me3
region covering the gene body, as expected.
### 9\.6\.4 Peak quality control
Peak calling is not a mathematically defined procedure; it is impossible
to unambiguously define what a “peak” is. Therefore, all peak
calling procedures use heuristics and statistical models which have been
shown to work well in specific use cases.
After peak calling, it is always necessary to check
whether the defined peaks really are located in enriched regions, and in addition,
use prior knowledge to ascertain whether the peaks correspond to known biology.
Peak calling can falsely identify enriched regions if the input
sample is not sequenced to the proper depth. Because the input samples
correspond to **de facto** whole genome sequencing, and the ChIP procedure
enriches for a subset of the genome, it can often happen that many regions
in the genome are not sufficiently covered by the Input sample.
Such variability in the signal profile of Input samples can cause a region
to be defined as a peak, apparently enriched in the ChIP sample, while in reality it is depleted in the
Input due to under\-sampling. For example, Figure [9\.15](peak-calling.html#fig:peak-calling-broad-gviz) above, showing
an enriched H3K36me3 region over a gene body, also shows a large depletion in the Input
sample over the same region. Such depletion should be a concern and merits
further investigation.
The quality of enrichment can be checked by calculating the percentage of reads within peaks for both
ChIP and Input samples. ChIP samples should have a high percentage of reads in peaks,
while for the input samples, the percentage of reads should correspond to the
percentage of genome covered by peaks.
For transcription factor ChIP experiments, an important control is to determine whether
the peak regions contain sequences which are known to be bound
by the corresponding transcription factor \- whether they contain
known transcription factor binding motifs.
Transcription factor binding motifs are sequence models which describe
the binding propensity of a transcription factor for different DNA sequences.
Such sequence models can be downloaded from public databases and compared to see
whether there is a positional enrichment around our peaks.
We will now calculate the percentage of reads within peaks for the H3K36me3 experiment.
Subsequently, we will download the known CTCF sequence model, and compare it
to our peak regions.
#### 9\.6\.4\.1 Percentage of reads in peaks
To calculate the reads in peaks, we will first extract the number of reads
in each tiling window from the fit object produced by `normR`.
This is done using the `getCounts()` function.
We will then use the q\-value to define which tiling windows correspond
to peaks, and count the number of reads within and outside peaks.
```
# extract the per-tiling-window counts from the fit object
h3k36_counts = data.frame(getCounts(h3k36_fit))
# change the column names of the data.frame
colnames(h3k36_counts) = c('Input','H3K36me3')
# extract the q-value corresponding to each bin
h3k36_counts$qvalue = getQvalues(h3k36_fit)
# define which regions are peaks using a q value cutoff
h3k36_counts$enriched[is.na(h3k36_counts$qvalue)] = 'Not Peak'
h3k36_counts$enriched[h3k36_counts$qvalue > 0.05] = 'Not Peak'
h3k36_counts$enriched[h3k36_counts$qvalue <= 0.05] = 'Peak'
# remove the q value column
h3k36_counts$qvalue = NULL
# reshape the data.frame into a long format
h3k36_counts_df = tidyr::pivot_longer(
data = h3k36_counts,
cols = -enriched,
names_to = 'experiment',
values_to = 'counts'
)
# sum the number of reads in the Peak and Not Peak regions
# (group_by/summarize/mutate below come from dplyr)
library(dplyr)
h3k36_counts_df = group_by(.data = h3k36_counts_df, experiment, enriched)
h3k36_counts_df = summarize(.data = h3k36_counts_df, num_of_reads = sum(counts))
# calculate the percentage of reads.
h3k36_counts_df = group_by(.data = h3k36_counts_df, experiment)
h3k36_counts_df = mutate(.data = h3k36_counts_df, total=sum(num_of_reads))
h3k36_counts_df$percentage = with(h3k36_counts_df, round(num_of_reads/total,2))
# show the summary table
h3k36_counts_df
```
```
## # A tibble: 4 x 5
## # Groups: experiment [2]
## experiment enriched num_of_reads total percentage
## <chr> <chr> <int> <int> <dbl>
## 1 H3K36me3 Not Peak 67623 158616 0.43
## 2 H3K36me3 Peak 90993 158616 0.570
## 3 Input Not Peak 492369 648196 0.76
## 4 Input Peak 155827 648196 0.24
```
We can now plot the percentage of reads in peaks:
```
ggplot(
data = h3k36_counts_df,
aes(
x = experiment,
y = percentage,
fill = enriched
)) +
geom_bar(stat='identity', position='dodge') +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=12,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('Experiment') +
ylab('Percentage of reads in region') +
ggtitle('Percentage of reads in peaks for H3K36me3') +
scale_fill_manual(values=c('gray','red'))
```
FIGURE 9\.16: Percentage of ChIP reads in called peaks. Higher percentage indicates higher ChIP quality.
Figure [9\.16](peak-calling.html#fig:peak-quality-counts-plot) shows that the ChIP sample is
clearly enriched in the peak regions.
The percentage of reads in peaks will depend on the quality of the antibody (strength of
enrichment) and the total size of the peaks bound by the protein of interest.
If the total size of peaks is small relative to the genome size, we can expect
the percentage of reads in peaks to be small as well.
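Related to this, we can compare the Input percentage with the fraction of the genome (here, only chromosome 21) covered by peaks. A minimal sketch, assuming the `h3k36_peaks` and `seqlengths` objects defined above:
```
# fraction of chromosome 21 covered by H3K36me3 peaks;
# for a well-behaved Input sample, its reads-in-peaks
# percentage should be close to this value
peak_fraction = sum(width(reduce(h3k36_peaks))) / seqlengths[['chr21']]
round(peak_fraction, 2)
```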
#### 9\.6\.4\.2 DNA motifs on peaks
Well\-studied transcription factors have publicly available transcription
factor binding motifs.
If such a model is available for our transcription factor of interest, we
can use it to check the quality of our ChIP data.
Two common measures are used for this purpose:
1. Percentage of peaks containing the motif of interest.
2. Positional distribution of the motif \- the distribution of motif locations should be centered on the peak centers.
##### 9\.6\.4\.2\.1 Representing motifs as matrices
In order to calculate the percentage of CTCF peaks which contain a known CTCF
motif, we need to find the CTCF motif and have the computational tools to search for it. DNA binding motifs can be extracted from the `MotifDb` Bioconductor
package, which is an agglomeration of multiple motif databases.
```
# load the MotifDB package
library(MotifDb)
# fetch the CTCF motif from the data base
motifs = query(query(MotifDb, 'Hsapiens'), 'CTCF')
# show all available ctcf motifs
motifs
```
```
## MotifDb object of length 12
## | Created from downloaded public sources: 2013-Aug-30
## | 12 position frequency matrices from 8 sources:
## | HOCOMOCOv10: 2
## | HOCOMOCOv11-core-A: 2
## | JASPAR_2014: 1
## | JASPAR_CORE: 1
## | SwissRegulon: 2
## | jaspar2016: 1
## | jaspar2018: 2
## | jolma2013: 1
## | 1 organism/s
## | Hsapiens: 12
## Hsapiens-SwissRegulon-CTCFL.SwissRegulon
## Hsapiens-SwissRegulon-CTCF.SwissRegulon
## Hsapiens-HOCOMOCOv10-CTCFL_HUMAN.H10MO.A
## Hsapiens-HOCOMOCOv10-CTCF_HUMAN.H10MO.A
## Hsapiens-HOCOMOCOv11-core-A-CTCFL_HUMAN.H11MO.0.A
## ...
## Hsapiens-JASPAR_2014-CTCF-MA0139.1
## Hsapiens-jaspar2016-CTCF-MA0139.1
## Hsapiens-jaspar2018-CTCF-MA0139.1
## Hsapiens-jaspar2018-CTCFL-MA1102.1
## Hsapiens-jolma2013-CTCF
```
We will extract the CTCF motif from the `MotifDb` (Khan, Fornes, Stigliani, et al. [2018](#ref-khan_2018)) database.
```
# based on the MotifDB version, the location of the CTCF motif
# might change, if you do not get the expected results please try
# to subset with different indices
ctcf_motif = motifs[[1]]
```
The motifs are usually represented as matrices of 4\-by\-N dimensions, where each of the 4 rows corresponds to one nucleotide (A, C, G, T).
The number of columns designates the width of the region bound by the transcription factor or, equivalently, the length of the motif that the protein recognizes.
Each element of the matrix contains the probability of observing the corresponding
nucleotide at that position.
For example, for the CTCF matrix in Table [9\.1](peak-calling.html#tab:peakqualityshow), the probability of observing a thymine at
the first position of the motif, \\(p\_{i\=1,k\=4}\\), is 0\.16 (1st column, 4th row).
Such a matrix, where each column is a probability distribution over the four nucleotides,
is called a position frequency matrix (PFM). In some sources, this matrix is also called a “position probability matrix (PPM)”. One way to construct such matrices is to take experimentally verified sequences that are bound by the protein of interest and run a motif\-finding algorithm on them.
TABLE 9\.1: Position Frequency Matrix (PFM) for the CTCF motif
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | 0\.17 | 0\.23 | 0\.29 | 0\.10 | 0\.33 | 0\.06 | 0\.05 | 0\.04 | 0\.02 | 0 | 0\.25 | 0\.00 | 0 | 0\.05 | 0\.25 | 0\.00 | 0\.17 | 0 | 0\.02 | 0\.19 |
| C | 0\.42 | 0\.28 | 0\.30 | 0\.32 | 0\.11 | 0\.33 | 0\.56 | 0\.00 | 0\.96 | 1 | 0\.67 | 0\.69 | 1 | 0\.04 | 0\.07 | 0\.42 | 0\.15 | 0 | 0\.06 | 0\.43 |
| G | 0\.25 | 0\.23 | 0\.26 | 0\.27 | 0\.42 | 0\.55 | 0\.05 | 0\.83 | 0\.01 | 0 | 0\.03 | 0\.00 | 0 | 0\.02 | 0\.53 | 0\.55 | 0\.05 | 1 | 0\.87 | 0\.15 |
| T | 0\.16 | 0\.27 | 0\.15 | 0\.31 | 0\.14 | 0\.06 | 0\.33 | 0\.13 | 0\.00 | 0 | 0\.06 | 0\.31 | 0 | 0\.89 | 0\.15 | 0\.03 | 0\.62 | 0 | 0\.05 | 0\.23 |
Such a matrix can be used to calculate the probability that the transcription
factor will bind to any given sequence. However, computationally, it is easier to work with summation rather than multiplication. In addition, the simple probabilistic model does not take into account the background probability of observing a certain base at a given position. We can correct for background base frequencies by dividing the individual probability, \\(p\_{i,k}\\), in each cell of the matrix by the background probability for a given base, \\(B\_k\\). We can then take the logarithm of that quantity to calculate a log\-likelihood and bring everything to log\-scale as follows: \\(Score\_{i,k}\=log\_2(p\_{i,k}/B\_k)\\). We can now calculate a score for any given
sequence by summing up the base\-position\-specific scores we obtain from the log\-scaled matrix. This matrix is formally called a position\-specific scoring matrix (PSSM) or position\-specific weight matrix (PWM). We can use this matrix to scan the genome in a sliding window manner and calculate a score for each window. Usually, a cutoff value is needed to call a motif hit. The higher the score a sequence gets from the PWM, the better the match. The traditional algorithms we will use in the following sections take 80% of the maximum rescaled score obtainable from a PWM as the default cutoff for a hit. The rescaling is simple min\-max rescaling: the score is rescaled by subtracting the minimum score and dividing by \\(max(PWMscore)\-min(PWMscore)\\). The motif scanning approach is illustrated in Figure [9\.17](peak-calling.html#fig:FigurePWMScanning). In this example, ACACT is not considered a hit because its score corresponds to only \\(15\.6\\)% of the rescaled maximum score.
FIGURE 9\.17: PWM scanning principle. A genomic sequence is scanned by a PWM matrix. This matrix is used to measure how likely it is that the transcription factor will bind each nucleotide in each position. Here we are looking at how likely it is that our TF will bind to the sequence ACACT. The score for this sequence is \-3\.6\. The maximal score obtainable by the PWM is 7\.2 and minimum is \-5\.6\. After min\-max rescaling, \-3\.6 corresponds to a 15% score and ACACT is not considered a hit.
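To make the scoring and rescaling scheme concrete, here is a minimal sketch in base R. It assumes a uniform background probability of 0\.25 per base and adds a small pseudocount to avoid taking the logarithm of zero; it mirrors the procedure described above rather than the exact implementation of any dedicated motif\-scanning tool:
```
# convert the PFM into a PWM, assuming a uniform background
# probability of 0.25 per base; the pseudocount avoids log2(0)
pwm_matrix = log2((ctcf_motif + 1e-4) / 0.25)

# score a sequence of motif length by summing the
# base- and position-specific scores
scoreSequence = function(sequence, pwm){
    bases = strsplit(sequence, '')[[1]]
    rows = match(bases, rownames(pwm))
    sum(pwm[cbind(rows, seq_along(bases))])
}

# min-max rescale a raw score into a relative score
max_score = sum(apply(pwm_matrix, 2, max))
min_score = sum(apply(pwm_matrix, 2, min))
relativeScore = function(score){
    (score - min_score) / (max_score - min_score)
}

# the best possible sequence should yield a relative score of 1
best_sequence = paste(
    rownames(pwm_matrix)[apply(pwm_matrix, 2, which.max)],
    collapse = '')
relativeScore(scoreSequence(best_sequence, pwm_matrix))
```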
##### 9\.6\.4\.2\.2 Representing motifs as sequence logos
Using the PFM, we can calculate the information content of each position in the matrix.
The information content quantifies the contribution of each nucleotide to the
cumulative binding preference. This tells us how important each nucleotide is for the binding. It additionally allows us to visually represent the probability matrices as sequence logos.
The information content is quantified as relative entropy. It ranges from \\(0\\), no information,
to \\(2\\), maximal information. For a column in the PFM, the entropy is calculated as follows:
\\\[
entropy \= \-\\sum\\limits\_{k\=1}^n p\_{i,k}\\log\_2(p\_{i,k})
\\]
\\(p\_{i,k}\\) is the probability of observing base \\(k\\) in column \\(i\\) of the PFM. In other words, \\(p\_{i,k}\\) is simply the value of the cell in the PFM. The entropy value is high when the probabilities of the bases are similar, and low when it is much more probable that only one base occurs in a given column. The “relative” part comes from the fact that we compare the entropy calculated for a column to the maximum entropy we can obtain. If all bases are equally likely for a position in the PFM, we get the maximum entropy, and we compare our observed entropy to that maximum. The maximum entropy is simply \\(log\_2{n}\\), where \\(n\\) is the number of letters in the alphabet; in our case we have 4 letters: A, C, G and T. The information content is then obtained by subtracting the observed entropy for a column from the maximum entropy, which translates to the following equation:
\\\[
IC\=log\_2(n)\+\\sum\\limits\_{k\=1}^n p\_{i,k}\\log\_2(p\_{i,k})
\\]
The information content, \\(IC\\), in the preceding equation, will be high if a base has a high probability of occurrence and low if all bases are more or less equally likely to occur.
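As a small worked example of the preceding equation, we can compute the per\-column information content of the CTCF PFM fetched above; the tiny pseudocount implements the convention that a zero probability contributes zero to the sum:
```
# information content per column of the CTCF PFM, with n = 4 bases
column_ic = apply(ctcf_motif, 2, function(p){
    log2(4) + sum(p * log2(p + 1e-10))
})
round(column_ic, 2)
```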
We can visualize the matrix by drawing the letters weighted by their probabilities in the PFM. This approach is shown on the left\-hand side of Figure [9\.18](peak-calling.html#fig:peak-quality-seqLogo-plot). In addition, we can use the per\-column information content to weight the probabilities, so that columns with strongly preferred letters are taller. This approach is shown on the right\-hand side of Figure [9\.18](peak-calling.html#fig:peak-quality-seqLogo-plot). Below, we will use the `seqLogo` package to visualize the CTCF motif in these two ways.
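A short sketch of the two visualizations, assuming the `seqLogo` Bioconductor package is installed; `makePWM()` expects a 4\-by\-N matrix whose columns sum to one, which is the case for our PFM:
```
# load the seqLogo package
library(seqLogo)
# left-hand side: letters weighted by base probabilities only
seqLogo(makePWM(ctcf_motif), ic.scale = FALSE)
# right-hand side: columns scaled by their information content
seqLogo(makePWM(ctcf_motif), ic.scale = TRUE)
```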
FIGURE 9\.18: CTCF sequence motif visualized as a sequence logo. Y\-axis ranges from zero to two, and corresponds to the amount of information each base in the motif contributes to the overall motif. The larger the letter, the greater the probability of observing just one defined base on the designated position.
##### 9\.6\.4\.2\.3 Percentage of peaks with the motif
Since we now understand how DNA motifs are represented and scored, we can start annotating the CTCF peaks with the motif. First, we will extend the peak
regions to \+/\- 200 bp around the peak center.
Because the average fragment size is 200 bp, 400 nucleotides covers the
expected variation in the position of the true binding location.
```
# extend the peak regions
ctcf_peaks_resized = resize(ctcf_peaks, width = 400, fix = 'center')
```
Now we use the `BSgenome` package to
extract the sequences corresponding to the peak regions.
```
# load the human genome sequence
library(BSgenome.Hsapiens.UCSC.hg38)
# extract the sequences around the peaks
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38, ctcf_peaks_resized)
```
Once we have extracted the sequences, we can use the CTCF motif to
scan each sequence and determine the probability of CTCF binding.
For this we use the `TFBSTools` (Tan and Lenhard [2016](#ref-TFBSTools)) package.
We first convert the raw probability matrix into a `PWMMatrix` object,
which can then be used for efficient scanning.
```
# load the TFBS tools package
library(TFBSTools)
# convert the matrix into a PWM object
ctcf_pwm = PWMatrix(
ID = 'CTCF',
profileMatrix = ctcf_motif
)
```
We can now use the `searchSeq()` function to scan each sequence for the motif occurrence.
Because motif scanning produces a continuous binding score, we need to set a cutoff to
determine when a sequence contains the motif, and when it doesn’t.
The cutoff is set by determining the maximal possible score produced by the motif matrix;
a percentage of that score is then taken as the threshold value.
For example, if the best possible sequence has a score of 1\.4,
then we define a threshold of 80% of 1\.4, which is 1\.12; any sequence which
scores less than 1\.12 is not marked as being bound by the protein.
For the CTCF, we mark any peak containing a sequence with \> 80% of the maximal rescaled score or “relative score” as a positive hit.
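The scanning call that produces the `hits` object shown below is not included above. A sketch of how it can be generated, mirroring the `searchSeq()` call used in the motif localization section that follows:
```
# scan each peak sequence with the PWM, keeping matches which
# score above 80% of the maximal score
hits = searchSeq(ctcf_pwm, seq, min.score = "80%", strand = "*")
# convert the hits into a data.frame and inspect them
hits = as.data.frame(hits)
head(hits)
```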
```
## seqnames source feature start end absScore relScore strand ID
## 1 1 TFBS TFBS 44 63 11.9 0.921 - CTCF
## 2 1 TFBS TFBS 102 121 11.0 0.839 - CTCF
## 3 2 TFBS TFBS 151 170 11.5 0.881 + CTCF
## 4 4 TFBS TFBS 294 313 11.9 0.921 - CTCF
## 5 4 TFBS TFBS 352 371 11.0 0.839 - CTCF
## 6 5 TFBS TFBS 164 183 10.9 0.831 - CTCF
```
A common diagnostic plot is to graph a reverse cumulative distribution of
peak occurrences.
On the x\-axis we rank the peaks, with the most highly enriched peak in the
first position, and the least enriched peak in the last position.
We then walk from the lowest to the highest ranking and measure the
percentage of peaks containing the motif.
```
# label which peaks contain CTCF motifs
motif_hits_df = data.frame(
peak_order = 1:length(ctcf_peaks)
)
motif_hits_df$contains_motif = motif_hits_df$peak_order %in% hits$seqnames
motif_hits_df = motif_hits_df[order(-motif_hits_df$peak_order),]
# calculate the percentage of peaks with motif for peaks of descending strength
motif_hits_df$perc_peaks = with(motif_hits_df,
cumsum(contains_motif) / max(peak_order))
motif_hits_df$perc_peaks = round(motif_hits_df$perc_peaks, 2)
```
We can now visualize the percentage of peaks with matching CTCF motif.
```
# plot the cumulative distribution of motif hit percentages
ggplot(
motif_hits_df,
aes(
x = peak_order,
y = perc_peaks
)) +
geom_line(size=2) +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('Peak rank') +
ylab('Percentage of peaks with motif') +
ggtitle('Percentage of CTCF peaks with the CTCF motif')
```
FIGURE 9\.19: Percentage of peaks containing the motif. Higher percentage indicates a better ChIP\-experiment, and a better peak calling procedure.
Figure [9\.19](peak-calling.html#fig:peak-quality-scan-dist-plot)
shows that, when we take all peaks into account, \~45% of
the peaks contain a CTCF motif.
This is an excellent percentage and indicates a high\-quality ChIP experiment.
Our inability to locate the motif in \~50% of the sequences is not
necessarily a consequence of a poor experiment; sometimes
it is a result of the molecular mechanism by which the transcription factor
binds. If a transcription factor has multiple, context\-dependent binding
modes, for example, if it binds indirectly to
a subset of regions through
an interacting partner, we will not observe a motif in those regions.
#### 9\.6\.4\.3 Motif localization
If the ChIP experiment was performed properly, we would expect the motif
to be localized just below the summit of each peak.
By plotting the motif localization around the ChIP peaks, we quantify
the uncertainty in the peak location.
We will first resize our peaks to regions spanning \+/\- 1 kb around the peak
center.
```
# resize the region around peaks to +/- 1kb
ctcf_peaks_resized = resize(ctcf_peaks, width = 2000, fix='center')
```
Now we perform the motif localization, as before.
```
# fetch the sequence
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38,ctcf_peaks_resized)
# convert the motif matrix to PWM, and scan the peaks
ctcf_pwm = PWMatrix(ID = 'CTCF', profileMatrix = ctcf_motif)
hits = searchSeq(ctcf_pwm, seq, min.score="80%", strand="*")
hits = as.data.frame(hits)
```
We now construct a plot, where the
X\-axis represents the \+/\- 1000 nucleotides around the peak, while the
Y\-axis shows the motif enrichment at each position.
```
# express the motif start position relative to the peak center
hits$position = hits$start - 1000
# plot the motif hits around peaks
ggplot(data=hits, aes(position)) +
geom_density(size=2) +
theme_bw() +
geom_vline(xintercept = 0, linetype=2, color='red', size=2) +
xlab('Position around the CTCF peaks') +
ylab('Per position percentage\nof motif occurrence') +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5))
```
FIGURE 9\.20: Transcription factor sequence motif localization with respect to the defined binding sites.
In Figure [9\.20](peak-calling.html#fig:chip-quality-motifloc-plot), we can see that the bulk of motif
hits are found within \\(\+/\-\\) 250 bp of the peak centers.
This means that the peak calling procedure was quite precise.
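We can also quantify this precision; a rough sketch using the motif start positions computed above (these are motif starts rather than motif centers, so the estimate is approximate):
```
# fraction of motif hits starting within +/- 250 bp of the peak centers
round(mean(abs(hits$position) <= 250), 2)
```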
### 9\.6\.5 Peak annotation
As the final step of quality control, we will visualize the distribution
of peaks in different functional genomic regions.
The purpose of this analysis is to check whether the location of the peaks
conforms to our prior knowledge.
This analysis is equivalent to the one we performed earlier for the distributions of reads.
First, we download the human gene models and construct the annotation hierarchy.
```
# download the annotation
hub = AnnotationHub()
gtf = hub[['AH61126']]
seqlevels(gtf, pruning.mode='coarse') = '21'
seqlevels(gtf, pruning.mode='coarse') = paste0('chr', seqlevels(gtf))
# create the annotation hierarchy
annotation_list = GRangesList(
    tss = promoters(subset(gtf, type=='gene'), 1000, 1000),
    exon = subset(gtf, type=='exon'),
    # whole gene bodies; because tss and exon take precedence in the
    # hierarchy, peaks assigned here are effectively intronic
    intron = subset(gtf, type=='gene')
)
```
The following function finds the genomic location of each peak, annotates
the peaks using the hierarchical prioritization,
and calculates the summary statistics.
The function contains four major parts:
1. Creating a disjoint set of peak regions.
2. Finding the overlapping annotation for each peak.
3. Annotating each peak with the corresponding annotation class.
4. Calculating summary statistics.
```
# function which annotates the location of each peak
annotatePeaks = function(peaks, annotation_list, name){
# ------------------------------------------------ #
# 1. getting disjoint regions
# collapse touching enriched regions
peaks = reduce(peaks)
# ------------------------------------------------ #
# 2. overlapping peaks and annotation
# find overlaps between the peaks and annotation_list
result = as.data.frame(findOverlaps(peaks, annotation_list))
# ------------------------------------------------ #
# 3. annotating peaks
# fetch annotation names
result$annotation = names(annotation_list)[result$subjectHits]
# rank by annotation precedence
result = result[order(result$subjectHits),]
# remove overlapping annotations
result = subset(result, !duplicated(queryHits))
# ------------------------------------------------ #
# 4. calculating statistics
# count the number of peaks in each annotation category
result = group_by(.data = result, annotation)
result = summarise(.data = result, counts = length(annotation))
# fetch the number of intergenic peaks
result = rbind(result,
data.frame(annotation = 'intergenic',
counts = length(peaks) - sum(result$counts)))
result$frequency = with(result, round(counts/sum(counts),2))
result$experiment = name
return(result)
}
```
Using the above\-defined `annotatePeaks()` function, we will now annotate the CTCF
and H3K36me3 peaks. First, we create a list which contains both the CTCF and H3K36me3 peaks.
```
peak_list = list(
CTCF = ctcf_peaks,
H3K36me3 = h3k36_peaks
)
```
Using the `lapply()` function, we apply `annotatePeaks()`
to each element of the list.
```
# calculate the distribution of peaks in annotation for each experiment
annot_peaks_list = lapply(names(peak_list), function(peak_name){
annotatePeaks(peak_list[[peak_name]], annotation_list, peak_name)
})
```
We use the `dplyr::bind_rows()` function to combine the CTCF and H3K36me3 annotation
statistics into one data frame.
```
# combine a list of data.frames into one data.frame
annot_peaks_df = dplyr::bind_rows(annot_peaks_list)
```
Finally, we visualize the results as bar plots. The resulting plot is in Figure [9\.21](peak-calling.html#fig:peak-annotation-plot), which shows that the H3K36me3 peaks are
located preferentially in gene bodies, as expected, while the CTCF peaks are
found preferentially in introns.
```
# plot the distribution of peaks in genomic features
ggplot(data = annot_peaks_df,
aes(
x = experiment,
y = frequency,
fill = annotation
)) +
geom_bar(stat='identity') +
scale_fill_brewer(palette='Set2') +
theme_bw()+
theme(
axis.text = element_text(size=18, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
ggtitle('Peak distribution in\ngenomic regions') +
xlab('Experiment') +
ylab('Frequency')
```
FIGURE 9\.21: Enrichment of transcription factor or histone modifications in functional genomic features.
expected variation in the position of the true binding location.
```
# extend the peak regions
ctcf_peaks_resized = resize(ctcf_peaks, width = 400, fix = 'center')
```
Now we use the `BSgenome` package to
extract the sequences corresponding to the peak regions.
```
# load the human genome sequence
library(BSgenome.Hsapiens.UCSC.hg38)
# extract the sequences around the peaks
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38, ctcf_peaks_resized)
```
Once we have extracted the sequences, we can use the CTCF motif to
scan each sequence and determine the probability of CTCF binding.
For this we use the `TFBSTools` (Tan and Lenhard [2016](#ref-TFBSTools)) package.
We first convert the raw probability matrix into a `PWMMatrix` object,
which can then be used for efficient scanning.
```
# load the TFBS tools package
library(TFBSTools)
# convert the matrix into a PWM object
ctcf_pwm = PWMatrix(
ID = 'CTCF',
profileMatrix = ctcf_motif
)
```
We can now use the `searchSeq()` function to scan each sequence for the motif occurrence.
Because the motif matrices are given a continuous binding score, we need to set a cutoff to
determine when a sequence contains the motif, and when it doesn’t.
The cutoff is set by determining the maximal possible score produced by the motif matrix;
a percentage of that score is then taken as the threshold value.
For example, if the best sequence would have a score of 1\.4 of being bound,
then we define a threshold of 80% of 1\.4, which is 1\.12; and any sequence which
scores less than 1\.12 would not be marked as being bound by the protein.
For the CTCF, we mark any peak containing a sequence with \> 80% of the maximal rescaled score or “relative score” as a positive hit.
```
## seqnames source feature start end absScore relScore strand ID
## 1 1 TFBS TFBS 44 63 11.9 0.921 - CTCF
## 2 1 TFBS TFBS 102 121 11.0 0.839 - CTCF
## 3 2 TFBS TFBS 151 170 11.5 0.881 + CTCF
## 4 4 TFBS TFBS 294 313 11.9 0.921 - CTCF
## 5 4 TFBS TFBS 352 371 11.0 0.839 - CTCF
## 6 5 TFBS TFBS 164 183 10.9 0.831 - CTCF
```
A common diagnostic plot is to graph a reverse cumulative distribution of
peak occurrences.
On the x\-axis we rank the peaks, with the most highly enriched peak in the
first position, and the least enriched peak in the last position.
We then walk from the lowest to the highest ranking and measure the
percentage of peaks containing the motif.
```
# label which peaks contain CTCF motifs
motif_hits_df = data.frame(
peak_order = 1:length(ctcf_peaks)
)
motif_hits_df$contains_motif = motif_hits_df$peak_order %in% hits$seqnames
motif_hits_df = motif_hits_df[order(-motif_hits_df$peak_order),]
# calculate the percentage of peaks with motif for peaks of descending strength
motif_hits_df$perc_peaks = with(motif_hits_df,
cumsum(contains_motif) / max(peak_order))
motif_hits_df$perc_peaks = round(motif_hits_df$perc_peaks, 2)
```
We can now visualize the percentage of peaks with matching CTCF motif.
```
# plot the cumulative distribution of motif hit percentages
ggplot(
motif_hits_df,
aes(
x = peak_order,
y = perc_peaks
)) +
geom_line(size=2) +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('Peak rank') +
ylab('Percetage of peaks with motif') +
ggtitle('Percentage of CTCF peaks with the CTCF motif')
```
FIGURE 9\.19: Percentage of peaks containing the motif. Higher percentage indicates a better ChIP\-experiment, and a better peak calling procedure.
Figure [9\.19](peak-calling.html#fig:peak-quality-scan-dist-plot)
shows that, when we take all peaks into account, \~45% of
the peaks contain a CTCF motif.
This is an excellent percentage and indicates a high\-quality ChIP experiment.
Our inability to locate the motif in \~50% of the sequences does not
necessarily need to be a consequence of a poor experiment; sometimes
it is a result of the molecular mechanism by which the transcription factor
binds. If a transcription factor has multiple binding modes, which are context
dependent, for example, if the transcription factor binds indirectly to
a subset of regions, through
an interacting partner, we do not have to observe a motif.
#### 9\.6\.4\.3 Motif localization
If the ChIP experiment was performed properly, we would expect the motif
to be localized just below the summit of each peak.
By plotting the motif localization around ChIP peaks, we are quantifying
the uncertainty in the peak location.
We will firstly resize our peaks into regions around \+/−1\-kb around the peak
center.
```
# resize the region around peaks to +/- 1kb
ctcf_peaks_resized = resize(ctcf_peaks, width = 2000, fix='center')
```
Now we perform the motif localization, as before.
```
# fetch the sequence
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38,ctcf_peaks_resized)
# convert the motif matrix to PWM, and scan the peaks
ctcf_pwm = PWMatrix(ID = 'CTCF', profileMatrix = ctcf_motif)
hits = searchSeq(ctcf_pwm, seq, min.score="80%", strand="*")
hits = as.data.frame(hits)
```
We now construct a plot, where the
X\-axis represents the \+/\- 1000 nucleotides around the peak, while the
Y\-axis shows the motif enrichment at each position.
```
# set the position relative to the start
hits$position = hits$start - 1000
# plot the motif hits around peaks
ggplot(data=hits, aes(position)) +
geom_density(size=2) +
theme_bw() +
geom_vline(xintercept = 0, linetype=2, color='red', size=2) +
xlab('Position around the CTCF peaks') +
ylab('Per position percentage\nof motif occurence') +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5))
```
FIGURE 9\.20: Transcription factor sequence motif localization with respect to the defined binding sites.
We can in Figure [9\.20](peak-calling.html#fig:chip-quality-motifloc-plot), see that the bulk of motif
hits are found in a region of \\(\+/\-\\) 250 bp around the peak centers.
This means that the peak calling procedure was quite precise.
#### 9\.6\.4\.1 Percentage of reads in peaks
To calculate the reads in peaks, we will firstly extract the number of reads
in each tiling window from the fit object produced by `normR`.
This is done using the `getCounts()` function.
We will then use the q\-value to define which tiling windows correspond
to peaks, and count the number of reads within and outside peaks.
```
# extract, per tiling window, counts from the fit object
h3k36_counts = data.frame(getCounts(h3k36_fit))
# change the column names of the data.frame
colnames(h3k36_counts) = c('Input','H3K36me3')
# extract the q-value corresponding to each bin
h3k36_counts$qvalue = getQvalues(h3k36_fit)
# define which regions are peaks using a q value cutoff
h3k36_counts$enriched[is.na(h3k36_counts$qvalue)] = 'Not Peak'
h3k36_counts$enriched[h3k36_counts$qvalue > 0.05] = 'Not Peak'
h3k36_counts$enriched[h3k36_counts$qvalue <= 0.05] = 'Peak'
# remove the q value column
h3k36_counts$qvalue = NULL
# reshape the data.frame into a long format
h3k36_counts_df = tidyr::pivot_longer(
data = h3k36_counts,
cols = -enriched,
names_to = 'experiment',
values_to = 'counts'
)
# sum the number of reads in the Peak and Not Peak regions
h3k36_counts_df = group_by(.data = h3k36_counts_df, experiment, enriched)
h3k36_counts_df = summarize(.data = h3k36_counts_df, num_of_reads = sum(counts))
# calculate the percentage of reads.
h3k36_counts_df = group_by(.data = h3k36_counts_df, experiment)
h3k36_counts_df = mutate(.data = h3k36_counts_df, total=sum(num_of_reads))
h3k36_counts_df$percentage = with(h3k36_counts_df, round(num_of_reads/total,2))
```
```
## # A tibble: 4 x 5
## # Groups: experiment [2]
## experiment enriched num_of_reads total percentage
## <chr> <chr> <int> <int> <dbl>
## 1 H3K36me3 Not Peak 67623 158616 0.43
## 2 H3K36me3 Peak 90993 158616 0.570
## 3 Input Not Peak 492369 648196 0.76
## 4 Input Peak 155827 648196 0.24
```
We can now plot the percentage of reads in peaks:
```
ggplot(
data = h3k36_counts_df,
aes(
x = experiment,
y = percentage,
fill = enriched
)) +
geom_bar(stat='identity', position='dodge') +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=12,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('Experiment') +
ylab('Percentage of reads in region') +
ggtitle('Percentage of reads in peaks for H3K36me3') +
scale_fill_manual(values=c('gray','red'))
```
FIGURE 9\.16: Percentage of ChIP reads in called peaks. Higher percentage indicates higher ChIP quality.
Figure [9\.16](peak-calling.html#fig:peak-quality-counts-plot) shows that the ChIP sample is
clearly enriched in the peak regions.
The percentage of reads in peaks will depend on the quality of the antibody (strength of
enrichment) and on the total size of the regions bound by the protein of interest.
If the total size of peaks is small relative to the genome size, we can expect
the percentage of reads in peaks to be small as well.
#### 9\.6\.4\.2 DNA motifs on peaks
Well\-studied transcription factors have publicly available transcription
factor binding motifs.
If such a model is available for our transcription factor of interest, we
can use it to check the quality of our ChIP data.
Two common measures are used for this purpose:
1. Percentage of peaks containing the motif of interest.
2. Positional distribution of the motif \- the distribution of motif locations should be centered on the peak centers.
##### 9\.6\.4\.2\.1 Representing motifs as matrices
In order to calculate the percentage of CTCF peaks which contain a known CTCF
motif, we need to find the CTCF motif and have the computational tools to search for it. DNA\-binding motifs can be extracted from the `MotifDb` Bioconductor
database. `MotifDb` is an agglomeration of multiple motif databases.
```
# load the MotifDB package
library(MotifDb)
# fetch the CTCF motif from the data base
motifs = query(query(MotifDb, 'Hsapiens'), 'CTCF')
# show all available ctcf motifs
motifs
```
```
## MotifDb object of length 12
## | Created from downloaded public sources: 2013-Aug-30
## | 12 position frequency matrices from 8 sources:
## | HOCOMOCOv10: 2
## | HOCOMOCOv11-core-A: 2
## | JASPAR_2014: 1
## | JASPAR_CORE: 1
## | SwissRegulon: 2
## | jaspar2016: 1
## | jaspar2018: 2
## | jolma2013: 1
## | 1 organism/s
## | Hsapiens: 12
## Hsapiens-SwissRegulon-CTCFL.SwissRegulon
## Hsapiens-SwissRegulon-CTCF.SwissRegulon
## Hsapiens-HOCOMOCOv10-CTCFL_HUMAN.H10MO.A
## Hsapiens-HOCOMOCOv10-CTCF_HUMAN.H10MO.A
## Hsapiens-HOCOMOCOv11-core-A-CTCFL_HUMAN.H11MO.0.A
## ...
## Hsapiens-JASPAR_2014-CTCF-MA0139.1
## Hsapiens-jaspar2016-CTCF-MA0139.1
## Hsapiens-jaspar2018-CTCF-MA0139.1
## Hsapiens-jaspar2018-CTCFL-MA1102.1
## Hsapiens-jolma2013-CTCF
```
We will extract the CTCF motif from the `MotifDb` (Khan, Fornes, Stigliani, et al. [2018](#ref-khan_2018)) database.
```
# based on the MotifDB version, the location of the CTCF motif
# might change, if you do not get the expected results please try
# to subset with different indices
ctcf_motif = motifs[[1]]
```
The motifs are usually represented as matrices of 4\-by\-N dimensions, where each of the 4 rows corresponds to one nucleotide (A, C, G, T).
The number of columns designates the width of the region bound by the transcription factor, i.e. the length of the motif that the protein recognizes.
Each element of the matrix contains the probability of observing the corresponding
nucleotide at that position.
For example, for the CTCF matrix in Table [9\.1](peak-calling.html#tab:peakqualityshow), the probability of observing a thymine at
the first position of the motif, \\(p\_{i\=1,k\=4}\\), is 0\.16 (1st column, 4th row).
Such a matrix, where each column is a probability distribution over the four nucleotides,
is called a position frequency matrix (PFM). In some sources, this matrix is also called a “position probability matrix (PPM)”. One way to construct such matrices is to take experimentally verified sequences bound by the protein of interest and run a motif\-finding algorithm on them.
TABLE 9\.1: Position Frequency Matrix (PFM) for the CTCF motif
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | 0\.17 | 0\.23 | 0\.29 | 0\.10 | 0\.33 | 0\.06 | 0\.05 | 0\.04 | 0\.02 | 0 | 0\.25 | 0\.00 | 0 | 0\.05 | 0\.25 | 0\.00 | 0\.17 | 0 | 0\.02 | 0\.19 |
| C | 0\.42 | 0\.28 | 0\.30 | 0\.32 | 0\.11 | 0\.33 | 0\.56 | 0\.00 | 0\.96 | 1 | 0\.67 | 0\.69 | 1 | 0\.04 | 0\.07 | 0\.42 | 0\.15 | 0 | 0\.06 | 0\.43 |
| G | 0\.25 | 0\.23 | 0\.26 | 0\.27 | 0\.42 | 0\.55 | 0\.05 | 0\.83 | 0\.01 | 0 | 0\.03 | 0\.00 | 0 | 0\.02 | 0\.53 | 0\.55 | 0\.05 | 1 | 0\.87 | 0\.15 |
| T | 0\.16 | 0\.27 | 0\.15 | 0\.31 | 0\.14 | 0\.06 | 0\.33 | 0\.13 | 0\.00 | 0 | 0\.06 | 0\.31 | 0 | 0\.89 | 0\.15 | 0\.03 | 0\.62 | 0 | 0\.05 | 0\.23 |
Such a matrix can be used to calculate the probability that the transcription
factor will bind to any given sequence. However, computationally, it is easier to work with summation rather than multiplication. In addition, this simple probabilistic model does not take into account the background probability of observing a certain base at a given position. We can correct for background base frequencies by dividing the individual probability, \\(p\_{i,k}\\), in each cell of the matrix by the background probability for the corresponding base, \\(B\_k\\). We can then take the logarithm of that quantity to calculate a log\-likelihood and bring everything to log\-scale as follows: \\(Score\_{i,k}\=log\_2(p\_{i,k}/B\_k)\\). We can now calculate a score for any given
sequence by summing up the base\- and position\-specific scores we obtain from the log\-scaled matrix. This matrix is formally called a position\-specific scoring matrix (PSSM) or position\-specific weight matrix (PWM). We can use this matrix to scan the genome in a sliding\-window manner and calculate a score for each window. Usually, a cutoff value is needed to call a motif hit. The higher the PWM score for a particular sequence, the more likely that sequence is to be bound. The algorithms we will use in the following sections take 80% of the maximum rescaled score obtainable from a PWM as the default cutoff for a hit. The rescaling is simple min\-max rescaling: we subtract the minimum score and divide by \\(max(PWMscore)\-min(PWMscore)\\). The motif scanning approach is illustrated in Figure [9\.17](peak-calling.html#fig:FigurePWMScanning). In this example, ACACT is not considered a hit because its score corresponds to only \\(15\.6\\)% of the rescaled maximum score.
FIGURE 9\.17: PWM scanning principle. A genomic sequence is scanned by a PWM matrix. This matrix is used to measure how likely it is that the transcription factor will bind each nucleotide in each position. Here we are looking at how likely it is that our TF will bind to the sequence ACACT. The score for this sequence is \-3\.6\. The maximal score obtainable by the PWM is 7\.2 and the minimum is \-5\.6\. After min\-max rescaling, \-3\.6 corresponds to a 15\.6% score and ACACT is not considered a hit.
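To make the scoring and rescaling concrete, here is a minimal R sketch. It uses a toy two\-column PFM (the first two columns of Table 9\.1) and assumes a uniform background of \\(B\_k\=0\.25\\) per base; it is an illustration, not part of the original workflow.
```
# toy PFM: rows A, C, G, T; columns are the first two positions of Table 9.1
pfm = matrix(c(0.17, 0.42, 0.25, 0.16,
               0.23, 0.28, 0.23, 0.27),
             nrow = 4, dimnames = list(c('A','C','G','T'), NULL))
# log2 likelihood ratio against a uniform background (B_k = 0.25);
# a small pseudocount avoids log2(0) for zero-probability cells
pwm = log2((pfm + 1e-4) / 0.25)
# score a sequence by summing the position-specific scores
score_seq = function(pwm, bases){
  idx = match(bases, rownames(pwm))
  sum(pwm[cbind(idx, seq_along(bases))])
}
raw = score_seq(pwm, c('C', 'G'))
# min-max rescale the raw score to a relative score between 0 and 1
min_s = sum(apply(pwm, 2, min))
max_s = sum(apply(pwm, 2, max))
rel = (raw - min_s) / (max_s - min_s)
```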
##### 9\.6\.4\.2\.2 Representing motifs as sequence logos
Using the PFM, we can calculate the information content of each position in the matrix.
The information content quantifies the contribution of each nucleotide to the
cumulative binding preference, i.e. how important each nucleotide is for the binding. It additionally allows us to represent the probability matrices visually as sequence logos.
The information content is quantified as relative entropy. It ranges from \\(0\\), no information,
to \\(2\\), maximal information. For a column in the PFM, the entropy is calculated as follows:
\\\[
entropy \= \-\\sum\\limits\_{k\=1}^n p\_{i,k}\\log\_2(p\_{i,k})
\\]
\\(p\_{i,k}\\) is the probability of observing base \\(k\\) in column \\(i\\) of the PFM; in other words, \\(p\_{i,k}\\) is simply the value of the corresponding cell in the PFM. The entropy is high when the probabilities of the bases are similar, and low when it is much more probable that only one base occurs in a given column. The “relative” part comes from the fact that we compare the entropy calculated for a column to the maximum entropy we can obtain. If all bases are equally likely at a position in the PFM, we have the maximum entropy, which is simply \\(log\_2{n}\\), where \\(n\\) is the number of letters in the alphabet. In our case we have 4 letters: A, C, G and T. The information content is then obtained by subtracting the observed entropy for a column from the maximum entropy, which translates to the following equation:
\\\[
IC\=log\_2(n)\+\\sum\\limits\_{k\=1}^n p\_{i,k}\\log\_2(p\_{i,k})
\\]
The information content, \\(IC\\), in the preceding equation, will be high if a base has a high probability of occurrence and low if all bases are more or less equally likely to occur.
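As a quick sanity check, a few lines of R reproduce these formulas for the first column of Table 9\.1:
```
# probabilities from column 1 of the CTCF PFM
p = c(A = 0.17, C = 0.42, G = 0.25, T = 0.16)
# entropy of the column; drop zeros since 0 * log2(0) is defined as 0
entropy = -sum(p[p > 0] * log2(p[p > 0]))
# information content: maximum entropy log2(4) = 2 minus observed entropy
ic = log2(4) - entropy   # ~0.12, a weakly informative column
```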
We can visualize the matrix by drawing the letters weighted by their probabilities in the PFM. This approach is shown on the left\-hand side of Figure [9\.18](peak-calling.html#fig:peak-quality-seqLogo-plot). Alternatively, we can weight the probabilities by the information content of each column; columns with highly frequent (informative) letters will then be taller. This approach is shown on the right\-hand side of Figure [9\.18](peak-calling.html#fig:peak-quality-seqLogo-plot). Below we will use the `seqLogo` package to visualize the CTCF motif in these two ways.
FIGURE 9\.18: CTCF sequence motif visualized as a sequence logo. Y\-axis ranges from zero to two, and corresponds to the amount of information each base in the motif contributes to the overall motif. The larger the letter, the greater the probability of observing just one defined base on the designated position.
##### 9\.6\.4\.2\.3 Percentage of peaks with the motif
Since we now understand how DNA motifs are represented and scored, we can start annotating the CTCF peaks with the motif. First, we will extend the peak
regions to \+/\- 200 bp around the peak center.
Because the average fragment size is 200 bp, 400 nucleotides covers the
expected variation in the position of the true binding location.
```
# extend the peak regions
ctcf_peaks_resized = resize(ctcf_peaks, width = 400, fix = 'center')
```
Now we use the `BSgenome` package to
extract the sequences corresponding to the peak regions.
```
# load the human genome sequence
library(BSgenome.Hsapiens.UCSC.hg38)
# extract the sequences around the peaks
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38, ctcf_peaks_resized)
```
Once we have extracted the sequences, we can use the CTCF motif to
scan each sequence and determine the probability of CTCF binding.
For this we use the `TFBSTools` (Tan and Lenhard [2016](#ref-TFBSTools)) package.
We first convert the raw probability matrix into a `PWMatrix` object,
which can then be used for efficient scanning.
```
# load the TFBS tools package
library(TFBSTools)
# convert the matrix into a PWM object
ctcf_pwm = PWMatrix(
ID = 'CTCF',
profileMatrix = ctcf_motif
)
```
We can now use the `searchSeq()` function to scan each sequence for motif occurrences.
Because scanning produces a continuous binding score, we need to set a cutoff to
determine when a sequence contains the motif and when it doesn’t.
The cutoff is set by determining the maximal possible score produced by the motif matrix;
a percentage of that score is then taken as the threshold value.
For example, if the best possible sequence has a binding score of 1\.4,
we define a threshold of 80% of 1\.4, which is 1\.12; any sequence which
scores less than 1\.12 is not marked as being bound by the protein.
For CTCF, we mark any peak containing a sequence with \> 80% of the maximal rescaled score, or “relative score”, as a positive hit.
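The scan itself looks as follows; this is the same call we use later for motif localization, so applying it here is the only assumption:
```
# scan each peak sequence with the PWM; keep hits with >= 80% relative score
hits = searchSeq(ctcf_pwm, seq, min.score = "80%", strand = "*")
# convert the hits into a data.frame and inspect the first entries
hits = as.data.frame(hits)
head(hits)
```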
```
## seqnames source feature start end absScore relScore strand ID
## 1 1 TFBS TFBS 44 63 11.9 0.921 - CTCF
## 2 1 TFBS TFBS 102 121 11.0 0.839 - CTCF
## 3 2 TFBS TFBS 151 170 11.5 0.881 + CTCF
## 4 4 TFBS TFBS 294 313 11.9 0.921 - CTCF
## 5 4 TFBS TFBS 352 371 11.0 0.839 - CTCF
## 6 5 TFBS TFBS 164 183 10.9 0.831 - CTCF
```
A common diagnostic plot is to graph a reverse cumulative distribution of
peak occurrences.
On the x\-axis we rank the peaks, with the most highly enriched peak in the
first position, and the least enriched peak in the last position.
We then walk from the lowest to the highest ranking and measure the
percentage of peaks containing the motif.
```
# label which peaks contain CTCF motifs
motif_hits_df = data.frame(
peak_order = 1:length(ctcf_peaks)
)
motif_hits_df$contains_motif = motif_hits_df$peak_order %in% hits$seqnames
motif_hits_df = motif_hits_df[order(-motif_hits_df$peak_order),]
# calculate the percentage of peaks with motif for peaks of descending strength
motif_hits_df$perc_peaks = with(motif_hits_df,
cumsum(contains_motif) / max(peak_order))
motif_hits_df$perc_peaks = round(motif_hits_df$perc_peaks, 2)
```
We can now visualize the percentage of peaks with a matching CTCF motif.
```
# plot the cumulative distribution of motif hit percentages
ggplot(
motif_hits_df,
aes(
x = peak_order,
y = perc_peaks
)) +
geom_line(size=2) +
theme_bw() +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
xlab('Peak rank') +
ylab('Percentage of peaks with motif') +
ggtitle('Percentage of CTCF peaks with the CTCF motif')
```
FIGURE 9\.19: Percentage of peaks containing the motif. Higher percentage indicates a better ChIP\-experiment, and a better peak calling procedure.
Figure [9\.19](peak-calling.html#fig:peak-quality-scan-dist-plot)
shows that, when we take all peaks into account, \~45% of
the peaks contain a CTCF motif.
This is an excellent percentage and indicates a high\-quality ChIP experiment.
Our inability to locate the motif in the remaining \~55% of the sequences does not
necessarily indicate a poor experiment; sometimes
it is a result of the molecular mechanism by which the transcription factor
binds. If a transcription factor has multiple, context\-dependent binding
modes (for example, if it binds a subset of regions indirectly, through
an interacting partner), we do not expect to observe the motif at those sites.
#### 9\.6\.4\.3 Motif localization
If the ChIP experiment was performed properly, we would expect the motif
to be localized just below the summit of each peak.
By plotting the motif localization around ChIP peaks, we quantify
the uncertainty in the peak location.
We will firstly resize our peaks to \+/\- 1\-kb regions around the peak
center.
```
# resize the region around peaks to +/- 1kb
ctcf_peaks_resized = resize(ctcf_peaks, width = 2000, fix='center')
```
Now we perform the motif localization, as before.
```
# fetch the sequence
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38,ctcf_peaks_resized)
# convert the motif matrix to PWM, and scan the peaks
ctcf_pwm = PWMatrix(ID = 'CTCF', profileMatrix = ctcf_motif)
hits = searchSeq(ctcf_pwm, seq, min.score="80%", strand="*")
hits = as.data.frame(hits)
```
We now construct a plot, where the
X\-axis represents the \+/\- 1000 nucleotides around the peak, while the
Y\-axis shows the motif enrichment at each position.
```
# express the motif hit positions relative to the peak center
hits$position = hits$start - 1000
# plot the motif hits around peaks
ggplot(data=hits, aes(position)) +
geom_density(size=2) +
theme_bw() +
geom_vline(xintercept = 0, linetype=2, color='red', size=2) +
xlab('Position around the CTCF peaks') +
ylab('Per position percentage\nof motif occurrence') +
theme(
axis.text = element_text(size=10, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5))
```
FIGURE 9\.20: Transcription factor sequence motif localization with respect to the defined binding sites.
In Figure [9\.20](peak-calling.html#fig:chip-quality-motifloc-plot), we can see that the bulk of motif
hits are found within \\(\+/\-\\) 250 bp of the peak centers.
This means that the peak calling procedure was quite precise.
### 9\.6\.5 Peak annotation
As the final step of quality control, we will visualize the distribution
of peaks over different functional genomic regions.
The purpose of the analysis is to check whether the location of the peaks
conforms to our prior knowledge.
This analysis is equivalent to the read distributions we constructed previously.
Firstly, we download the human gene models and construct the annotation hierarchy.
```
# download the annotation
hub = AnnotationHub()
gtf = hub[['AH61126']]
seqlevels(gtf, pruning.mode='coarse') = '21'
seqlevels(gtf, pruning.mode='coarse') = paste0('chr', seqlevels(gtf))
# create the annotation hierarchy
annotation_list = GRangesList(
tss = promoters(subset(gtf, type=='gene'), 1000, 1000),
exon = subset(gtf, type=='exon'),
# gene bodies serve as a proxy for introns; tss and exon take precedence
intron = subset(gtf, type=='gene')
)
```
The following function finds the genomic location of each peak, annotates
the peaks using the hierarchical prioritization,
and calculates the summary statistics.
The function contains four major parts:
1. Creating a disjoint set of peak regions.
2. Finding the overlapping annotation for each peak.
3. Annotating each peak with the corresponding annotation class.
4. Calculating summary statistics.
```
# function which annotates the location of each peak
annotatePeaks = function(peaks, annotation_list, name){
# ------------------------------------------------ #
# 1. getting disjoint regions
# collapse touching enriched regions
peaks = reduce(peaks)
# ------------------------------------------------ #
# 2. overlapping peaks and annotation
# find overlaps between the peaks and annotation_list
result = as.data.frame(findOverlaps(peaks, annotation_list))
# ------------------------------------------------ #
# 3. annotating peaks
# fetch annotation names
result$annotation = names(annotation_list)[result$subjectHits]
# rank by annotation precedence
result = result[order(result$subjectHits),]
# remove overlapping annotations
result = subset(result, !duplicated(queryHits))
# ------------------------------------------------ #
# 4. calculating statistics
# count the number of peaks in each annotation category
result = group_by(.data = result, annotation)
result = summarise(.data = result, counts = length(annotation))
# fetch the number of intergenic peaks
result = rbind(result,
data.frame(annotation = 'intergenic',
counts = length(peaks) - sum(result$counts)))
result$frequency = with(result, round(counts/sum(counts),2))
result$experiment = name
return(result)
}
```
Using the `annotatePeaks()` function defined above, we will now annotate the CTCF
and H3K36me3 peaks. Firstly, we create a list which contains both CTCF and H3K36me3 peaks.
```
peak_list = list(
CTCF = ctcf_peaks,
H3K36me3 = h3k36_peaks
)
```
Using the `lapply()` function we apply the `annotatePeaks()` function
on each element of the list.
```
# calculate the distribution of peaks in annotation for each experiment
annot_peaks_list = lapply(names(peak_list), function(peak_name){
annotatePeaks(peak_list[[peak_name]], annotation_list, peak_name)
})
```
We use the `dplyr::bind_rows()` function to combine the CTCF and H3K36me3 annotation
statistics into one data frame.
```
# combine a list of data.frames into one data.frame
annot_peaks_df = dplyr::bind_rows(annot_peaks_list)
```
We then visualize the results as bar plots. The resulting plot, shown in Figure [9\.21](peak-calling.html#fig:peak-annotation-plot), confirms that the H3K36me3 peaks are
located preferentially in gene bodies, as expected, while the CTCF peaks are
found preferentially in introns.
```
# plot the distribution of peaks in genomic features
ggplot(data = annot_peaks_df,
aes(
x = experiment,
y = frequency,
fill = annotation
)) +
geom_bar(stat='identity') +
scale_fill_brewer(palette='Set2') +
theme_bw()+
theme(
axis.text = element_text(size=18, face='bold'),
axis.title = element_text(size=14,face="bold"),
plot.title = element_text(hjust = 0.5)) +
ggtitle('Peak distribution in\ngenomic regions') +
xlab('Experiment') +
ylab('Frequency')
```
FIGURE 9\.21: Enrichment of transcription factor or histone modifications in functional genomic features.
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/motif-discovery.html |
9\.7 Motif discovery
--------------------
The first analysis step downstream of peak calling is motif discovery.
Motif discovery is the procedure of finding enriched sets of similar short sequences
in a large sequence dataset. In our case, the large sequence dataset consists of the
sequences around ChIP peaks, while the short sequence sets are the transcription
factor binding sites.
There are two types of motif discovery tools: supervised and unsupervised.
Supervised tools require an explicit positive sequence set (where we are certain that the motif is enriched) and a negative sequence set (where we are certain that the motif is not enriched), and
then search for relative enrichment of short motifs in the foreground versus
the background.
Unsupervised tools, on the other hand, require only a set of positive sequences,
and compare motif abundance to a statistically constructed background set.
Due to the combinatorial nature of the procedure, motif discovery is
computationally expensive. It is therefore often performed on a subset of the
highest\-quality peaks. In this tutorial we will use the `rGADEM`
package for motif discovery.
`rGADEM` is an unsupervised, stochastic motif discovery tool, which uses
sampling with subsequent enrichment analysis to find over\-represented sequence
motifs.
We will firstly load our CTCF peaks and convert them to a GRanges object.
We will then select the top 500 peaks and extract the DNA sequence, which
will be used as input for the motif discovery. Nearby ChIP peaks can have overlapping coordinates, so after selection, overlapping CTCF peaks have to be merged using the `reduce()` function from the `GenomicRanges` package. If we do not execute this step, we will include the same sequence multiple times in the sequence set, and artificially enrich DNA patterns.
```
# read the CTCF peaks created in the peak calling part of the tutorial
ctcf_peaks = read.table(file.path(data_path, 'CTCF_peaks.txt'), header=TRUE)
# convert the peaks into a GRanges object
ctcf_peaks = makeGRangesFromDataFrame(ctcf_peaks, keep.extra.columns = TRUE)
# order the peaks by qvalue, and take the top 500 peaks
ctcf_peaks = ctcf_peaks[order(ctcf_peaks$qvalue)]
ctcf_peaks = head(ctcf_peaks, n = 500)
# merge nearby CTCF peaks
ctcf_peaks = reduce(ctcf_peaks)
```
Create a region of \\(\+/\-\\) 50 bp around the center of the peaks,
```
# expand the CTCF peaks
ctcf_peaks_resized = resize(ctcf_peaks, width = 50, fix='center')
```
and extract the genomic sequence.
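The extraction uses `getSeq()`, just as in the peak quality section above:
```
# extract the DNA sequence under the resized peaks
seq = getSeq(BSgenome.Hsapiens.UCSC.hg38, ctcf_peaks_resized)
```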
We are now ready to run the motif discovery. To do so, we load the `rGADEM` package and call the `GADEM()` function with the
extracted DNA sequences. In addition to the DNA sequences, we need to
specify two parameters (a sketch of the call follows the list):
1. **seed** \- the random number generator seed, which will make the analysis
reproducible.
2. **nmotifs** \- the number of motifs to look for.
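The exact call is not shown in the source; in the sketch below, the `verbose` flag is an assumption and `nmotifs = 2` is inferred from the two blocks of verbose output that follow.
```
# load the rGADEM package
library(rGADEM)
# run stochastic motif discovery; the seed fixes the random number
# generator, and nmotifs limits the number of motifs to search for
novel_motifs = GADEM(seq, seed = 1, nmotifs = 2, verbose = 1)
```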
```
## top 3 4, 5-mers: 12 40 52
## top 3 4, 5-mers: 12 36 42
```
The `rGADEM` package contains a convenient `plot()` function for
motif visualization. We will use the plot function to visualize the most enriched DNA motif:
```
# visualize the resulting motif
plot(novel_motifs[1])
```
FIGURE 9\.22: Motif with highest enrichment in top 500 CTCF peaks.
The motif shown in Figure [9\.22](motif-discovery.html#fig:motif-discovery-logo) corresponds to the
CTCF motif we visualized previously. Nevertheless, we will computationally
annotate our motif by querying the JASPAR (Khan, Fornes, Stigliani, et al. [2018](#ref-khan_2018)) database in the next section.
### 9\.7\.1 Motif comparison
We will now compare our unknown motif with the JASPAR2018 (Khan, Fornes, Stigliani, et al. [2018](#ref-khan_2018)) database,
to figure out to which transcription factor it corresponds.
Firstly we convert the frequency matrix into a `PWMatrix` object, and
then use this object to query the database.
```
# load the TFBSTools library
library(TFBSTools)
# extract the motif of interest from the GADEM object
unknown_motif = getPWM(novel_motifs)[[1]]
# convert the motif to a PWM matrix
unknown_pwm = PWMatrix(
ID = 'unknown',
profileMatrix = unknown_motif
)
```
Using the `getMatrixSet()` function we extract all motifs which
correspond to known human transcription factors.
The `opts` parameter defines which `PWM` database to use for comparison.
```
# load the JASPAR motif database
library(JASPAR2018)
# extract motifs corresponding to human transcription factors
pwm_library = getMatrixSet(
JASPAR2018,
opts=list(
collection = 'CORE',
species = 'Homo sapiens',
matrixtype = 'PWM'
))
```
The `PWMSimilarity()` function calculates the Pearson correlation between
each motif in the database and the motif we discovered with `rGADEM`.
```
# find the most similar motif to our motif
pwm_sim = PWMSimilarity(
# JASPAR library
pwm_library,
# our motif
unknown_pwm,
# measure for comparison
method = 'Pearson')
```
We extract the motif names from the PWM library. For each motif
in the library we append the Pearson correlation with our unknown motif, and
look at the topmost candidates.
```
# extract the motif names from the pwm library
pwm_library_list = lapply(pwm_library, function(x){
data.frame(ID = ID(x), name = name(x))
})
# combine the list into one data frame
pwm_library_dt = dplyr::bind_rows(pwm_library_list)
# fetch the similarity of each motif to our unknown motif
pwm_library_dt$similarity = pwm_sim[pwm_library_dt$ID]
# find the most similar motif in the library
pwm_library_dt = pwm_library_dt[order(-pwm_library_dt$similarity),]
```
```
head(pwm_library_dt)
```
```
## ID name similarity
## 24 MA0139.1 CTCF 0.7033789
## 370 MA1100.1 ASCL1 0.4769023
## 281 MA0807.1 TBX5 0.4762250
## 101 MA0033.2 FOXL1 0.4605249
## 302 MA0825.1 MNT 0.4370585
## 277 MA0803.1 TBX15 0.4317270
```
As expected, the topmost candidate is CTCF.
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/data-filtering-and-exploratory-analysis.html |
10\.4 Data filtering and exploratory analysis
---------------------------------------------
We assume that we start the analysis in R with the methylation call files. We will read those files in and carry out exploratory analysis, and we will show how to filter bases or regions from the data and in what circumstances we might need to do so. We will use the [methylKit](https://bioconductor.org/packages/release/bioc/html/methylKit.html)(Akalin, Kormaksson, Li, et al. [2012](#ref-Akalin2012-af)) package for the bulk of the analysis.
### 10\.4\.1 Reading methylation call files
A typical methylation call file looks like this:
```
## chrBase chr base strand coverage freqC freqT
## 1 chr21.9764539 chr21 9764539 R 12 25.00 75.00
## 2 chr21.9764513 chr21 9764513 R 12 0.00 100.00
## 3 chr21.9820622 chr21 9820622 F 13 0.00 100.00
## 4 chr21.9837545 chr21 9837545 F 11 0.00 100.00
## 5 chr21.9849022 chr21 9849022 F 124 72.58 27.42
```
Most of the time, bisulfite sequencing experiments have test and control samples. The test samples can be from a disease tissue while the control samples can be from a healthy tissue. You can read a set of methylation call files that have test/control conditions by giving a `treatment` vector option. The treatment vector defines the sample groups, and it is very important for the differential methylation analysis. For the sake of subsequent analysis, the `file.list`, `sample.id` and `treatment` options should have the same order. In the following example, the first two files have the sample IDs “test1” and “test2”, and, as determined by the treatment vector, they belong to the same group. The third and fourth files have sample IDs “ctrl1” and “ctrl2”, and they belong to the same group, as indicated by the treatment vector. We will first get a list of file paths and have a look at the content.
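The construction of `file.list` is not shown above; a minimal sketch, assuming the example call files bundled with the methylKit package:
```
# load the methylKit package
library(methylKit)
# paths to the example methylation call files shipped with methylKit
file.list = list(
  system.file("extdata", "test1.myCpG.txt", package = "methylKit"),
  system.file("extdata", "test2.myCpG.txt", package = "methylKit"),
  system.file("extdata", "control1.myCpG.txt", package = "methylKit"),
  system.file("extdata", "control2.myCpG.txt", package = "methylKit")
)
```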
If you look at what is inside the `file.list` variable, you will see that it is a simple list of file paths. Each file contains methylation calls for a given sample. Now, we can read the files with the `methRead()` function.
```
# read the files to a methylRawList object: myobj
myobj=methRead(file.list,
sample.id=list("test1","test2","ctrl1","ctrl2"),
assembly="hg18",
treatment=c(1,1,0,0),
context="CpG"
)
```
Tab\-separated, bedGraph\-like formats from the Bismark methylation caller can also be read in by methylKit. In those cases, we have to provide either `pipeline="bismarkCoverage"` or `pipeline="bismarkCytosineReport"` to the `methRead()` function. In addition to the options we mentioned above,
any tab\-separated text file with a generic format can be read in using methylKit,
such as methylation ratio files from [BSMAP](http://code.google.com/p/bsmap/).
See [here](http://zvfak.blogspot.com/2012/10/how-to-read-bsmap-methylation-ratio.html) for an example.
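For illustration, a sketch of reading Bismark coverage files; the file names here are hypothetical placeholders:
```
# read Bismark coverage files (file names are hypothetical)
myobj.bismark = methRead(
  list("test1.bismark.cov", "ctrl1.bismark.cov"),
  sample.id = list("test1", "ctrl1"),
  assembly = "hg18",
  treatment = c(1, 0),
  context = "CpG",
  pipeline = "bismarkCoverage"
)
```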
Before we move on, let us have a look at what kind of information is stored in `myobj`. This is technically a `methylRawList` object, which is essentially a list of `methylRaw` objects. These objects hold
the information on the genomic location of the Cs, and the counts of methylated and unmethylated Cs.
```
## inside the methylRawList object
length(myobj)
```
```
## [1] 4
```
```
head(myobj[[1]])
```
```
## chr start end strand coverage numCs numTs
## 1 chr21 9764513 9764513 - 12 0 12
## 2 chr21 9764539 9764539 - 12 3 9
## 3 chr21 9820622 9820622 + 13 0 13
## 4 chr21 9837545 9837545 + 11 0 11
## 5 chr21 9849022 9849022 + 124 90 34
## 6 chr21 9853296 9853296 + 17 10 7
```
### 10\.4\.2 Further quality check
It is always a good idea to check how the data looks before proceeding further. For example, the methylation values should have bimodal distribution generally. This can be checked via the
`getMethylationStats()` function. Normally, we should see bimodal
distributions. Strong deviations from the bimodality may be due to poor experimental quality, such as problems with bisulfite treatment. Below we show how to get these plots using the `getMethylationStats()` function. The result is shown in Figure [10\.1](data-filtering-and-exploratory-analysis.html#fig:methStats). As expected, it has a bimodal distribution where most CpGs have either high methylation or low methylation.
```
getMethylationStats(myobj[[2]],plot=TRUE,both.strands=FALSE)
```
FIGURE 10\.1: Histogram for methylation values for all CpGs in the dataset.
In addition, we might want to look at the coverage values. By default, methylKit considers bases with at least 10X coverage, but that can be changed. Bases with unusually high coverage are usually alarming: they might indicate a PCR bias issue in the experimental procedure. The general coverage statistics can be checked with the `getCoverageStats()` function shown below. The resulting plot is shown in Figure [10\.2](data-filtering-and-exploratory-analysis.html#fig:coverageStats).
```
getCoverageStats(myobj[[2]],plot=TRUE,both.strands=FALSE)
```
FIGURE 10\.2: Histogram for log10 read counts per CpG.
It might be useful to filter samples based on coverage. Particularly, if our samples are suffering from PCR bias, it would be useful to discard bases with very high read coverage. Furthermore, we would also like to discard bases that have low read coverage; a high enough read coverage will increase the power of the statistical tests. The code below filters a `methylRawList`: it discards bases that have coverage below 10X, and also discards bases whose coverage is above the 99\.9th percentile of coverage in each sample.
```
filtered.myobj=filterByCoverage(myobj,lo.count=10,lo.perc=NULL,
hi.count=NULL,hi.perc=99.9)
```
### 10\.4\.3 Merging samples into a single table
When we first read the files, each file is stored as its own entity. If we want to compare samples in any way, we need to make a unified data structure that contains the CpGs that are covered in most samples. The `unite()` function creates such a new object using the CpGs covered in every sample.
```
## we use :: notation to make sure unite() function from methylKit is called
meth=methylKit::unite(myobj, destrand=FALSE)
```
Let us take a look at the data content of the `methylBase` object:
```
head(meth)
```
```
## chr start end strand coverage1 numCs1 numTs1 coverage2 numCs2 numTs2
## 1 chr21 9853296 9853296 + 17 10 7 333 268 65
## 2 chr21 9853326 9853326 + 17 12 5 329 249 79
## 3 chr21 9860126 9860126 + 39 38 1 83 78 5
## 4 chr21 9906604 9906604 + 68 42 26 111 97 14
## 5 chr21 9906616 9906616 + 68 52 16 111 104 7
## 6 chr21 9906619 9906619 + 68 59 9 111 109 2
## coverage3 numCs3 numTs3 coverage4 numCs4 numTs4
## 1 18 16 2 395 341 54
## 2 16 14 2 379 284 95
## 3 83 83 0 41 40 1
## 4 23 18 5 37 33 4
## 5 23 14 9 37 27 10
## 6 22 18 4 37 29 8
```
By default, the `unite()` function produces bases/regions covered in all samples. That requirement can be relaxed using the `min.per.group` option in the `unite()` function.
```
# creates a methylBase object,
# where only CpGs covered with at least 1 sample per group will be returned
# there were two groups defined by the treatment vector,
# given during the creation of myobj: treatment=c(1,1,0,0)
meth.min=unite(myobj,min.per.group=1L)
```
### 10\.4\.4 Filtering CpGs
We might need to filter the CpGs further before exploratory analysis or even before downstream analysis such as differential methylation. For exploratory analysis, it is of general interest to see how samples relate to each other, and we might want to remove CpGs that are not variable before doing that. Or we might want to remove Cs that are potentially C\->T mutations. First, we show how to filter based on variation. Below, we extract percent methylation values from the CpGs as a matrix, calculate the standard deviation for each CpG, and filter based on that standard deviation. We also plot the distribution of per\-CpG standard deviations with the `hist()` function. The resulting plot is shown in Figure [10\.3](data-filtering-and-exploratory-analysis.html#fig:methVar).
```
pm=percMethylation(meth) # get percent methylation matrix
mds=matrixStats::rowSds(pm) # calculate standard deviation of CpGs
head(meth[mds>20,])
```
```
## chr start end strand coverage1 numCs1 numTs1 coverage2 numCs2 numTs2
## 11 chr21 9906681 9906681 + 21 12 9 60 56 4
## 12 chr21 9906694 9906694 + 21 9 12 60 53 7
## 13 chr21 9906700 9906700 + 13 6 7 53 43 10
## 14 chr21 9906714 9906714 + 14 3 11 41 37 4
## 18 chr21 9906873 9906873 + 12 8 4 41 33 8
## 23 chr21 9927527 9927527 + 17 5 12 40 22 18
## coverage3 numCs3 numTs3 coverage4 numCs4 numTs4
## 11 37 14 23 26 11 15
## 12 39 16 23 26 15 11
## 13 30 8 22 23 10 13
## 14 25 19 6 21 19 2
## 18 15 4 11 22 7 15
## 23 32 32 0 14 11 3
```
```
hist(mds,col="cornflowerblue",xlab="Std. dev. per CpG")
```
FIGURE 10\.3: Histogram of per\-CpG standard deviations.
Now, let’s assume we know the locations of C\-\>T mutations. These locations should be removed from the analysis as they do not represent
bisulfite\-treatment\-associated conversions. Mutation locations are
stored in a `GRanges` object, and we can use that to remove CpGs
overlapping with mutations. In order to do the overlap operation, we will convert the methylKit object to a `GRanges` object and do the filtering with the `%over%` function within `[ ]`. The returned object will still be a methylKit object.
```
library(GenomicRanges)
# example SNP
mut=GRanges(seqnames=c("chr21","chr21"),
ranges=IRanges(start=c(9853296, 9853326),
end=c( 9853296,9853326)))
# select CpGs that do not overlap with mutations
sub.meth=meth[! as(meth,"GRanges") %over% mut,]
nrow(meth)
```
```
## [1] 963
```
```
nrow(sub.meth)
```
```
## [1] 961
```
### 10\.4\.5 Clustering samples
Clustering is used for grouping data points by their similarity. It is a general concept that can be achieved by many different algorithms, and we introduced clustering and multiple prominent clustering algorithms in Chapter [4](unsupervisedLearning.html#unsupervisedLearning). In the context of DNA methylation, we are trying to find samples that are similar to each other. For example, if we sequenced 3 heart samples and 4 liver samples, we would expect the liver samples to be more similar to each other than to the heart samples in the DNA methylation space.
The following function will cluster the samples and draw a dendrogram.
It will use the correlation distance, which is \(1\-\rho\), where \(\rho\) is the correlation coefficient between a pair of samples. The cluster tree will be drawn using the “ward” method. This specific variant uses a “bottom\-up” approach: each data point starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. In Ward’s method, two clusters are merged if the merge leads to the minimum increase in total within\-cluster variance compared to other possible merge operations. This bottom\-up approach helps build the dendrogram showing the relationships between clusters. The result of the clustering is shown in Figure [10\.4](data-filtering-and-exploratory-analysis.html#fig:clusterMethPlot).
```
clusterSamples(meth, dist="correlation", method="ward", plot=TRUE)
```
FIGURE 10\.4: Dendrogram for samples using correlation distance and Ward’s method for hierarchical clustering.
```
##
## Call:
## hclust(d = d, method = HCLUST.METHODS[hclust.method])
##
## Cluster method : ward.D
## Distance : pearson
## Number of objects: 4
```
Setting `plot=FALSE` will return a dendrogram object that can be manipulated by users or fed into other user functions that work with dendrograms.
```
hc = clusterSamples(meth, dist="correlation", method="ward", plot=FALSE)
```
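The `clusterSamples()` call above is roughly equivalent to computing a correlation distance on the percent methylation matrix and running hierarchical clustering ourselves. Below is a minimal base R sketch of that, assuming the `meth` object from above.

```
# manual version of the correlation-distance clustering
pm=percMethylation(meth)      # percent methylation matrix, CpGs x samples
d=as.dist(1-cor(pm))          # 1 - Pearson correlation between samples
manual.hc=hclust(d, method="ward.D")
plot(manual.hc)               # dendrogram comparable to clusterSamples()
```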
### 10\.4\.6 Principal component analysis
Principal component analysis (PCA) is a mathematical transformation of (possibly) correlated variables into a set of uncorrelated variables called principal components. The resulting components from this transformation are defined in such a way that the first principal component has the highest variance and accounts for most of the variability in the data. We introduced PCA and other similar methods in Chapter [4](unsupervisedLearning.html#unsupervisedLearning). The following function will plot a scree plot showing the importance of the components, and the result is shown in Figure [10\.5](data-filtering-and-exploratory-analysis.html#fig:pcaMethScree).
```
PCASamples(meth, screeplot=TRUE)
```
FIGURE 10\.5: Scree plot for explained variance for principal components.
We can also produce a scatter plot of our samples on the PC1 and PC2 axes, which will reveal how they cluster within these new dimensions. Similar to the clustering dendrogram, we would like similar samples to be close to each other on the scatter plot. If they are not, it might indicate problems with the experiment, such as batch effects. The function below plots the samples in such a scatter plot on the principal component axes. The resulting plot is shown in Figure [10\.6](data-filtering-and-exploratory-analysis.html#fig:pcaMethScatter).
```
pc=PCASamples(meth,obj.return = TRUE, adj.lim=c(1,1))
```
FIGURE 10\.6: Samples plotted on principal components.
In this case, we also returned an object from the plotting function. This is the output of the `prcomp()` function, which includes the loadings and eigenvectors, which might be useful. You can also do your own PCA analysis using `percMethylation()` and `prcomp()`. In the case above, the methylation matrix is transposed before `prcomp()` is run. This allows us to compare distances between samples on the PCA scatter plot.
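Below is a minimal sketch of such a manual PCA, assuming the `meth` object from above. Transposing the matrix makes the samples the observations, so the distances on the scatter plot are between samples.

```
# manual PCA on the percent methylation matrix
pm=percMethylation(meth)     # CpGs x samples
pr=prcomp(t(pm))             # transpose: samples become observations
summary(pr)                  # variance explained per component
plot(pr$x[,1], pr$x[,2], xlab="PC1", ylab="PC2")
text(pr$x[,1], pr$x[,2], labels=rownames(pr$x), pos=3)
```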
10\.5 Extracting interesting regions: Differential methylation and segmentation
-------------------------------------------------------------------------------
When analyzing DNA methylation data, we usually look for regions that are different from the rest of the methylome or from a reference methylome. These are the so\-called “interesting regions”. They usually mark important genomic features that are related to gene regulation, which in turn defines the cell type. Therefore, it is of general interest to find such regions and analyze them further to understand our biological sample or to answer specific research questions. Below we describe two ways of defining “regions of interest”.
### 10\.5\.1 Differential methylation
Once methylation proportions per base are obtained, generally, the differences between methylation profiles are considered next. When there are multiple sample groups where each group defines a separate biological entity or treatment, it is usually of interest to locate bases or regions with different methylation proportions across the sample groups. The bases or regions with different methylation proportions across samples are called differentially methylated CpG sites (DMCs) and differentially methylated regions (DMRs). They have been shown to play a role in many different diseases due to their association with epigenetic control of gene regulation. In addition, DNA methylation profiles can be highly tissue\-specific due to their role in gene regulation (Schübeler [2015](#ref-Schubeler2015-ai)). DNA methylation is highly informative when studying normal and diseased cells, because it can also act as a biomarker. For example, the presence of large\-scale abnormally methylated genomic regions is a hallmark feature of many types of cancers (Ehrlich [2002](#ref-Ehrlich2002-hv)). Because of the aforementioned reasons, investigating differential methylation is usually one of the primary goals of doing bisulfite sequencing.
#### 10\.5\.1\.1 Fisher’s exact test
Differential DNA methylation is usually calculated by comparing the proportion of methylated Cs in a test sample relative to a control. In simple comparisons between such pairs of samples (i.e. test and control), methods such as Fisher’s exact test can be used. If there are replicates, they can be pooled within groups into a single sample per group. This strategy, however, does not take into account the biological variability between replicates. We will now show how to compare pairs of samples via the `calculateDiffMeth()` function in `methylKit`. When there is only one sample per sample group, `calculateDiffMeth()` automatically applies Fisher’s exact test. Below, we extract one sample from each group and run `calculateDiffMeth()` on the result.
```
getSampleID(meth)
new.meth=reorganize(meth,sample.ids=c("test1","ctrl1"),treatment=c(1,0))
dmf=calculateDiffMeth(new.meth)
```
As mentioned, we can also pool the samples from the same group by adding up the number of Cs and Ts per group. This way, even if we have replicated experiments, we treat them as single experiments and can apply Fisher’s exact test. We will now pool the samples and apply the `calculateDiffMeth()` function.
```
pooled.meth=pool(meth,sample.ids=c("test","control"))
dm.pooledf=calculateDiffMeth(pooled.meth)
```
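To make the underlying test concrete: for each CpG, the methylated and unmethylated read counts of the two groups form a 2x2 contingency table. Below is a conceptual base R sketch for a single CpG; the counts are purely illustrative.

```
# 2x2 table for one CpG: rows = test/control, columns = C/T read counts
cpg.tab=matrix(c(26, 12,    # test:    26 methylated, 12 unmethylated
                 16, 25),   # control: 16 methylated, 25 unmethylated
               nrow=2, byrow=TRUE)
fisher.test(cpg.tab)$p.value
```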
The `calculateDiffMeth()` function returns P\-values for all bases or regions in the input methylBase object. We need to filter these to get differentially methylated CpGs. This can be done via the `getMethylDiff()` function or by simple filtering via `[ ]` notation. Below we show how to filter the `methylDiff` object output by the `calculateDiffMeth()` function in order to get differentially methylated CpGs. The function arguments define cutoff values for the methylation difference between groups and the q\-value. In this case, we require a methylation difference of at least 25% and a q\-value smaller than \(0\.01\).
```
# get differentially methylated bases/regions with specific cutoffs
all.diff=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="all")
# get hyper-methylated
hyper=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="hyper")
# get hypo-methylated
hypo=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="hypo")
#using [ ] notation
hyper2=dm.pooledf[dm.pooledf$qvalue < 0.01 & dm.pooledf$meth.diff > 25,]
```
#### 10\.5\.1\.2 Logistic regression based tests
Regression\-based methods are generally used to model methylation levels in relation to the sample groups and the variation between replicates. Differences between currently available regression methods stem from the choice of distribution used to model the data and the variation associated with it. In the simplest case, linear regression can be used to model methylation per CpG or locus across sample groups. The model fits regression coefficients to the expected methylation proportion values for each CpG site across sample groups. Hence, the null hypothesis that the model coefficients are zero can be tested using t\-statistics. However, linear\-regression\-based methods might produce fitted methylation levels outside the range \(\[0,1]\) unless the values are transformed before regression. An alternative is logistic regression, which can deal with data strictly bounded between 0 and 1 and with non\-constant variance, such as methylation proportion/fraction values. In logistic regression, it is assumed that the fitted values have variance \(np(1\-p)\), where \(p\) is the fitted methylation proportion for a given sample and \(n\) is the read coverage. If the observed variance is larger or smaller than assumed by the model, one speaks of under\- or over\-dispersion. This over/under\-dispersion can be corrected by calculating a scaling factor and using that factor to adjust the variance estimates, as in \(np(1\-p)s\), where \(s\) is the scaling factor. MethylKit can apply logistic regression to test for methylation differences with or without the over\-dispersion correction. In this case, a Chi\-square or F\-test can be used to compare the difference in the deviances of the null model and the alternative model. The null model assumes there is no relationship between sample groups and methylation, and the alternative model assumes that there is a relationship, where sample groups are predictive of the methylation values for a given CpG or region for which the model is constructed. Next, we are going to use the logistic\-regression\-based model with over\-dispersion correction and the Chi\-square test.
```
dm.lr=calculateDiffMeth(meth,overdispersion = "MN",test ="Chisq")
```
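To illustrate what such a model does for a single CpG, below is a conceptual base R sketch of the plain logistic regression test (without the over\-dispersion correction); the read counts are purely illustrative.

```
# per-CpG logistic regression: methylated vs. unmethylated read counts
# modeled as a function of sample group (illustrative counts)
numCs=c(10, 12, 16, 14)          # methylated reads per sample
numTs=c(7, 5, 2, 2)              # unmethylated reads per sample
group=factor(c(1, 1, 0, 0))      # the treatment vector
full=glm(cbind(numCs, numTs) ~ group, family=binomial)
null=glm(cbind(numCs, numTs) ~ 1, family=binomial)
anova(null, full, test="Chisq")  # deviance difference tests the group effect
```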
#### 10\.5\.1\.3 Betabinomial\-distribution\-based tests
More complex regression models use the beta\-binomial distribution and are particularly useful for better modeling the variance. Similar to logistic regression, the observations follow a binomial distribution (number of reads), but the methylation proportion itself can vary across samples according to a beta distribution. Such models can deal with fitted values in the \(\[0,1]\) range and perform better when there is more variance than expected by the simple logistic model. In essence, these models have a different way of calculating a scaling factor when there is over\-dispersion in the model. Further enhancements are made to these models by using empirical Bayes methods, which can better estimate the hyperparameters of the beta distribution (the variance\-related parameters) by borrowing information between loci or regions within the genome to aid with inference about each individual locus or region. We are now going to use a beta\-binomial\-based model called DSS (Feng, Conneely, and Wu [2014](#ref-Feng2014-pd)) to calculate differential methylation.
```
dm.dss=calculateDiffMethDSS(meth)
```
```
## Using internal DSS code...
```
#### 10\.5\.1\.4 Differential methylation for regions rather than base\-pairs
Until now, we have worked on differentially methylated cytosines. However, working with base\-pair resolution data has its problems. Not all CpGs will be covered in all samples, and even covered CpGs may have low coverage, which reduces the power of the tests. Instead of base\-pairs, we can choose to work with regions. It might therefore be desirable to summarize methylation information over pre\-defined regions rather than doing base\-pair resolution analysis. `methylKit` provides functionality for such analysis. We can either tile the whole genome into tiles of predefined length, or we can use pre\-defined regions such as promoters or CpG islands. This kind of regional analysis is carried out by adding up C and T counts from each covered cytosine and returning a total C and T count for each region.
The function below tiles the genome with windows of \\(1000\\) bp length and \\(1000\\) bp step\-size and summarizes the methylation information on those tiles. In this case, it returns a `methylRawList` object which can be fed into `unite()` and `calculateDiffMeth()` functions consecutively to get differentially methylated regions.
```
tiles=tileMethylCounts(myobj,win.size=1000,step.size=1000)
head(tiles[[1]],3)
```
```
## chr start end strand coverage numCs numTs
## 1 chr21 9764001 9765000 * 24 3 21
## 2 chr21 9820001 9821000 * 13 0 13
## 3 chr21 9837001 9838000 * 11 0 11
```
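As mentioned above, the tiles can then go through the same workflow as the base\-pair resolution objects. A minimal sketch, assuming the objects created earlier:

```
# unite tile counts across samples, then test for differential methylation
meth.tiles=methylKit::unite(tiles, destrand=FALSE)
dm.tiles=calculateDiffMeth(meth.tiles)
```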
In addition, if we are interested in particular regions, we can also get those regions as methylKit objects after summarizing the methylation information as described above. The code below summarizes the methylation information over a given set of promoter regions and outputs a `methylRaw` or `methylRawList` object depending on the input. We use `genomation` functions to provide the locations of promoters. For regional summary functions, we need to provide the regions of interest as `GRanges` objects.
```
library(genomation)
# read the gene BED file
gene.obj=readTranscriptFeatures(system.file("extdata", "refseq.hg18.bed.txt",
package = "methylKit"))
promoters=regionCounts(myobj,gene.obj$promoters)
head(promoters[[1]])
```
```
## chr start end strand coverage numCs numTs
## 1 chr21 10011791 10013791 - 7953 6662 1290
## 2 chr21 10119796 10121796 - 1725 1171 554
## 3 chr21 10119808 10121808 - 1725 1171 554
## 4 chr21 13903368 13905368 + 10 10 0
## 5 chr21 14273636 14275636 - 282 220 62
## 6 chr21 14509336 14511336 + 1058 55 1003
```
In addition, it is possible to cluster DMCs based on their proximity and direction of differential methylation. This can be achieved with the `methSeg()` function in methylKit, which we will see more of in the following section. It can take the output of the `getMethylDiff()` function and can therefore work on DMCs to produce differentially methylated regions.
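A sketch of that usage is shown below, assuming the differential methylation objects from above; the cutoff and segmentation parameters are illustrative.

```
# cluster DMCs into regions based on their proximity and
# direction of methylation change
dmc=getMethylDiff(dm.lr, difference=25, qvalue=0.01)
dmr.seg=methSeg(dmc, minSeg=10, G=1:4)
```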
#### 10\.5\.1\.5 Adding covariates
Covariates can also be included in the analysis in methylKit. The `calculateDiffMeth()` function will then try to
separate the influence of the covariates from the
treatment effect via the logistic regression model. In this case, we will test
if the full model (model with treatment and covariates) is better than the model with
the covariates only. If there is no effect due to the treatment (sample groups),
the full model will not explain the data better than the model with covariates
only. In `calculateDiffMeth()`, this is achieved by
supplying the `covariates` argument in the format of a `data.frame`.
Below, we simulate methylation data and add a `data.frame` for the age.
The data frame can include more columns, and those columns can also be
`factor` variables. The row order of the data.frame should match the order
of samples in the `methylBase` object. Below we are showing an example
of this using a simulated data set where methylation values of CpGs will be affected by the age of the sample.
```
covariates=data.frame(age=c(30,80,34,30,80,40))
sim.methylBase=dataSim(replicates=6,sites=1000,
treatment=c(rep(1,3),rep(0,3)),
covariates=covariates,
sample.ids=c(paste0("test",1:3),paste0("ctrl",1:3)))
my.diffMeth3=calculateDiffMeth(sim.methylBase,
covariates=covariates,
overdispersion="MN",
test="Chisq",mc.cores=1)
```
### 10\.5\.2 Methylation segmentation
The analysis of methylation dynamics is not exclusively restricted to differentially methylated regions across samples. Apart from this there is also an interest in examining the methylation profiles within the same sample. Usually, depressions in methylation profiles pinpoint regulatory regions like gene promoters that co\-localize with CG\-dense CpG islands. On the other hand, many gene\-body regions are extensively methylated and CpG\-poor (Bock, Beerman, Lien, et al. [2012](#ref-Bock2012-oh)). These observations would describe a bimodal model of either hyper\- or hypomethylated regions depending on the local density of CpGs (Lövkvist, Dodd, Sneppen, et al. [2016](#ref-Lovkvist2016-ky)). However, given the detection of CpG\-poor regions with locally reduced levels of methylation (on average 30%) in pluripotent embryonic stem cells and in neuronal progenitors in both mouse and human, a different model also seems reasonable (M. B. Stadler, Murr, Burger, et al. [2011](#ref-Stadler2011-iu)[a](#ref-Stadler2011-iu)). These low\-methylated regions (LMRs) are located distal to promoters, have little overlap with CpG islands, and are associated with enhancer marks such as p300 binding sites and H3K27ac enrichment.
Now we are going to segment a portion of the methylome of the H1 human embryonic stem cell line. MethylKit uses change\-point analysis to segment the methylome. In change\-point analysis, the change points of a genome\-wide methylation signal are recorded and the genome is partitioned into regions between consecutive change points. CpGs within a segment are more similar to each other than to CpGs in the neighboring segments.
After segmentation, the methylKit function `methSeg()` identifies segments that are further clustered into segment classes using a mixture modeling approach. This clustering is based only on the average methylation level of the segments and allows the detection of distinct methylome features comparable to the unmethylated regions (UMRs), lowly methylated regions (LMRs), and fully methylated regions (FMRs) mentioned in Stadler et al. (M. B. Stadler, Murr, Burger, et al. [2011](#ref-Stadler2011-yv)[b](#ref-Stadler2011-yv)). The code snippet below reads the methylation data from the H1 cell line as a `GRanges` object, and runs the segmentation with up to 4 classes of segments. Mixture modeling determines the optimal number of segment classes using a statistic called the Bayesian information criterion (BIC). The BIC is based on the model likelihood and helps us select the model that fits the data best. We set the number of segment classes to try using the `G=1:4` argument. The `minSeg` argument sets the minimum number of CpGs per segment. The function `methSeg()` outputs a diagnostic plot for the segmentation, shown in Figure [10\.7](extracting-interesting-regions-differential-methylation-and-segmentation.html#fig:segDiag). It shows methylation values and lengths of segments in each segment class, as well as the BIC for different numbers of segments.
```
# read methylation data
methFile=system.file("extdata","H1.chr21.chr22.rds",
package="compGenomRData")
mbw=readRDS(methFile)
# segment the methylation data
res=methSeg(mbw,minSeg=10,G=1:4,
join.neighbours = TRUE)
```
FIGURE 10\.7: Segmentation characteristics shown in different plots. Top left: Mean methylation values per segment in each segment class. Top middle: Length of each segment as boxplots for each segment class. Top right: Number of segments in each segment class. Bottom left: Distribution of segment methylation values. Bottom right: BIC for different number of segment classes
In this case, we know that the BIC does not improve much after 4 segment classes. Now, we will have a look at the characteristics of the segment classes. We are going to plot the mean methylation value and the length of each segment as a scatter plot; the result is shown in Figure [10\.8](extracting-interesting-regions-differential-methylation-and-segmentation.html#fig:segplot).
```
# plot
plot(res$seg.mean,
log10(width(res)),pch=20,
col=scales::alpha(rainbow(4)[as.numeric(res$seg.group)], 0.2),
ylab="log10(length)",
xlab="methylation proportion")
```
FIGURE 10\.8: Scatter plot of segment mean methylation values versus segment length. Each dot is a segment identified by the *methSeg()* function.
The highly methylated segment classes, with more than 70% methylation, are usually longer; their median length is 17889 bp. The segment class with the lowest methylation values has a median length of 1376 bp, and the shortest segment class has low to medium methylation, with a median length of 412 bp.
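If needed, the segments and their class assignments can be exported for viewing in a genome browser via the `methSeg2bed()` function in methylKit; the file name below is illustrative.

```
# export segments with their segment class as a BED file
methSeg2bed(res, filename="H1.chr21.chr22.segments.bed")
```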
### 10\.5\.3 Working with large files
We might want to perform differential methylation analysis in R using whole genome methylation data of multiple samples. The problem is that for genome\-wide experiments, file sizes can easily range from hundreds of megabytes to gigabytes and processing multiple instances of those files in memory (RAM) might become unfeasible unless we have access to a high\-performance compute cluster (HPC) with extensive RAM. If we want to use a desktop computer or laptop with limited RAM, we either need to restrict our analysis to a subset of the data or use packages that can handle this situation.
The methylKit package provides the capability of dealing with large files and high numbers of samples by exploiting flat file databases to substitute in\-memory objects. The internal data, apart from meta information, has a tabular structure storing chromosome, start/end position, and strand information of the associated CpG base just like many other biological formats like BED, GFF or SAM. By exporting this tabular data into a TAB\-delimited file and making sure it is accordingly position\-sorted, it can be indexed using the generic [tabix tool](http://www.htslib.org/doc/tabix.html). In general, tabix indexing is a generalization of BAM indexing for generic TAB\-delimited files. It inherits all the advantages of BAM indexing, including data compression and efficient random access in terms of few seek function calls per query (Li [2011](#ref-Li2011-wc)). `MethylKit` relies on [`Rsamtools`](http://bioconductor.org/packages/release/bioc/html/Rsamtools.html) which implements tabix functionality for R. This way internal methylKit objects can be efficiently stored as a compressed file on the disk and still be quickly accessed. Another advantage is that existing compressed files can be loaded in interactive sessions, allowing the backup and transfer of intermediate analysis results.
`methylKit` provides the capability for storing objects in tabix format within various functions. Every methylKit object has its tabix\-based flat\-file database equivalent. For example, when reading a methylation call file, the `dbtype` argument can be provided, which will create tabix\-based objects.
```
myobj=methRead( file.list,
sample.id=list("test1","test2","ctrl1","ctrl2"),
assembly="hg18",treatment=c(1,1,0,0),
dbtype="tabix")
```
The advantage of tabix\-based objects is of course saving memory and more efficient parallelization for differential methylation calculation. However, since the data is written to a file and indexed whenever a new object is created, working with tabix\-based objects will be slower at certain steps of the analysis compared to in\-memory objects.
### 10\.5\.1 Differential methylation
Once methylation proportions per base are obtained, generally, the differences between methylation profiles are considered next. When there are multiple sample groups where each group defines a separate biological entity or treatment, it is usually of interest to locate bases or regions with different methylation proportions across the sample groups. The bases or regions with different methylation proportions across samples are called differentially methylated CpG sites (DMCs) and differentially methylated regions (DMRs). They have been shown to play a role in many different diseases due to their association with epigenetic control of gene regulation. In addition, DNA methylation profiles can be highly tissue\-specific due to their role in gene regulation (Schübeler [2015](#ref-Schubeler2015-ai)). DNA methylation is highly informative when studying normal and diseased cells, because it can also act as a biomarker. For example, the presence of large\-scale abnormally methylated genomic regions is a hallmark feature of many types of cancers (Ehrlich [2002](#ref-Ehrlich2002-hv)). Because of the aforementioned reasons, investigating differential methylation is usually one of the primary goals of doing bisulfite sequencing.
#### 10\.5\.1\.1 Fisher’s exact test
Differential DNA methylation is usually calculated by comparing the proportion of methylated Cs in a test sample relative to a control. In simple comparisons between such pairs of samples (i.e. test and control), methods such as Fisher’s exact test can be used. If there are replicates, replicates can be pooled within groups to a single sample per group. This strategy, however, does not take into account biological variability between replicates. We will now show how to compare pairs of samples via the `calculateDiffMeth()` function in `methylKit`. When there is only one sample per sample group, `calculateDiffMeth()` automatically applies Fisher’s exact test. We will now extract one sample from each group and run `calculateDiffMeth()`, which will automatically run Fisher’s exact test.
```
getSampleID(meth)
new.meth=reorganize(meth,sample.ids=c("test1","ctrl1"),treatment=c(1,0))
dmf=calculateDiffMeth(new.meth)
```
As mentioned, we can also pool the samples from the same group by adding up the number of Cs and Ts per group. This way even if we have replicated experiments we treat them as single experiments, and can apply Fisher’s exact test. We will now pool the samples and apply the `calculateDiffMeth()` function.
```
pooled.meth=pool(meth,sample.ids=c("test","control"))
dm.pooledf=calculateDiffMeth(pooled.meth)
```
The `calculateDiffMeth()` function returns the P\-values for all bases or regions in the input methylBase object. We need to filter to get differentially methylated CpGs. This can be done via the `getMethlyDiff()` function or simple filtering via `[ ]` notation. Below we show how to filter the `methylDiff` object output by the `calculateDiffMeth()` function in order to get differentially methylated CpGs. The function arguments define cutoff values for the methylation difference between groups and q\-value. In these cases, we require a methylation difference of 25% and a q\-value of at least \\(0\.01\\).
```
# get differentially methylated bases/regions with specific cutoffs
all.diff=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="all")
# get hyper-methylated
hyper=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="hyper")
# get hypo-methylated
hypo=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="hypo")
#using [ ] notation
hyper2=dm.pooledf[dm.pooledf$qvalue < 0.01 & dm.pooledf$meth.diff > 25,]
```
#### 10\.5\.1\.2 Logistic regression based tests
Regression\-based methods are generally used to model methylation levels in relation to the sample groups and variation between replicates. Differences between currently available regression methods stem from the choice of distribution to model the data and the variation associated with it. In the simplest case, linear regression can be used to model methylation per given CpG or loci across sample groups. The model fits regression coefficients to model the expected methylation proportion values for each CpG site across sample groups. Hence, the null hypothesis of the model coefficients being zero could be tested using t\-statistics. However, linear\-regression\-based methods might produce fitted methylation levels outside the range \\(\[0,1]\\) unless the values are transformed before regression. An alternative is logistic regression, which can deal with data strictly bounded between 0 and 1 and with non\-constant variance, such as methylation proportion/fraction values. In the logistic regression, it is assumed that fitted values have variation \\(np(1\-p)\\), where \\(p\\) is the fitted methylation proportion for a given sample and \\(n\\) is the read coverage. If the observed variance is larger or smaller than assumed by the model, one speaks of under\- or over\-dispersion. This over/under\-dispersion can be corrected by calculating a scaling factor and using that factor to adjust the variance estimates as in \\(np(1\-p)s\\), where \\(s\\) is the scaling factor. MethylKit can apply logistic regression to test the methylation difference with or without the over\-dispersion correction. In this case, Chi\-square or F\-test can be used to compare the difference in the deviances of the null model and the alternative model. The null model assumes there is no relationship between sample groups and methylation, and the alternative model assumes that there is a relationship where sample groups are predictive of methylation values for a given CpG or region for which the model is constructed. Next, we are going to use the logistic\-regression\-based model with over\-dispersion correction and Chi\-square test.
```
dm.lr=calculateDiffMeth(meth,overdispersion = "MN",test ="Chisq")
```
#### 10\.5\.1\.3 Betabinomial\-distribution\-based tests
More complex regression models use beta binomial distribution and are particularly useful for better modeling the variance. Similar to logistic regression, their observation follows binomial distribution (number of reads), but methylation proportion itself can vary across samples, according to a beta distribution. It can deal with fitting values in the \\(\[0,1]\\) range and performs better when there is greater variance than expected by the simple logistic model. In essence, these models have a different way of calculating a scaling factor when there is over\-dispersion in the model. Further enhancements are made to these models by using the empirical Bayes methods that can better estimate hyper parameters of the beta distribution (variance\-related parameters) by borrowing information between loci or regions within the genome to aid with inference about each individual loci or region. We are now going to use a beta\-binomial based model called DSS (Feng, Conneely, and Wu [2014](#ref-Feng2014-pd)) to calculate differential methylation.
```
dm.dss=calculateDiffMethDSS(meth)
```
```
## Using internal DSS code...
```
#### 10\.5\.1\.4 Differential methylation for regions rather than base\-pairs
Until now, we have worked on differentially methylated cytosines. However,
working with base\-pair resolution data has its problems. Not all the CpGs will be covered in all samples. If covered they may have low coverage, which reduces the power of the tests. Instead of base\-pairs, we can choose to work with regions. So, it might be desirable to summarize methylation information over pre\-defined regions rather than doing base\-pair resolution analysis. `methylKit` provides functionality to do such analysis. We can either tile the whole genome to tiles with predefined length, or we can use pre\-defined regions such as promoters or CpG islands. This kind of regional analysis is carried out by adding up C and T counts from each covered cytosine and returning a total C and T count for each region.
The function below tiles the genome with windows of \\(1000\\) bp length and \\(1000\\) bp step\-size and summarizes the methylation information on those tiles. In this case, it returns a `methylRawList` object which can be fed into `unite()` and `calculateDiffMeth()` functions consecutively to get differentially methylated regions.
```
tiles=tileMethylCounts(myobj,win.size=1000,step.size=1000)
head(tiles[[1]],3)
```
```
## chr start end strand coverage numCs numTs
## 1 chr21 9764001 9765000 * 24 3 21
## 2 chr21 9820001 9821000 * 13 0 13
## 3 chr21 9837001 9838000 * 11 0 11
```
In addition, if we are interested in particular regions, we can also get those regions as methylKit objects after summarizing the methylation information as described above. The code below summarizes the methylation information over a given set of promoter regions and outputs a `methylRaw` or `methylRawList` object depending on the input. We are using the output of
`genomation` functions used above to provide the locations of promoters. For regional summary functions, we need to
provide regions of interest as GRanges objects.
```
library(genomation)
# read the gene BED file
gene.obj=readTranscriptFeatures(system.file("extdata", "refseq.hg18.bed.txt",
package = "methylKit"))
promoters=regionCounts(myobj,gene.obj$promoters)
head(promoters[[1]])
```
```
## chr start end strand coverage numCs numTs
## 1 chr21 10011791 10013791 - 7953 6662 1290
## 2 chr21 10119796 10121796 - 1725 1171 554
## 3 chr21 10119808 10121808 - 1725 1171 554
## 4 chr21 13903368 13905368 + 10 10 0
## 5 chr21 14273636 14275636 - 282 220 62
## 6 chr21 14509336 14511336 + 1058 55 1003
```
In addition, it is possible to cluster DMCs based on their proximity and direction of differential methylation. This can be achieved by the `methSeg()` function in methylKit. We will see more about the `methSeg()` function in the following section.
But it can take the output of `getMethylDiff()` function and therefore can work on DMCs to get differentially methylated regions.
#### 10\.5\.1\.5 Adding covariates
Covariates can be included in the analysis as well in methylKit. The `calculateDiffMeth()` function will then try to
separate the influence of the covariates from the
treatment effect via the logistic regression model. In this case, we will test
if the full model (model with treatment and covariates) is better than the model with
the covariates only. If there is no effect due to the treatment (sample groups),
the full model will not explain the data better than the model with covariates
only. In `calculateDiffMeth()`, this is achieved by
supplying the `covariates` argument in the format of a `data.frame`.
Below, we simulate methylation data and add a `data.frame` for the age.
The data frame can include more columns, and those columns can also be
`factor` variables. The row order of the data.frame should match the order
of samples in the `methylBase` object. Below we are showing an example
of this using a simulated data set where methylation values of CpGs will be affected by the age of the sample.
```
covariates=data.frame(age=c(30,80,34,30,80,40))
sim.methylBase=dataSim(replicates=6,sites=1000,
treatment=c(rep(1,3),rep(0,3)),
covariates=covariates,
sample.ids=c(paste0("test",1:3),paste0("ctrl",1:3)))
my.diffMeth3=calculateDiffMeth(sim.methylBase,
covariates=covariates,
overdispersion="MN",
test="Chisq",mc.cores=1)
```
#### 10\.5\.1\.1 Fisher’s exact test
Differential DNA methylation is usually calculated by comparing the proportion of methylated Cs in a test sample relative to a control. In simple comparisons between such pairs of samples (i.e. test and control), methods such as Fisher’s exact test can be used. If there are replicates, replicates can be pooled within groups to a single sample per group. This strategy, however, does not take into account biological variability between replicates. We will now show how to compare pairs of samples via the `calculateDiffMeth()` function in `methylKit`. When there is only one sample per sample group, `calculateDiffMeth()` automatically applies Fisher’s exact test. We will now extract one sample from each group and run `calculateDiffMeth()`, which will automatically run Fisher’s exact test.
```
getSampleID(meth)
new.meth=reorganize(meth,sample.ids=c("test1","ctrl1"),treatment=c(1,0))
dmf=calculateDiffMeth(new.meth)
```
As mentioned, we can also pool the samples from the same group by adding up the number of Cs and Ts per group. This way even if we have replicated experiments we treat them as single experiments, and can apply Fisher’s exact test. We will now pool the samples and apply the `calculateDiffMeth()` function.
```
pooled.meth=pool(meth,sample.ids=c("test","control"))
dm.pooledf=calculateDiffMeth(pooled.meth)
```
The `calculateDiffMeth()` function returns the P\-values for all bases or regions in the input methylBase object. We need to filter to get differentially methylated CpGs. This can be done via the `getMethlyDiff()` function or simple filtering via `[ ]` notation. Below we show how to filter the `methylDiff` object output by the `calculateDiffMeth()` function in order to get differentially methylated CpGs. The function arguments define cutoff values for the methylation difference between groups and q\-value. In these cases, we require a methylation difference of 25% and a q\-value of at least \\(0\.01\\).
```
# get differentially methylated bases/regions with specific cutoffs
all.diff=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="all")
# get hyper-methylated
hyper=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="hyper")
# get hypo-methylated
hypo=getMethylDiff(dm.pooledf,difference=25,qvalue=0.01,type="hypo")
#using [ ] notation
hyper2=dm.pooledf[dm.pooledf$qvalue < 0.01 & dm.pooledf$meth.diff > 25,]
```
#### 10\.5\.1\.2 Logistic regression based tests
Regression\-based methods are generally used to model methylation levels in relation to the sample groups and variation between replicates. Differences between currently available regression methods stem from the choice of distribution to model the data and the variation associated with it. In the simplest case, linear regression can be used to model methylation per given CpG or loci across sample groups. The model fits regression coefficients to model the expected methylation proportion values for each CpG site across sample groups. Hence, the null hypothesis of the model coefficients being zero could be tested using t\-statistics. However, linear\-regression\-based methods might produce fitted methylation levels outside the range \\(\[0,1]\\) unless the values are transformed before regression. An alternative is logistic regression, which can deal with data strictly bounded between 0 and 1 and with non\-constant variance, such as methylation proportion/fraction values. In the logistic regression, it is assumed that fitted values have variation \\(np(1\-p)\\), where \\(p\\) is the fitted methylation proportion for a given sample and \\(n\\) is the read coverage. If the observed variance is larger or smaller than assumed by the model, one speaks of under\- or over\-dispersion. This over/under\-dispersion can be corrected by calculating a scaling factor and using that factor to adjust the variance estimates as in \\(np(1\-p)s\\), where \\(s\\) is the scaling factor. MethylKit can apply logistic regression to test the methylation difference with or without the over\-dispersion correction. In this case, Chi\-square or F\-test can be used to compare the difference in the deviances of the null model and the alternative model. The null model assumes there is no relationship between sample groups and methylation, and the alternative model assumes that there is a relationship where sample groups are predictive of methylation values for a given CpG or region for which the model is constructed. Next, we are going to use the logistic\-regression\-based model with over\-dispersion correction and Chi\-square test.
```
dm.lr=calculateDiffMeth(meth,overdispersion = "MN",test ="Chisq")
```
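The same over\-dispersion\-corrected model can also be tested with the F\-test by changing the `test` argument; below is a minimal variant of the call above (see the `calculateDiffMeth()` help page for the full list of available options).
```
# same overdispersion-corrected model, but using the F-test
dm.lr.f=calculateDiffMeth(meth,overdispersion="MN",test="F")
```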
#### 10\.5\.1\.3 Betabinomial\-distribution\-based tests
More complex regression models use the beta\-binomial distribution and are particularly useful for better modeling the variance. As in logistic regression, the observations follow a binomial distribution (number of reads), but the methylation proportion itself can vary across samples according to a beta distribution. Such models can deal with fitted values in the \\(\[0,1]\\) range and perform better when there is greater variance than expected by the simple logistic model. In essence, these models have a different way of calculating a scaling factor when there is over\-dispersion in the model. Further enhancements are made to these models by using empirical Bayes methods that can better estimate hyperparameters of the beta distribution (variance\-related parameters) by borrowing information between loci or regions within the genome to aid with inference about each individual locus or region. We are now going to use a beta\-binomial\-based model called DSS (Feng, Conneely, and Wu [2014](#ref-Feng2014-pd)) to calculate differential methylation.
```
dm.dss=calculateDiffMethDSS(meth)
```
```
## Using internal DSS code...
```
#### 10\.5\.1\.4 Differential methylation for regions rather than base\-pairs
Until now, we have worked on differentially methylated cytosines. However,
working with base\-pair resolution data has its problems. Not all the CpGs will be covered in all samples. If covered, they may have low coverage, which reduces the power of the tests. Instead of base\-pairs, we can choose to work with regions. So, it might be desirable to summarize methylation information over pre\-defined regions rather than doing base\-pair resolution analysis. `methylKit` provides functionality for such analysis. We can either tile the whole genome into tiles of predefined length, or we can use pre\-defined regions such as promoters or CpG islands. This kind of regional analysis is carried out by adding up C and T counts from each covered cytosine and returning a total C and T count for each region.
The function below tiles the genome with windows of \\(1000\\) bp length and \\(1000\\) bp step\-size and summarizes the methylation information on those tiles. In this case, it returns a `methylRawList` object which can be fed into `unite()` and `calculateDiffMeth()` functions consecutively to get differentially methylated regions.
```
tiles=tileMethylCounts(myobj,win.size=1000,step.size=1000)
head(tiles[[1]],3)
```
```
## chr start end strand coverage numCs numTs
## 1 chr21 9764001 9765000 * 24 3 21
## 2 chr21 9820001 9821000 * 13 0 13
## 3 chr21 9837001 9838000 * 11 0 11
```
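As mentioned above, the tiled counts can then be fed into the `unite()` and `calculateDiffMeth()` functions to obtain differentially methylated regions; a minimal sketch of those two steps, using the `tiles` object we just created:
```
# unite the tiled counts across samples, then test for differential methylation
meth.tiles=unite(tiles)
dm.tiles=calculateDiffMeth(meth.tiles)
```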
In addition, if we are interested in particular regions, we can also get those regions as methylKit objects after summarizing the methylation information as described above. The code below summarizes the methylation information over a given set of promoter regions and outputs a `methylRaw` or `methylRawList` object depending on the input. We are using the output of
`genomation` functions used above to provide the locations of promoters. For regional summary functions, we need to
provide regions of interest as GRanges objects.
```
library(genomation)
# read the gene BED file
gene.obj=readTranscriptFeatures(system.file("extdata", "refseq.hg18.bed.txt",
package = "methylKit"))
promoters=regionCounts(myobj,gene.obj$promoters)
head(promoters[[1]])
```
```
## chr start end strand coverage numCs numTs
## 1 chr21 10011791 10013791 - 7953 6662 1290
## 2 chr21 10119796 10121796 - 1725 1171 554
## 3 chr21 10119808 10121808 - 1725 1171 554
## 4 chr21 13903368 13905368 + 10 10 0
## 5 chr21 14273636 14275636 - 282 220 62
## 6 chr21 14509336 14511336 + 1058 55 1003
```
In addition, it is possible to cluster DMCs based on their proximity and direction of differential methylation. This can be achieved by the `methSeg()` function in methylKit, which we will cover in more detail in the following section. Since it can take the output of the `getMethylDiff()` function, it can work directly on DMCs to produce differentially methylated regions.
#### 10\.5\.1\.5 Adding covariates
Covariates can also be included in the analysis in methylKit. The `calculateDiffMeth()` function will then try to separate the influence of the covariates from the treatment effect via the logistic regression model. In this case, we test if the full model (the model with treatment and covariates) is better than the model with the covariates only. If there is no effect due to the treatment (sample groups), the full model will not explain the data better than the model with covariates only. In `calculateDiffMeth()`, this is achieved by supplying the `covariates` argument in the format of a `data.frame`. The data frame can include more columns, and those columns can also be `factor` variables. The row order of the data.frame should match the order of samples in the `methylBase` object. Below, we show an example of this using a simulated data set where the methylation values of CpGs are affected by the age of the sample.
```
covariates=data.frame(age=c(30,80,34,30,80,40))
sim.methylBase=dataSim(replicates=6,sites=1000,
treatment=c(rep(1,3),rep(0,3)),
covariates=covariates,
sample.ids=c(paste0("test",1:3),paste0("ctrl",1:3)))
my.diffMeth3=calculateDiffMeth(sim.methylBase,
covariates=covariates,
overdispersion="MN",
test="Chisq",mc.cores=1)
```
### 10\.5\.2 Methylation segmentation
The analysis of methylation dynamics is not exclusively restricted to differentially methylated regions across samples. Apart from this, there is also interest in examining the methylation profiles within the same sample. Usually, depressions in methylation profiles pinpoint regulatory regions like gene promoters that co\-localize with CG\-dense CpG islands. On the other hand, many gene\-body regions are extensively methylated and CpG\-poor (Bock, Beerman, Lien, et al. [2012](#ref-Bock2012-oh)). These observations would describe a bimodal model of either hyper\- or hypomethylated regions depending on the local density of CpGs (Lövkvist, Dodd, Sneppen, et al. [2016](#ref-Lovkvist2016-ky)). However, given the detection of CpG\-poor regions with locally reduced levels of methylation (on average 30%) in pluripotent embryonic stem cells and in neuronal progenitors in both mouse and human, a different model also seems reasonable (M. B. Stadler, Murr, Burger, et al. [2011](#ref-Stadler2011-iu)[a](#ref-Stadler2011-iu)). These low\-methylated regions (LMRs) are located distal to promoters, have little overlap with CpG islands, and are associated with enhancer marks such as p300 binding sites and H3K27ac enrichment.
Now we are going to try to segment a portion of the methylome of the H1 human embryonic stem cell line. MethylKit uses change\-point analysis to segment the methylome. In change\-point analysis, the change\-points of a genome\-wide methylation signal are recorded and the genome is partitioned into regions between consecutive change points. CpGs within a segment are more similar to each other than to CpGs in neighboring segments.
After segmentation, the methylKit function `methSeg()` identifies segments that are further clustered into segment classes using a mixture modeling approach. This clustering is based only on the average methylation level of the segments and allows the detection of distinct methylome features comparable to unmethylated regions (UMRs), lowly methylated regions (LMRs), and fully methylated regions (FMRs) mentioned in Stadler et al. (M. B. Stadler, Murr, Burger, et al. [2011](#ref-Stadler2011-yv)[b](#ref-Stadler2011-yv)). The code snippet below reads the methylation data from the H1 cell line as a `GRanges` object, and runs the segmentation with up to four classes of segments. Mixture modeling determines the optimal number of segment classes using a statistic called the Bayesian information criterion (BIC). The BIC is a statistic based on model likelihood and helps us select the model that fits the data better. We have set the number of segment classes to try using the `G=1:4` argument. The `minSeg` argument sets the minimum number of CpGs per segment. The function `methSeg()` outputs a diagnostic plot for segmentation. This plot is shown in Figure [10\.7](extracting-interesting-regions-differential-methylation-and-segmentation.html#fig:segDiag). It shows methylation values and lengths of segments in each segment class, as well as the BIC for different numbers of segments.
```
# read methylation data
methFile=system.file("extdata","H1.chr21.chr22.rds",
package="compGenomRData")
mbw=readRDS(methFile)
# segment the methylation data
res=methSeg(mbw,minSeg=10,G=1:4,
join.neighbours = TRUE)
```
FIGURE 10\.7: Segmentation characteristics shown in different plots. Top left: Mean methylation values per segment in each segment class. Top middle: Length of each segment as boxplots for each segment class. Top right: Number of segments in each segment class. Bottom left: Distribution of segment methylation values. Bottom right: BIC for different number of segment classes
In this case, we know that the BIC does not improve much after 4 segment classes. Now, we will have a look at the characteristics of the segment classes. We are going to plot the mean methylation value and the length of the segment as a scatter plot; the result is shown in Figure [10\.8](extracting-interesting-regions-differential-methylation-and-segmentation.html#fig:segplot).
```
# plot
plot(res$seg.mean,
log10(width(res)),pch=20,
col=scales::alpha(rainbow(4)[as.numeric(res$seg.group)], 0.2),
ylab="log10(length)",
xlab="methylation proportion")
```
FIGURE 10\.8: Scatter plot of segment mean, methylation values versus segment length. Each dot is a segment identified by the *methSeg()* function.
The highly methylated segment classes that have more than 70% methylation are usually longer; their median length is 17889 bp. The segment class that has the lowest methylation values has a median length of 1376 bp, and the shortest segment class has a low\-to\-medium methylation level, with a median length of 412 bp.
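The per\-class statistics quoted above can be computed directly from the segmentation result; below is a short sketch, assuming `res` is the `GRanges` object returned by `methSeg()` above.
```
# median segment length and median methylation per segment class
tapply(width(res),res$seg.group,median)
tapply(res$seg.mean,res$seg.group,median)
```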
### 10\.5\.3 Working with large files
We might want to perform differential methylation analysis in R using whole genome methylation data of multiple samples. The problem is that for genome\-wide experiments, file sizes can easily range from hundreds of megabytes to gigabytes and processing multiple instances of those files in memory (RAM) might become unfeasible unless we have access to a high\-performance compute cluster (HPC) with extensive RAM. If we want to use a desktop computer or laptop with limited RAM, we either need to restrict our analysis to a subset of the data or use packages that can handle this situation.
The methylKit package provides the capability of dealing with large files and high numbers of samples by exploiting flat file databases to substitute in\-memory objects. The internal data, apart from meta information, has a tabular structure storing chromosome, start/end position, and strand information of the associated CpG base just like many other biological formats like BED, GFF or SAM. By exporting this tabular data into a TAB\-delimited file and making sure it is accordingly position\-sorted, it can be indexed using the generic [tabix tool](http://www.htslib.org/doc/tabix.html). In general, tabix indexing is a generalization of BAM indexing for generic TAB\-delimited files. It inherits all the advantages of BAM indexing, including data compression and efficient random access in terms of few seek function calls per query (Li [2011](#ref-Li2011-wc)). `MethylKit` relies on [`Rsamtools`](http://bioconductor.org/packages/release/bioc/html/Rsamtools.html) which implements tabix functionality for R. This way internal methylKit objects can be efficiently stored as a compressed file on the disk and still be quickly accessed. Another advantage is that existing compressed files can be loaded in interactive sessions, allowing the backup and transfer of intermediate analysis results.
`methylKit` provides the capability for storing objects in tabix format within various functions. Every methylKit object has its tabix\-based flat\-file database equivalent. For example, when reading a methylation call file, the `dbtype` argument can be provided, which will create tabix\-based objects.
```
myobj=methRead( file.list,
sample.id=list("test1","test2","ctrl1","ctrl2"),
assembly="hg18",treatment=c(1,1,0,0),
dbtype="tabix")
```
The advantage of tabix\-based objects is of course saving memory and more efficient parallelization for differential methylation calculation. However, since the data is written to a file and indexed whenever a new object is created, working with tabix\-based objects will be slower at certain steps of the analysis compared to in\-memory objects.
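Existing in\-memory objects can also be converted to their tabix\-backed equivalents; below is a hedged sketch using the `makeMethylDB()` function (check the methylKit documentation for the exact arguments).
```
# convert an in-memory methylBase object to a tabix-backed methylBaseDB
meth.db=makeMethylDB(meth,dbdir="methylDB")
```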
10\.6 Annotation of DMRs/DMCs and segments
------------------------------------------
The regions of interest obtained through differential methylation or segmentation analysis often need to be integrated with genome annotation datasets. Without this type of integration, differential methylation or segmentation results will be hard to interpret in biological terms. The most common annotation task is to see where regions of interest land in relation to genes, gene parts, and regulatory regions: Do they mostly occupy promoter, intronic or exonic regions? Do they overlap with repeats? Do they overlap with other epigenomic markers or long\-range regulatory regions? These questions are not specific to methylation; nearly all regions of interest obtained via genome\-wide studies have to deal with such questions. Thus, there are already multiple software tools that can produce such annotations. One is the Bioconductor package [`genomation`](http://bioconductor.org/packages/release/bioc/html/genomation.html) (Akalin, Franke, Vlahoviček, et al. [2015](#ref-Akalin2015-yk)). It can be used to annotate DMRs/DMCs and it can also be used to integrate methylation proportions over the genome with other quantitative information and produce meta\-gene plots or heatmaps. Below, we read a BED file for transcripts and use that to annotate DMCs with promoter/intron/exon/intergenic annotation. The `genomation::readTranscriptFeatures()` function reads a BED12 file, calculates the coordinates of promoters, exons, and introns, and the subsequent function uses that information for annotation.
```
library(genomation)
# read the gene BED file
transcriptBED=system.file("extdata", "refseq.hg18.bed.txt",
package = "methylKit")
gene.obj=readTranscriptFeatures(transcriptBED)
#
# annotate differentially methylated CpGs with
# promoter/exon/intron using annotation data
#
annotateWithGeneParts(as(all.diff,"GRanges"),gene.obj)
```
```
## promoter exon intron intergenic
## 28.24 15.27 33.59 58.02
## promoter exon intron intergenic
## 28.24 0.00 13.74 58.02
## promoter exon intron
## 0.29 0.03 0.17
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 5 815 49918 52410 94644 313528
```
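The annotation object can also be stored and then summarized or plotted with genomation helper functions; a short sketch follows (the function names are from the genomation package; output is omitted here).
```
# store the annotation object
diffAnn=annotateWithGeneParts(as(all.diff,"GRanges"),gene.obj)
# percentage of DMCs overlapping each gene feature
getTargetAnnotationStats(diffAnn,percentage=TRUE,precedence=TRUE)
# pie chart of the same annotation
plotTargetAnnotation(diffAnn,precedence=TRUE,
                     main="differential methylation annotation")
```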
Similarly, we can read the CpG island annotation and annotate our differentially methylated bases/regions with them.
```
# read the shores and flanking regions and name the flanks as shores
# and CpG islands as CpGi
cpg.file=system.file("extdata", "cpgi.hg18.bed.txt",
package = "methylKit")
cpg.obj=readFeatureFlank(cpg.file,
feature.flank.name=c("CpGi","shores"))
#
# convert methylDiff object to GRanges and annotate
diffCpGann=annotateWithFeatureFlank(as(all.diff,"GRanges"),
cpg.obj$CpGi,cpg.obj$shores,
feature.name="CpGi",flank.name="shores")
```
Besides these, DMRs/DMCs might be associated with changes in gene regulation. It might be desirable to overlap them with known transcription binding sites or motifs or histone modifications. These are simply overlap operations for these kinds of analysis. You can use the `genomation::annotateWithFeature()` function or any other approach shown in Chapter [6](genomicIntervals.html#genomicIntervals), and you can also do motif discovery with methods shown in Chapter [9](chipseq.html#chipseq).
### 10\.6\.1 Further annotation with genes or gene sets
The next obvious steps for annotating your DMRs/DMCs are figuring out which genes they are associated with. Figuring out which genes are associated with your regions of interest can give a better idea of the biological implications of the methylation changes. Once you have your gene set, you can do gene set analysis as shown in Chapter [8](rnaseqanalysis.html#rnaseqanalysis) or in Chapter [11](multiomics.html#multiomics). There are also packages such as [`rGREAT`](https://www.bioconductor.org/packages/release/bioc/html/rGREAT.html) that can simultaneously associate DMRs or any other region of interest to genes and do gene set analysis.
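As an illustration, a minimal rGREAT sketch might look like the following; note that GREAT supports only certain genome assemblies, so hg18 coordinates may first need to be lifted over to a supported assembly (the arguments here are assumptions to be checked against the rGREAT documentation).
```
library(rGREAT)
# submit regions of interest to GREAT; assumes coordinates were lifted to hg19
job=submitGreatJob(as(all.diff,"GRanges"),species="hg19")
# retrieve gene set enrichment tables
enrichment=getEnrichmentTables(job)
```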
10\.8 Exercises
---------------
### 10\.8\.1 Differential methylation
The main objective of this exercise is getting differentially methylated cytosines between two groups of samples: IDH\-mut (AML patients with IDH mutations) vs. NBM (normal bone marrow samples).
1. Download methylation call files from GEO. These files are readable by methylKit using default `methRead` arguments. \[Difficulty: **Beginner**]
| samples | Link |
| --- | --- |
| IDH1\_rep1 | [link](https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSM919990&format=file&file=GSM919990%5FIDH%2Dmut%5F1%5FmyCpG%2Etxt%2Egz) |
| IDH1\_rep2 | [link](https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSM919991&format=file&file=GSM919991%5FIDH%5Fmut%5F2%5FmyCpG%2Etxt%2Egz) |
| NBM\_rep1 | [link](https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSM919982&format=file&file=GSM919982%5FNBM%5F1%5FmyCpG%2Etxt%2Egz) |
| NBM\_rep2 | [link](https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSM919984&format=file&file=GSM919984%5FNBM%5F2%5FRep1%5FmyCpG%2Etxt%2Egz) |
Example code for reading a file:
```
library(methylKit)
m=methRead("~/Downloads/GSM919982_NBM_1_myCpG.txt.gz",
sample.id = "idh",assembly="hg18")
```
2. Find differentially methylated cytosines. Use chr1 and chr2 only if you need to save time. You can subset it after you download the files either in R or Unix. The files are for hg18 assembly of human genome. \[Difficulty: **Beginner**]
3. Describe the general differential methylation trend. What is the main effect for most CpGs? \[Difficulty: **Intermediate**]
4. Annotate differentially methylated cytosines (DMCs) as promoter/intron/exon. \[Difficulty: **Beginner**]
5. Which genes are the nearest to DMCs? \[Difficulty: **Intermediate**]
6. Can you do gene set analysis either in R or via web\-based tools? \[Difficulty: **Advanced**]
### 10\.8\.2 Methylome segmentation
The main objective of this exercise is to learn how to do methylome segmentation and the downstream analysis for annotation and data integration.
1. Download the human embryonic stem\-cell (H1 Cell Line) methylation bigWig files from the [Roadmap Epigenomics website](http://egg2.wustl.edu/roadmap/web_portal/processed_data.html#MethylData). It may take a while to understand how the website is structured and which bigWig file to use. That is part of the exercise. The files you will download are for hg19 assembly unless stated otherwise. \[Difficulty: **Beginner**]
2. Do segmentation on the hESC methylome. You can use only chr1 if using the whole genome takes too much time. \[Difficulty: **Intermediate**]
3. Annotate segments and the kinds of gene\-based features each segment class overlaps with (promoter/exon/intron). \[Difficulty: **Beginner**]
4. For each segment type, annotate the segments with chromHMM annotations from the Roadmap Epigenome database available [here](https://egg2.wustl.edu/roadmap/web_portal/chr_state_learning.html#core_15state). The specific file you should use is [here](https://egg2.wustl.edu/roadmap/data/byFileType/chromhmmSegmentations/ChmmModels/coreMarks/jointModel/final/E003_15_coreMarks_mnemonics.bed.gz). This is a bed file with chromHMM annotations. chromHMM annotations are parts of the genome identified by a hidden\-Markov\-model\-based machine learning algorithm. The segments correspond to active promoters, enhancers, active transcription, insulators, etc. The chromHMM model uses histone modification ChIP\-seq and potentially other ChIP\-seq data sets to annotate the genome.\[Difficulty: **Advanced**]
11\.1 Use case: Multi\-omics data from colorectal cancer
--------------------------------------------------------
The examples in this chapter will use the following data: a set of 121 tumors from the TCGA (Weinstein, Collisson, Mills, et al. [2013](#ref-tcga_pan_cancer)) colorectal cancer cohort. The tumors have been profiled for gene expression using RNA\-seq, mutations using Exome\-seq, and copy number variations using genotyping arrays. Projects such as TCGA have turbocharged efforts to sub\-divide cancer into subtypes. Although two tumors may both arise in the colon, they can have distinct molecular profiles, which is important for treatment decisions. The subset of tumors used in this chapter belongs to two distinct molecular subtypes defined by the Colorectal Cancer Subtyping Consortium (Guinney, Dienstmann, Wang, et al. [2015](#ref-cmscc)), *CMS1* and *CMS3*. The following code snippets load this multi\-omics data from the companion package, starting with gene expression data from RNA\-seq (see Chapter [8](rnaseqanalysis.html#rnaseqanalysis)). Below we are reading the RNA\-seq data from the `compGenomRData` package.
```
# read in the csv from the companion package as a data frame
csvfile <- system.file("extdata", "multi-omics", "COREAD_CMS13_gex.csv",
package="compGenomRData")
x1 <- read.csv(csvfile, row.names=1)
# Fix the gene names in the data frame
rownames(x1) <- sapply(strsplit(rownames(x1), "\\|"), function(x) x[1])
# Output a table
knitr::kable(head(t(head(x1))), caption="Example gene expression data (head)")
```
TABLE 11\.1: Example gene expression data (head)
| | RNF113A | S100A13 | AP3D1 | ATP6V1G1 | UBQLN4 | TPPP3 |
| --- | --- | --- | --- | --- | --- | --- |
| TCGA.A6\.2672 | 21\.19567 | 19\.72600 | 11\.53022 | 0\.00000 | 15\.35637 | 12\.76747 |
| TCGA.A6\.3809 | 21\.50866 | 18\.65729 | 12\.98830 | 14\.12675 | 19\.62208 | 0\.00000 |
| TCGA.A6\.5661 | 20\.08072 | 18\.97034 | 10\.83759 | 15\.31325 | 0\.00000 | 0\.00000 |
| TCGA.A6\.5665 | 0\.00000 | 11\.88336 | 10\.24248 | 19\.79300 | 0\.00000 | 0\.00000 |
| TCGA.A6\.6653 | 0\.00000 | 12\.07753 | 0\.00000 | 0\.00000 | 0\.00000 | 0\.00000 |
| TCGA.A6\.6780 | 0\.00000 | 12\.99128 | 0\.00000 | 19\.96976 | 13\.17618 | 11\.58742 |
Table 11\.1 shows the head of the gene expression matrix. The rows correspond to patients, referred to by their TCGA identifier, as the first column of the table. Columns represent the genes, and the values are RPKM expression values. The column names are the names or symbols of the genes. The details about how these expression values are calculated are in Chapter [8](rnaseqanalysis.html#rnaseqanalysis).
We first **read mutation data** with the following code snippet.
```
# read in the csv from the companion package as a data frame
csvfile <- system.file("extdata", "multi-omics", "COREAD_CMS13_muts.csv",
package="compGenomRData")
x2 <- read.csv(csvfile, row.names=1)
# Set mutation data to be binary (so if a gene has more than 1 mutation,
# we only count one)
x2[x2>0]=1
# output a table
knitr::kable(head(t(head(x2))), caption="Example mutation data (head)")
```
TABLE 11\.2: Example mutation data (head)
| | TTN | TP53 | APC | KRAS | SYNE1 | MUC16 |
| --- | --- | --- | --- | --- | --- | --- |
| TCGA.A6\.2672 | 1 | 0 | 0 | 0 | 1 | 1 |
| TCGA.A6\.3809 | 1 | 0 | 0 | 0 | 0 | 0 |
| TCGA.A6\.5661 | 1 | 0 | 0 | 0 | 1 | 1 |
| TCGA.A6\.5665 | 1 | 0 | 0 | 0 | 1 | 1 |
| TCGA.A6\.6653 | 1 | 0 | 0 | 1 | 0 | 0 |
| TCGA.A6\.6780 | 1 | 0 | 0 | 0 | 0 | 1 |
Table 11\.2 shows the mutations of these tumors (mutations were introduced in Chapter [1](intro.html#intro)). In the mutation matrix, each cell is a binary 1/0, indicating whether or not a tumor has a non\-synonymous mutation in the gene indicated by the column. These types of mutations change the amino acid sequence, therefore they are likely to change the function of the protein.
Next, we **read copy number data** with the following code snippet.
```
# read in the csv from the companion package as a data frame
csvfile <- system.file("extdata", "multi-omics", "COREAD_CMS13_cnv.csv",
package="compGenomRData")
x3 <- read.csv(csvfile, row.names=1)
# output a table
knitr::kable(head(t(head(x3))),
caption="Example copy number data for CRC samples")
```
TABLE 11\.3: Example copy number data for CRC samples
| | 8p23\.2 | 8p23\.3 | 8p23\.1 | 8p21\.3 | 8p12 | 8p22 |
| --- | --- | --- | --- | --- | --- | --- |
| TCGA.A6\.2672 | 0 | 0 | 0 | 0 | 0 | 0 |
| TCGA.A6\.3809 | 0 | 0 | 0 | 0 | 0 | 0 |
| TCGA.A6\.5661 | 0 | 0 | 0 | 0 | 0 | 0 |
| TCGA.A6\.5665 | 0 | 0 | 0 | 0 | 0 | 0 |
| TCGA.A6\.6653 | 0 | 0 | 0 | 0 | 0 | 0 |
| TCGA.A6\.6780 | 0 | 0 | 0 | 0 | 0 | 0 |
Finally, Table 11\.3 shows GISTIC scores (Mermel, Schumacher, Hill, et al. [2011](#ref-mermel2011gistic2)) for copy number alterations in these tumors. During transformation from healthy cells to cancer cells, the genome sometimes undergoes large\-scale instability; large segments of the genome might be replicated or lost. This will be reflected in each segment’s “copy number”. In this matrix, each column corresponds to a chromosome segment, and the value of the cell is a real\-valued score indicating if this segment has been amplified (copied more) or lost, relative to a non\-cancer control from the same patient.
Each of the data types (gene expression, mutations, copy number variation) on its own, provides some signal which allows us to somewhat separate the samples into the two different subtypes. In order to explore these relations, we must first obtain the subtypes of these tumors. The following code snippet reads these, also from the companion package:
```
# read in the csv from the companion package as a data frame
csvfile <- system.file("extdata", "multi-omics", "COREAD_CMS13_subtypes.csv",
package="compGenomRData")
covariates <- read.csv(csvfile, row.names=1)
# Fix the TCGA identifiers so they match up with the omics data
rownames(covariates) <- gsub(pattern = '-', replacement = '\\.',
rownames(covariates))
covariates <- covariates[colnames(x1),]
# create a dataframe which will be used to annotate later graphs
anno_col <- data.frame(cms=as.factor(covariates$cms_label))
rownames(anno_col) <- rownames(covariates)
```
Before proceeding with any multi\-omics integration analysis which might obscure the underlying data, it is important to take a look at each omic data type on its own, and in this case in particular, to examine their relation to the underlying condition, i.e. the cancer subtype. A great way to get an eagle\-eye view of such large data is using heatmaps (see Chapter [4](unsupervisedLearning.html#unsupervisedLearning) for more details).
We will first check the gene expression data in relation to the subtypes. One way of doing that is plotting a heatmap and clustering the tumors, while displaying a color annotation atop the heatmap, indicating which subtype each tumor belongs to. This is shown in Figure [11\.1](use-case-multi-omics-data-from-colorectal-cancer.html#fig:mogeneExpressionHeatmap), which is generated by the following code snippet:
```
pheatmap::pheatmap(x1,
annotation_col = anno_col,
show_colnames = FALSE,
show_rownames = FALSE,
main="Gene expression data")
```
FIGURE 11\.1: Heatmap of gene expression data for colorectal cancers.
In Figure [11\.1](use-case-multi-omics-data-from-colorectal-cancer.html#fig:mogeneExpressionHeatmap), each column is a tumor, and each row is a gene. The values in the cells are RPKM expression values. There is another band above the heatmap annotating each column (tumor) with its corresponding subtype. The tumors are clustered using hierarchical clustering, denoted by the dendrogram above the heatmap, according to which the columns (tumors) are ordered. While this ordering corresponds somewhat to the subtypes, it would not be possible to cut this dendrogram in a way which achieves perfect separation between the subtypes.
Next we repeat the same exercise using the mutation data. The following snippet generates Figure [11\.2](use-case-multi-omics-data-from-colorectal-cancer.html#fig:momutationsHeatmap):
```
pheatmap::pheatmap(x2,
annotation_col = anno_col,
show_colnames = FALSE,
show_rownames = FALSE,
main="Mutation data")
```
FIGURE 11\.2: Heatmap of mutation data for colorectal cancers.
An examination of Figure [11\.2](use-case-multi-omics-data-from-colorectal-cancer.html#fig:momutationsHeatmap) shows that tumors clustered and ordered by mutation data correspond very closely to their CMS subtypes. However, one should be careful in drawing conclusions about this result. Upon closer examination, you might notice that the separating factor seems to be that CMS1 tumors have significantly more mutations than CMS3 tumors. This, rather than mutations in specific genes, seems to be driving the clustering result. Nevertheless, this hyper\-mutated status is an important indicator for this subtype.
Finally, we look into copy number variation data and try to see if clustered samples are in concordance with subtypes. The following code snippet generates Figure [11\.3](use-case-multi-omics-data-from-colorectal-cancer.html#fig:moCNVHeatmap):
```
pheatmap::pheatmap(x3,
annotation_col = anno_col,
show_colnames = FALSE,
show_rownames = FALSE,
main="Copy number data")
```
FIGURE 11\.3: Heatmap of copy number variation data, colorectal cancers.
The interpretation of Figure [11\.3](use-case-multi-omics-data-from-colorectal-cancer.html#fig:moCNVHeatmap) is left as an exercise for the reader.
It is clear that while there is some “signal” in each of these omics types, as is evident from these heatmaps, it is equally clear that none of these omics types completely explains the subtypes on its own. Each omics type provides but a glimpse into what makes each of these tumors different from a healthy cell. Through the rest of this chapter, we will demonstrate how, by analyzing gene expression, mutations, and copy number variations in tandem, we can get a better picture of what separates these cancer subtypes.
The next section will describe latent variable models for multi\-omics integrations. Latent variable models are a form of dimensionality reduction (see Chapter [4](unsupervisedLearning.html#unsupervisedLearning)). Each omics data type is “big data” in its own right; a typical RNA\-seq experiment profiles upwards of 50 thousand different transcripts. The difficulties in handling large data matrices are only exacerbated by the introduction of more omics types into the analysis, as we are suggesting here. In order to overcome these challenges, latent variable models are a powerful way to reduce the dimensionality of the data down to a manageable size.
11\.3 Matrix factorization methods for unsupervised multi\-omics data integration
---------------------------------------------------------------------------------
Matrix factorization techniques attempt to infer a set of latent variables from the data by finding factors of a data matrix. Principal Component Analysis (introduced in Chapter [4](unsupervisedLearning.html#unsupervisedLearning)) is a form of matrix factorization which finds factors based on the covariance structure of the data. Generally, matrix factorization methods may be formulated as
\\\[
X \= WH,
\\]
where \\(X\\) is the *data matrix*, \\(\[M \\times N]\\) where \\(M\\) is the number of features (typically genes), and \\(N\\) is the number of samples. \\(W\\) is an \\(\[M \\times K]\\) *factors* matrix, and \\(H\\) is the \\(\[K \\times N]\\) *latent variable coefficient matrix*. Tying this back to PCA, where \\(X \= U \\Sigma V^T\\), we may formulate the factorization in the same terms by setting \\(W\=U\\Sigma\\) and \\(H\=V^T\\). If \\(K\=rank(X)\\), this factorization is lossless, i.e. \\(X\=WH\\). However if we choose \\(K\<rank(X)\\), the factorization is lossy, i.e. \\(X \\approx WH\\). In that case, matrix factorization methods normally opt to minimize the error
\\\[
min\~\\\|X\-WH\\\|.
\\]
As we normally seek a latent variable model with a considerably lower dimensionality than \\(X\\), this is the more common case.
The loss function we choose to minimize may be further subject to some constraints or regularization terms. Regularization has been introduced in Chapter [5](supervisedLearning.html#supervisedLearning). In the current context of latent factor models, a regularization term might be added to the loss function, i.e. we might choose to minimize \\(min\~\\\|X\-WH\\\| \+ \\lambda \\\|W\\\|^2\\) (this is called \\(L\_2\\)\-regularization) instead of merely the reconstruction error. Adding such a term to our loss function here will push the \\(W\\) matrix entries towards 0, in effect balancing between better reconstruction of the data and a more parsimonious model. A more parsimonious latent factor model is one with more sparsity in the latent factors. This sparsity is desirable for model interpretation, as will become evident in later sections.
FIGURE 11\.4: General matrix factorization framework. The data matrix on the left\-hand side is decomposed into factors on the right\-hand side. The equality may be an approximation as some matrix factorization methods are lossless (exact), while others are an approximation.
In Figure [11\.4](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:momatrixFactorization), the \\(5 \\times 4\\) data matrix \\(X\\) is decomposed to a 2\-dimensional latent variable model.
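To make the connection to PCA concrete, the following toy sketch computes a rank\-\\(K\\) factorization of a small matrix via the singular value decomposition, setting \\(W\=U\\Sigma\\) and \\(H\=V^T\\) as described above (all object names are illustrative).
```
set.seed(42)
X <- matrix(rnorm(20), nrow=5)     # toy [M x N] data matrix, M=5, N=4
K <- 2                             # number of latent factors
s <- svd(X)
W <- s$u[,1:K] %*% diag(s$d[1:K])  # [M x K] factor matrix, W = U*Sigma
H <- t(s$v[,1:K])                  # [K x N] latent variable coefficients
norm(X - W %*% H, type="F")        # reconstruction error of the lossy fit
```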
### 11\.3\.1 Multiple factor analysis
Multiple factor analysis is a natural starting point for a discussion about matrix factorization methods for integrating multiple data types. It is a straightforward extension of PCA into the domain of multiple data types.[2](#fn2)
Figure [11\.5](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:moMFA) sketches a naive extension of PCA to a multi\-omics context.
FIGURE 11\.5: A naive extension of PCA to multi\-omics; data matrices from different platforms are stacked, before applying PCA.
Formally, we have
\\\[
X \= \\begin{bmatrix}
X\_{1} \\\\
X\_{2} \\\\
\\vdots \\\\
X\_{L}
\\end{bmatrix} \= WH,
\\]
a joint decomposition of the different data matrices (\\(X\_i\\)) into the factor matrix \\(W\\) and the latent variable matrix \\(H\\). This way, we can leverage the ability of PCA to find the highest variance decomposition of the data, when the data consists of different omics types. As a reminder, PCA finds the linear combinations of the features which, when the data is projected onto them, preserve the most variance of any \\(K\\)\-dimensional space. But because measurements from different experiments have different scales, they will also have variance (and co\-variance) at different scales.
Multiple Factor Analysis addresses this issue and achieves balance among the data types by normalizing each of the data types, before stacking them and passing them on to PCA. Formally, MFA is given by
\\\[
X\_n \= \\begin{bmatrix}
X\_{1} / \\lambda^{(1\)}\_1 \\\\
X\_{2} / \\lambda^{(2\)}\_1 \\\\
\\vdots \\\\
X\_{L} / \\lambda^{(L)}\_1
\\end{bmatrix} \= WH,
\\]
where \\(\\lambda^{(i)}\_1\\) is the first eigenvalue of the principal component decomposition of \\(X\_i\\).
Following this normalization step, we apply PCA to \\(X\_n\\). From there on, MFA analysis is the same as PCA analysis, and we refer the reader to Chapter [4](unsupervisedLearning.html#unsupervisedLearning) for more details.
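Before turning to the dedicated R implementation below, the normalization step can be sketched in a few lines of base R, following the formula above (a didactic sketch, assuming `x1`, `x2`, `x3` hold the omics matrices with features in rows, as elsewhere in this chapter).
```
# first eigenvalue of the PCA of each data matrix (samples in rows for prcomp)
lambda1 <- function(x) prcomp(t(x))$sdev[1]^2
# scale each omics block by its first eigenvalue, stack, and run plain PCA
X.n <- rbind(x1/lambda1(x1), x2/lambda1(x2), x3/lambda1(x3))
pca.mfa <- prcomp(t(X.n))
```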
#### 11\.3\.1\.1 MFA in R
MFA is available through the CRAN package `FactoMineR`. The code snippet below shows how to run it:
```
# run the MFA function from the FactoMineR package
r.mfa <- FactoMineR::MFA(
t(rbind(x1,x2,x3)), # binding the omics types together
c(dim(x1)[1], dim(x2)[1], dim(x3)[1]), # specifying the dimensions of each
graph=FALSE)
```
Since this generates a two\-dimensional factorization of the multi\-omics data, we can now plot each tumor as a dot in a 2D scatter plot to see how well the MFA factors separate the cancer subtypes. The following code snippet generates Figure [11\.6](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:momfascatterplot):
```
# first, extract the H and W matrices from the MFA run result
mfa.h <- r.mfa$global.pca$ind$coord
mfa.w <- r.mfa$quanti.var$coord
# create a dataframe with the H matrix and the CMS label
mfa_df <- as.data.frame(mfa.h)
mfa_df$subtype <- factor(covariates[rownames(mfa_df),]$cms_label)
# create the plot
ggplot2::ggplot(mfa_df, ggplot2::aes(x=Dim.1, y=Dim.2, color=subtype)) +
ggplot2::geom_point() + ggplot2::ggtitle("Scatter plot of MFA")
```
FIGURE 11\.6: Scatter plot of 2\-dimensional MFA for multi\-omics data shows separation between the subtypes.
Figure [11\.6](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:momfascatterplot) shows remarkable separation between the cancer subtypes; it is easy enough to draw a line separating the tumors into CMS subtypes with good accuracy.
Another way to examine the MFA factors, which is also useful for factor models with more than two components, is a heatmap, as shown in Figure [11\.7](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:momfaheatmap), generated by the following code snippet:
```
pheatmap::pheatmap(t(mfa.h)[1:2,], annotation_col = anno_col,
show_colnames = FALSE,
main="MFA for multi-omics integration")
```
FIGURE 11\.7: A heatmap of the two MFA components shows separation between the cancer subtypes.
Figure [11\.7](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:momfaheatmap) shows that indeed, when tumors are clustered and ordered using the two MFA factors we learned above, their separation into CMS clusters is nearly trivial.
**Want to know more ?**
* Learn more about FactoMineR on the website: <http://factominer.free.fr/>
* Learn more about MFA on the Wikipedia page <https://en.wikipedia.org/wiki/Multiple_factor_analysis>
### 11\.3\.2 Joint non\-negative matrix factorization
As introduced in Chapter [4](unsupervisedLearning.html#unsupervisedLearning), NMF (Non\-negative Matrix Factorization) is an algorithm from 2000 that seeks to find a non\-negative additive decomposition for a non\-negative data matrix. It takes the familiar form \\(X \\approx WH\\), with \\(X \\ge 0\\), \\(W \\ge 0\\), and \\(H \\ge 0\\). The non\-negative constraints make a lossless decomposition (i.e. \\(X\=WH\\)) generally impossible. Hence, NMF attempts to find a solution which minimizes the Frobenius norm of the reconstruction:
\\\[
min\~\\\|X\-WH\\\|\_F \\\\
W \\ge 0, \\\\
H \\ge 0,
\\]
where the Frobenius norm \\(\\\|\\cdot\\\|\_F\\) is the matrix equivalent of the Euclidean distance:
\\\[
\\\|X\\\|\_F \= \\sqrt{\\sum\_i\\sum\_jx\_{ij}^2}.
\\]
This is typically solved for \\(W\\) and \\(H\\) using random initializations followed by iterations of a multiplicative update rule:
\\\[\\begin{align}
W\_{t\+1} \&\= W\_{t} \\circ \\frac{XH\_t^T}{W\_tH\_tH\_t^T} \\\\
H\_{t\+1} \&\= H\_t \\circ \\frac{W\_{t\+1}^TX}{W\_{t\+1}^TW\_{t\+1}H\_t},
\\end{align}\\]
where \\(\\circ\\) denotes element\-wise multiplication and the fractions denote element\-wise division.
Since this algorithm is guaranteed only to converge to a local minimum, it is typically run several times with random initializations, and the best result is kept.
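A bare\-bones R implementation of these multiplicative updates may make the procedure clearer; this is a didactic sketch only, and in practice the `NMF` package used below should be preferred.
```
nmf_mu <- function(X, K, n.iter=200, eps=1e-9) {
  M <- nrow(X); N <- ncol(X)
  W <- matrix(runif(M*K), M, K)  # random non-negative initialization
  H <- matrix(runif(K*N), K, N)
  for (i in seq_len(n.iter)) {
    # element-wise multiplicative updates; eps guards against division by zero
    W <- W * (X %*% t(H)) / (W %*% H %*% t(H) + eps)
    H <- H * (t(W) %*% X) / (t(W) %*% W %*% H + eps)
  }
  list(W=W, H=H)
}
```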
In the multi\-omics context, we will, as in the MFA case, wish to find a decomposition for an integrated data matrix of the form
\\\[
X \= \\begin{bmatrix}
X\_{1} \\\\
X\_{2} \\\\
\\vdots \\\\
X\_{L}
\\end{bmatrix},
\\]
with \\(X\_i\\)s denoting data from different omics platforms.
As NMF seeks to minimize the reconstruction error \\(\\\|X\-WH\\\|\_F\\), some care needs to be taken with regards to data normalization. Different omics platforms may produce data with different scales (i.e. real\-valued gene expression quantification, binary mutation data, etc.), and so will have different baseline Frobenius norms. To address this, when doing Joint NMF, we first feature\-normalize each data matrix, and then normalize by the Frobenius norm of the data matrix. Formally, we run NMF on
\\\[
X \= \\begin{bmatrix}
X\_{1}^N / \\alpha\_1 \\\\
X\_{2}^N / \\alpha\_2 \\\\
\\vdots \\\\
X\_{L}^N / \\alpha\_L
\\end{bmatrix},
\\]
where \\(X\_i^N\\) is the feature\-normalized data matrix \\(X\_i^N \= \\frac{x^{ij}}{\\sum\_jx^{ij}}\\), and \\(\\alpha\_i \= \\\|X\_{i}^N\\\|\_F\\).
Another consideration with NMF is the non\-negativity constraint. Different omics data types may have negative values, for instance, copy\-number variations (CNVs) may be positive, indicating gains, or negative, indicating losses, as in Table [11\.4](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#tab:mocnvsplitcolshow1). In order to turn such data into a non\-negative form, we will split each feature into two features, one new feature holding all the non\-negative values of the original feature, and another feature holding the absolute value of the negative ones, as in Table [11\.5](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#tab:mocnvsplitcolshow2).
TABLE 11\.4: Example copy number data. Data can be both positive (amplified regions) or negative (deleted regions).
| | seg1 | seg2 |
| --- | --- | --- |
| samp1 | 1 | 0 |
| samp2 | 2 | 1 |
| samp3 | 1 | \-2 |
TABLE 11\.5: Example copy number data after splitting each column into a column representing copy number gains (\+) and a column representing deletions (\-). This data matrix is non\-negative, and thus suitable for NMF algorithms.
| | seg1\+ | seg1\- | seg2\+ | seg2\- |
| --- | --- | --- | --- | --- |
| samp1 | 1 | 0 | 0 | 0 |
| samp2 | 2 | 0 | 1 | 0 |
| samp3 | 1 | 0 | 0 | 2 |
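The NMF code below calls a helper named `split_neg_columns()`, which performs exactly this transformation but is not part of the NMF package; a minimal sketch is given here (the column ordering differs from Table 11\.5, which does not matter for NMF).
```
split_neg_columns <- function(x) {
  xm <- as.matrix(x)
  pos <- pmax(xm, 0)   # gains: keep non-negative values, zero out losses
  neg <- pmax(-xm, 0)  # losses: absolute value of the negative entries
  colnames(pos) <- paste0(colnames(x), "+")
  colnames(neg) <- paste0(colnames(x), "-")
  as.data.frame(cbind(pos, neg))
}
```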
#### 11\.3\.2\.1 NMF in R
Many NMF algorithms are available through the CRAN package `NMF`. The following code chunk demonstrates how it may be run:
```
# Feature-normalize the data
x1.featnorm <- x1 / rowSums(x1)
x2.featnorm <- x2 / rowSums(x2)
x3.featnorm <- x3 / rowSums(x3)
# Normalize by each omics type's frobenius norm
x1.featnorm.frobnorm <- x1.featnorm / norm(as.matrix(x1.featnorm), type="F")
x2.featnorm.frobnorm <- x2.featnorm / norm(as.matrix(x2.featnorm), type="F")
x3.featnorm.frobnorm <- x3.featnorm / norm(as.matrix(x3.featnorm), type="F")
# Split the features of the CNV matrix into two non-negative features each
x3.featnorm.frobnorm.nonneg <- t(split_neg_columns(t(x3.featnorm.frobnorm)))
# run the nmf function from the NMF package
require(NMF)
r.nmf <- nmf(t(rbind(x1.featnorm.frobnorm,
x2.featnorm.frobnorm,
x3.featnorm.frobnorm.nonneg)),
2,
method='Frobenius')
# extract the H and W matrices from the nmf run result
nmf.h <- NMF::basis(r.nmf)
nmf.w <- NMF::coef(r.nmf)
nmfw <- t(nmf.w)
```
As with MFA, we can examine how well 2\-factor NMF splits tumors into subtypes by looking at the scatter plot in Figure [11\.8](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:monmfscatterplot), generated by the following code chunk:
```
# create a dataframe with the H matrix and the CMS label (subtype)
nmf_df <- as.data.frame(nmf.h)
colnames(nmf_df) <- c("dim1", "dim2")
nmf_df$subtype <- factor(covariates[rownames(nmf_df),]$cms_label)
# create the scatter plot
ggplot2::ggplot(nmf_df, ggplot2::aes(x=dim1, y=dim2, color=subtype)) +
ggplot2::geom_point() +
ggplot2::ggtitle("Scatter plot of 2-component NMF for multi-omics integration")
```
FIGURE 11\.8: NMF creates a disentangled representation of the data using two components which allow for separation between tumor sub\-types CMS1 and CMS3 based on NMF factors learned from multi\-omics data.
Figure [11\.8](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:monmfscatterplot) shows an important difference between NMF and MFA (PCA). It shows the tendency of samples to lie close to the X or Y axes, that is, the tendency of each sample to be high in only one of the factors. This will be discussed more in the later section on disentangledness.
Again, should we choose to run NMF with more than two factors, a more useful plot might be the heatmap shown in Figure [11\.9](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:monmfheatmap), generated by the following code snippet:
```
pheatmap::pheatmap(t(nmf_df[,1:2]),
annotation_col = anno_col,
show_colnames=FALSE,
main="Heatmap of 2-component NMF")
```
FIGURE 11\.9: A heatmap of NMF factors shows separability of tumors into subtype clusters. This plot is more useful than a scatter plot when there are more than two factors.
**Want to know more ?**
* Joint NMF to uncover gene regulatory networks: Zhang S., Li Q., Liu J., Zhou X. J. (2011\). A novel computational framework for simultaneous integration of multiple types of genomic data to identify microRNA\-gene regulatory modules. *Bioinformatics* 27, i401–i409\. 10\.1093/bioinformatics/btr206 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3117336/>
* Joint NMF for cancer research: Zhang S., Liu C.\-C., Li W., Shen H., Laird P. W., Zhou X. J. (2012\). Discovery of multi\-dimensional modules by integrative analysis of cancer genomic data. *Nucleic Acids Res.* 40, 9379–9391\. 10\.1093/nar/gks725 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3479191/>
### 11\.3\.3 iCluster
iCluster takes a Bayesian approach to the latent variable model. In Bayesian statistics, we infer distributions over model parameters, rather than finding a single maximum\-likelihood parameter estimate. In iCluster, we model the data as
\\\[
X\_{(i)} \= W\_{(i)}Z \+ \\epsilon\_i,
\\]
where \\(X\_{(i)}\\) is a data matrix from a single omics platform, \\(W\_{(i)}\\) are model parameters, \\(Z\\) is a latent variable matrix, which is shared among the different omics platforms, and \\(\\epsilon\_i\\) is a “noise” random variable, \\(\\epsilon \\sim N(0,\\Psi)\\), where \\(\\Psi \= diag(\\psi\_1,\\dots, \\psi\_M)\\) is a diagonal covariance matrix.
FIGURE 11\.10: Sketch of iCluster model. Each omics datatype is decomposed to a coefficient matrix and a shared latent variable matrix, plus noise.
Note that with this construction, the omics measurements \\(X\\) are expected to be the same for samples with the same latent variable representation, up to Gaussian noise. Further, we assume a Gaussian prior distribution on the latent variables \\(Z \\sim N(0,I)\\), which means we assume \\(X\_{(i)} \\sim N \\big( 0,W\_{(i)} W\_{(i)}^T \+ \\Psi\_{(i)} \\big)\\). In order to find suitable values for \\(W\\), \\(Z\\), and \\(\\Psi\\), we can write down the multivariate normal log\-likelihood function and optimize it. For a multivariate normal distribution with mean \\(0\\) and covariance \\(\\Sigma\\), the log\-likelihood function is given by
\\\[
\\ell \= \-\\frac{1}{2} \\bigg( \\ln (\|\\Sigma\|) \+ X^T \\Sigma^{\-1} X \+ k\\ln (2 \\pi) \\bigg)
\\]
(this is simply the log of the Probability Density Function of a multivariate Gaussian). For the multi\-omics iCluster case, we have \\(X\=\\big( X\_{(1\)}, \\dots, X\_{(L)} \\big)^T\\), \\(W \= \\big( W\_{(1\)}, \\dots, W\_{(L)} \\big)^T\\), where \\(X\\) is a multivariate normal with \\(0\\)\-mean and \\(\\Sigma \= W W^T \+ \\Psi\\) covariance. Hence, the log\-likelihood function for the iCluster model is given by:
\\\[
\\ell\_{iC}(W,\\Sigma) \= \-\\frac{1}{2} \\bigg( \\ln (\|\\Sigma\|) \+ X^T\\Sigma^{\-1}X \+ \\sum\_{i\=1}^L p\_i \\ln (2 \\pi) \\bigg)
\\]
where \\(p\_i\\) is the number of features in omics data type \\(i\\). Because this model has more parameters than we typically have samples, we need to push the model to use fewer parameters than it has at its disposal, by using regularization. iCluster uses Lasso regularization, which is a direct penalty on the absolute value of the parameters. I.e., instead of optimizing \\(\\ell\_{iC}(W,\\Sigma)\\), we will optimize the regularized log\-likelihood:
\\\[
\\ell \= \\ell\_{iC}(W,\\Sigma) \- \\lambda\\\|W\\\|\_1\.
\\]
The parameter \\(\\lambda\\) acts as a dial to weigh the trade\-off between better model fits (higher log\-likelihood) and a sparser model, with more \\(w\_{ij}\\)s set to \\(0\\), which gives models which generalize better and are more interpretable.
In order to solve this problem, iCluster employs the Expectation Maximization (EM) algorithm. The full details are beyond the scope of this textbook, so we will introduce a short sketch instead. The EM algorithm can be seen as a generalization of the k\-means clustering algorithm (Chapter 4). The basic **EM algorithm** is as follows.
* Initialize \\(W\\) and \\(\\Psi\\).
* **Until convergence of \\(W\\), \\(\\Psi\\)**
+ E\-step: Calculate the expected value of \\(Z\\) given the current estimates of \\(W\\) and \\(\\Psi\\) and the data \\(X\\).
+ M\-step: Calculate maximum likelihood estimates for the parameters \\(W\\) and \\(\\Psi\\) based on the current estimate of \\(Z\\) and the data \\(X\\).
#### 11\.3\.3\.1 iCluster\+: Extending iCluster
iCluster\+ is an extension of the iCluster framework, which allows for omics types to arise from distributions other than a Gaussian. While normal distributions are a good assumption for log\-transformed, centered gene expression data, they are a poor model for binary mutation data, or for copy number variation data, which can typically take the values \\((\-2, \-1, 0, 1, 2\)\\) for heterozygous/homozygous deletions or amplifications. iCluster\+ allows the different \\(X\\)s to have different distributions:
* for binary mutations, \\(X\\) is drawn from a multivariate binomial
* for normal, continuous data, \\(X\\) is drawn from a multivariate Gaussian
* for copy number variations, \\(X\\) is drawn from a multinomial
* for count data, \\(X\\) is drawn from a Poisson.
In that way, iCluster\+ allows us to explicitly model our assumptions about the distributions of our different omics data types, and leverage the strengths of Bayesian inference.
Both iCluster and iCluster\+ make use of sophisticated Bayesian inference algorithms (EM for iCluster, Metropolis\-Hastings MCMC for iCluster\+), which means they do not scale up trivially. Therefore, it is recommended to filter down the features to a manageable size before inputting data to the algorithm. The exact size of “manageable” data depends on your hardware, but a rule of thumb is that dimensions in the thousands are ok, but in the tens of thousands might be too slow.
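A simple way to do such filtering is to keep only the most variable features in each omics matrix; below is a hedged sketch (the cutoff of 2000 features is arbitrary and should be tuned to your hardware).
```
# keep the n most variable features (rows) of an omics matrix
top_var <- function(x, n=2000) {
  v <- apply(x, 1, var)
  x[order(v, decreasing=TRUE)[seq_len(min(n, nrow(x)))], ]
}
x1.filtered <- top_var(x1)
```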
#### 11\.3\.3\.2 Running iCluster\+
iCluster\+ is available through the Bioconductor package `iClusterPlus`. The following code snippet demonstrates how it can be run with two components:
```
# run the iClusterPlus function
r.icluster <- iClusterPlus::iClusterPlus(
t(x1), # Providing each omics type
t(x2),
t(x3),
type=c("gaussian", "binomial", "multinomial"), # Providing the distributions
K=2, # provide the number of factors to learn
alpha=c(1,1,1), # as well as other model parameters
lambda=c(.03,.03,.03))
# extract the H and W matrices from the run result
# here, we refer to H as z, to keep with iCluster terminology
icluster.z <- r.icluster$meanZ
rownames(icluster.z) <- rownames(covariates) # fix the row names
icluster.ws <- r.icluster$beta
# construct a dataframe with the H matrix (z) and the cancer subtypes
# for later plotting
icp_df <- as.data.frame(icluster.z)
colnames(icp_df) <- c("dim1", "dim2")
rownames(icp_df) <- colnames(x1)
icp_df$subtype <- factor(covariates[rownames(icp_df),]$cms_label)
```
As with other methods, we examine the iCluster results by looking at the scatter plot in Figure [11\.11](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:moiclusterplusscatter) and the heatmap in Figure [11\.12](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:moiclusterplusheatmap). Both figures show that iCluster learns two factors which nearly perfectly discriminate between tumors of the two subtypes.
FIGURE 11\.11: iCluster\+ learns factors which allow tumor sub\-types CMS1 and CMS3 to be discriminated.
FIGURE 11\.12: iCluster\+ factors, shown in a heatmap, separate tumors into their subtypes well.
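For reference, plots like those in Figures 11\.11 and 11\.12 can be generated from `icp_df` using the same ggplot2/pheatmap pattern applied to the other factorization methods in this chapter. The snippet below is a sketch; `anno_col` is assumed to be the column annotation data frame used for the earlier heatmaps.

```
# scatter plot of the two iCluster+ factors, colored by subtype
ggplot2::ggplot(icp_df, ggplot2::aes(x=dim1, y=dim2, color=subtype)) +
  ggplot2::geom_point() +
  ggplot2::ggtitle("Scatter plot of iCluster+ factors")
# heatmap of the two factors across tumors
pheatmap::pheatmap(t(icp_df[,1:2]), annotation_col = anno_col,
                   show_colnames = FALSE,
                   main="iCluster+ factors for multi-omics integration")
```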
**Want to know more ?**
* Read the original iCluster paper: Shen R., Olshen A. B., Ladanyi M. (2009\). Integrative clustering of multiple genomic data types using a joint latent variable model with application to breast and lung cancer subtype analysis. *Bioinformatics* 25, 2906–2912\. 10\.1093/bioinformatics/btp543 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2800366/>
* Read the original iClusterPlus paper, an extension of iCluster: Shen R., Mo Q., Schultz N., Seshan V. E., Olshen A. B., Huse J., et al. (2012\). Integrative subtype discovery in glioblastoma using iCluster. *PLoS ONE* 7:e35236\. 10\.1371/journal.pone.0035236 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3335101/>
* Learn more about the LASSO for model regularization: Tibshirani, R. (1996\). Regression shrinkage and selection via the lasso. *J. Royal. Statist. Soc B.*, Vol. 58, No. 1, pages 267\-288: [http://www\-stat.stanford.edu/%7Etibs/lasso/lasso.pdf](http://www-stat.stanford.edu/%7Etibs/lasso/lasso.pdf)
* Learn more about the EM algorithm: Dempster, A. P., et al. Maximum likelihood from incomplete data via the EM algorithm. *Journal of the Royal Statistical Society. Series B (Methodological)*, vol. 39, no. 1, 1977, pp. 1–38\. JSTOR, JSTOR: <http://www.jstor.org/stable/2984875>
* Read about MCMC algorithms: Hastings, W.K. (1970\). Monte Carlo sampling methods using Markov chains and their applications. *Biometrika* 57 (1\): 97–109\. doi:10\.1093/biomet/57\.1\.97 <https://www.jstor.org/stable/2334940>
11\.4 Clustering using latent factors
-------------------------------------
A common analysis in biological investigations is clustering. This is often interesting in cancer studies as one hopes to find groups of tumors (clusters) which behave similarly, i.e. have similar risks and/or respond to the same drugs. PCA is a common step in clustering analyses, and so it is easy to see how the latent variable models above may all be a useful pre\-processing step before clustering. In the examples below, we will use the latent variables inferred by the algorithms in the previous section on the set of colorectal cancer tumors from the TCGA. For a more complete introduction to clustering, see Chapter [4](unsupervisedLearning.html#unsupervisedLearning).
### 11\.4\.1 One\-hot clustering
A specific clustering method for NMF data is to assume each sample is driven by one component, i.e. that the number of clusters \\(K\\) is the same as the number of latent variables in the model and that each sample may be associated with one of those components. We assign each sample a cluster label based on the latent variable which affects it the most. Figure [11\.9](matrix-factorization-methods-for-unsupervised-multi-omics-data-integration.html#fig:monmfheatmap) above (heatmap of 2\-component NMF) shows the latent variable values for the two latent variables, for the 72 tumors, obtained by Joint NMF.
The two rows are the two latent variables, and the columns are the 72 tumors. We can observe that most tumors are indeed driven mainly by one of the factors, and not a combination of the two. We can use this to assign each tumor a cluster label based on its dominant factor, shown in the following code snippet, which also produces the heatmap in Figure [11\.13](clustering-using-latent-factors.html#fig:moNMFClustering).
```
# one-hot clustering in one line of code:
# assign each sample the cluster according to its dominant NMF factor
# easily accessible using the max.col function
nmf.clusters <- max.col(nmf.h)
names(nmf.clusters) <- rownames(nmf.h)
# create an annotation data frame indicating the NMF one-hot clusters
# as well as the cancer subtypes, for the heatmap plot below
anno_nmf_cl <- data.frame(
nmf.cluster=factor(nmf.clusters),
cms.subtype=factor(covariates[rownames(nmf.h),]$cms_label)
)
# generate the plot
pheatmap::pheatmap(t(nmf.h[order(nmf.clusters),]),
cluster_cols=FALSE, cluster_rows=FALSE,
annotation_col = anno_nmf_cl,
show_colnames = FALSE,border_color=NA,
main="Joint NMF factors\nwith clusters and molecular subtypes")
```
FIGURE 11\.13: Joint NMF factors with clusters, and molecular sub\-types. One\-hot clustering assigns one cluster per dimension, where each sample is assigned a cluster based on its dominant component. The clusters largely recapitulate the CMS sub\-types.
We see that using one\-hot clustering with Joint NMF, we were able to find two clusters in the data which correspond fairly well with the molecular subtype of the tumors.
The one\-hot clustering method does not lend itself very well to the other methods discussed above, i.e. iCluster and MFA. The latent variables produced by those other methods may be negative, and further, in the case of iCluster, are assumed to follow a multivariate Gaussian distribution. As such, it is not trivial to pick one “dominant factor” for them. For NMF variants, however, this is a very common way to assign clusters.
### 11\.4\.2 K\-means clustering
K\-means clustering was introduced in Chapter [4](unsupervisedLearning.html#unsupervisedLearning). Briefly, k\-means is a special case of the EM algorithm, and indeed iCluster was originally conceived as an extension of k\-means from binary cluster assignments to real\-valued latent variables. The iCluster algorithm, as its name suggests, calls for applying K\-means clustering to its latent variables after the inference step. The following code snippet shows how to pull K\-means clusters out of the iCluster results, and produces the heatmap in Figure [11\.14](clustering-using-latent-factors.html#fig:moiClusterHeatmap), which shows how well these clusters correspond to cancer subtypes.
```
# use the kmeans function to cluster the iCluster H matrix (here, z)
# using 2 as the number of clusters.
icluster.clusters <- kmeans(icluster.z, 2)$cluster
names(icluster.clusters) <- rownames(icluster.z)
# create an annotation dataframe for the heatmap plot
# containing the kmeans cluster assignments and the cancer subtypes
anno_icluster_cl <- data.frame(
iCluster=factor(icluster.clusters),
cms.subtype=factor(covariates$cms_label))
# generate the figure
pheatmap::pheatmap(
t(icluster.z[order(icluster.clusters),]), # order z by the kmeans clusters
cluster_cols=FALSE, # use cluster_cols and cluster_rows=FALSE
cluster_rows=FALSE, # as we want the ordering by k-means clusters to hold
show_colnames = FALSE,border_color=NA,
annotation_col = anno_icluster_cl,
main="iCluster factors\nwith clusters and molecular subtypes")
```
FIGURE 11\.14: K\-means clustering on iCluster\+ factors largely recapitulates the CMS sub\-types.
This demonstrates the ability of iClusterPlus to find clusters which correspond to molecular subtypes, based on multi\-omics data.
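One simple way to quantify this correspondence is to cross\-tabulate the K\-means cluster assignments against the CMS labels, as in the following sketch:

```
# contingency table of iCluster-based clusters vs. CMS subtypes;
# a clean diagonal (or anti-diagonal) indicates good agreement
table(cluster=icluster.clusters,
      subtype=covariates[names(icluster.clusters),]$cms_label)
```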
11\.5 Biological interpretation of latent factors
-------------------------------------------------
### 11\.5\.1 Inspection of feature weights in loading vectors
The most straightforward way to interpret the latent factors in a biological context is to look at the coefficients which are associated with them. The latent variable models introduced above all take the linear form \\(X \\approx WH\\), where \\(W\\) is a factor matrix, with coefficients tying each latent variable to each of the features in the \\(L\\) original multi\-omics data matrices. By inspecting these coefficients, we can get a sense of which multi\-omics features are co\-regulated. The code snippet below generates Figure [11\.15](biological-interpretation-of-latent-factors.html#fig:moNMFHeatmap), which shows the coefficients of the Joint NMF analysis above:
```
# create an annotation dataframe for the heatmap
# for each feature, indicating its omics-type
data_anno <- data.frame(
omics=c(rep('expression',dim(x1)[1]),
rep('mut',dim(x2)[1]),
rep('cnv',dim(x3.featnorm.frobnorm.nonneg)[1])))
rownames(data_anno) <- c(rownames(x1),
paste0("mut:", rownames(x2)),
rownames(x3.featnorm.frobnorm.nonneg))
rownames(nmfw) <- rownames(data_anno)
# generate the heat map
pheatmap::pheatmap(nmfw,
cluster_cols = FALSE,
annotation_row = data_anno,
main="NMF coefficients",
clustering_distance_rows = "manhattan",
fontsize_row = 1)
```
FIGURE 11\.15: Heatmap showing the association of input features from multi\-omics data (gene expression, copy number variation, and mutations), with JNMF factors. Gene expression features dominate both factors, but copy numbers and mutations mostly affect only one factor each.
Inspection of the factor coefficients in the heatmap above reveals that Joint NMF has found two nearly orthogonal non\-negative factors. One is associated with high expression of the HOXC11, ZIC5, and XIRP1 genes, frequent mutations in the BRAF, PCDHGA6, and DNAH5 genes, as well as losses in the 18q12\.2 and gains in 8p21\.1 cytobands. The other factor is associated with high expression of the SOX1 gene, more frequent mutations in the APC, KRAS, and TP53 genes, and a weak association with some CNVs.
#### 11\.5\.1\.1 Disentangled representations
The property displayed above, where each feature is predominantly associated with only a single factor, is termed *disentangledness*, i.e. it leads to *disentangled* latent variable representations, as changing one input feature only affects a single latent variable. This property is very desirable as it greatly simplifies the biological interpretation of modules. Here, we have two modules with a set of co\-occurring molecular signatures which merit deeper investigation into the mechanisms by which these different omics features are related. For this reason, NMF is widely used in computational biology today.
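One rough way to put a number on this dominance (a sketch, not part of the original analysis) is to compute, for each feature, the share of its total absolute loading that falls on its top factor:

```
# for each feature (row of the NMF W matrix), the fraction of its total
# absolute loading carried by its dominant factor; values near 1
# indicate disentangled features
dominance <- apply(abs(nmfw), 1, function(w) max(w) / (sum(w) + 1e-12))
summary(dominance)
```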
### 11\.5\.2 Making sense of factors using enrichment analysis
In order to investigate the oncogenic processes that drive the differences between tumors, we may draw upon biological prior knowledge by looking for overlaps between genes that drive certain tumors, and genes involved in familiar biological processes.
#### 11\.5\.2\.1 Enrichment analysis
The recent decades of genomics have uncovered many of the ways in which genes cooperate to perform biological functions in concert. This work has resulted in rich annotations of genes, groups of genes, and the different functions they carry out. Examples of such annotations include the Gene Ontology Consortium’s *GO terms* (Ashburner, Ball, Blake, et al. [2000](#ref-go_first_paper)), the *Reactome pathways database* (A. Fabregat, Jupe, Matthews, et al. [2018](#ref-reactome_latent_paper)), and the *Kyoto Encyclopedia of Genes and Genomes* (Kanehisa, Furumichi, Tanabe, et al. [2017](#ref-kegg_latest_paper)). These resources, as well as others, publish lists of so\-called *gene sets*, or *pathways*, which are sets of genes known to operate together in some biological function, e.g. protein synthesis, DNA mismatch repair, cellular adhesion, and many other functions. Gene set enrichment analysis is a method which looks for overlaps between genes we have found to be of interest, e.g. because they are implicated in a certain tumor type, and the a priori gene sets discussed above.
In the context of making sense of latent factors, the question we will be asking is whether the genes which drive the value of a latent factor (the genes with the highest factor coefficients) also belong to any interesting annotated gene sets, and whether the overlap is greater than we would expect by chance. If there are \\(N\\) genes in total, \\(K\\) of which belong to a gene set, the probability that \\(k\\) out of the \\(n\\) genes associated with a latent factor are also associated with a gene set is given by the hypergeometric distribution:
\\\[
P(k) \= \\frac{\\binom{K}{k} \\binom{N\-K}{n\-k}}{\\binom{N}{n}}.
\\]
The **hypergeometric test** uses the hypergeometric distribution to assess the statistical significance of the presence of genes belonging to a gene set in the latent factor. The null hypothesis is that there is no relationship between genes in a gene set and genes in a latent factor. When testing for over\-representation of gene set genes in a latent factor, the P value from the hypergeometric test is the probability of getting \\(k\\) or more genes from a gene set in a latent factor:
\\\[
p \= \\sum\_{i\=k}^{K} P(i).
\\]
The hypergeometric enrichment test is also referred to as *Fisher’s one\-sided exact test*. This way, we can determine if the genes associated with a factor significantly overlap (beyond chance) the genes involved in a biological process. Because we will typically be testing many gene sets, we will also need to apply multiple testing correction, such as Benjamini\-Hochberg correction (see Chapter 3, multiple testing correction).
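The following is a minimal base R sketch of this test; the counts are made up purely for illustration:

```
N <- 20000  # total number of genes
K <- 150    # genes in the gene set
n <- 300    # genes associated with the latent factor
k <- 12     # overlap between the gene set and the factor genes
# P(overlap >= k) under the hypergeometric null
phyper(k - 1, K, N - K, n, lower.tail = FALSE)
# equivalently, Fisher's one-sided exact test on the 2x2 table
fisher.test(matrix(c(k, K - k, n - k, N - K - n + k), nrow = 2),
            alternative = "greater")$p.value
# with many gene sets, adjust the p-values, e.g. p.adjust(p, method = "BH")
```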
#### 11\.5\.2\.2 Example in R
In R, we can do this analysis using the `enrichR` package, which gives us access to many gene set libraries. In the example below, we will find the genes preferentially associated with NMF factor 1 or NMF factor 2, based on the contribution of those genes’ expression values to each factor. Then, we’ll use `enrichR` to query the Gene Ontology terms which might be overlapping:
```
require(enrichR)
# select genes associated preferentially with each factor
# by their relative loading in the W matrix
genes.factor.1 <- names(which(nmfw[1:dim(x1)[1],1] > nmfw[1:dim(x1)[1],2]))
genes.factor.2 <- names(which(nmfw[1:dim(x1)[1],1] < nmfw[1:dim(x1)[1],2]))
# call the enrichr function to find gene sets enriched
# in each latent factor in the GO Biological Processes 2018 library
go.factor.1 <- enrichR::enrichr(genes.factor.1,
databases = c("GO_Biological_Process_2018")
)$GO_Biological_Process_2018
go.factor.2 <- enrichR::enrichr(genes.factor.2,
databases = c("GO_Biological_Process_2018")
)$GO_Biological_Process_2018
```
The top GO terms associated with each factor can then be inspected directly from the resulting data frames.
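For example, the following sketch lists the most significantly enriched terms for factor 2, assuming the standard `enrichr` output columns `Term` and `Adjusted.P.value`:

```
# show the most significantly enriched GO terms for NMF factor 2
head(go.factor.2[order(go.factor.2$Adjusted.P.value),
                 c("Term", "Adjusted.P.value")])
```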
### 11\.5\.3 Interpretation using additional covariates
Another way to ascribe biological significance to the latent variables is by correlating them with additional covariates we might have about the samples. In our example, the colorectal cancer tumors have also been characterized for microsatellite instability (MSI) status, using an external test (typically PCR\-based). By examining the latent variable values as they relate to a tumor’s MSI status, we might discover that we’ve learned latent factors that are related to it. The following code snippet demonstrates how this might be looked into, by generating Figures [11\.16](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates) and [11\.17](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates2):
```
# create a data frame holding covariates (age, gender, MSI status)
a <- data.frame(age=covariates$age,
gender=as.numeric(covariates$gender),
msi=covariates$msi)
```
```
## Warning in data.frame(age = covariates$age, gender =
## as.numeric(covariates$gender), : NAs introduced by coercion
```
```
b <- nmf.h
colnames(b) <- c('factor1', 'factor2')
# concatenate the covariate dataframe with the H matrix
cov_factor <- cbind(a,b)
# generate the figure
ggplot2::ggplot(cov_factor, ggplot2::aes(x=msi, y=factor1, group=msi)) +
ggplot2::geom_boxplot() +
ggplot2::ggtitle("NMF factor 1 microsatellite instability")
```
FIGURE 11\.16: Box plot showing MSI/MSS status distribution and NMF factor 1 values.
```
ggplot2::ggplot(cov_factor, ggplot2::aes(x=msi, y=factor2, group=msi)) +
ggplot2::geom_boxplot() +
ggplot2::ggtitle("NMF factor 2 and microsatellite instability")
```
FIGURE 11\.17: Box plot showing MSI/MSS status distribution and NMF factor 2 values.
Figures [11\.16](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates) and [11\.17](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates2) show that the values of NMF factor 1 and NMF factor 2 separate the tumors by their MSI (microsatellite instable) or MSS (microsatellite stable) status.
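To put a number on this association, one could run a simple two\-group test on the factor values, as in the sketch below (assuming `msi` is a two\-level factor in `cov_factor`):

```
# non-parametric test: does factor 1 differ between MSI and MSS tumors?
wilcox.test(factor1 ~ msi, data = cov_factor)
```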
### 11\.5\.1 Inspection of feature weights in loading vectors
The most straightforward way to go about interpreting the latent factors in a biological context, is to look at the coefficients which are associated with them. The latent variable models introduced above all take the linear form \\(X \\approx WH\\), where \\(W\\) is a factor matrix, with coefficients tying each latent variable with each of the features in the \\(L\\) original multi\-omics data matrices. By inspecting these coefficients, we can get a sense of which multi\-omics features are co\-regulated. The code snippet below generates Figure [11\.15](biological-interpretation-of-latent-factors.html#fig:moNMFHeatmap), which shows the coefficients of the Joint NMF analysis above:
```
# create an annotation dataframe for the heatmap
# for each feature, indicating its omics-type
data_anno <- data.frame(
omics=c(rep('expression',dim(x1)[1]),
rep('mut',dim(x2)[1]),
rep('cnv',dim(x3.featnorm.frobnorm.nonneg)[1])))
rownames(data_anno) <- c(rownames(x1),
paste0("mut:", rownames(x2)),
rownames(x3.featnorm.frobnorm.nonneg))
rownames(nmfw) <- rownames(data_anno)
# generate the heat map
pheatmap::pheatmap(nmfw,
cluster_cols = FALSE,
annotation_row = data_anno,
main="NMF coefficients",
clustering_distance_rows = "manhattan",
fontsize_row = 1)
```
FIGURE 11\.15: Heatmap showing the association of input features from multi\-omics data (gene expression, copy number variation, and mutations), with JNMF factors. Gene expression features dominate both factors, but copy numbers and mutations mostly affect only one factor each.
Inspection of the factor coefficients in the heatmap above reveals that Joint NMF has found two nearly orthogonal non\-negative factors. One is associated with high expression of the HOXC11, ZIC5, and XIRP1 genes, frequent mutations in the BRAF, PCDHGA6, and DNAH5 genes, as well as losses in the 18q12\.2 and gains in 8p21\.1 cytobands. The other factor is associated with high expression of the SOX1 gene, more frequent mutations in the APC, KRAS, and TP53 genes, and a weak association with some CNVs.
#### 11\.5\.1\.1 Disentangled representations
The property displayed above, where each feature is predominantly associated with only a single factor, is termed *disentangledness*, i.e. it leads to *disentangled* latent variable representations, as changing one input feature only affects a single latent variable. This property is very desirable as it greatly simplifies the biological interpretation of modules. Here, we have two modules with a set of co\-occurring molecular signatures which merit deeper investigation into the mechanisms by which these different omics features are related. For this reason, NMF is widely used in computational biology today.
#### 11\.5\.1\.1 Disentangled representations
The property displayed above, where each feature is predominantly associated with only a single factor, is termed *disentangledness*, i.e. it leads to *disentangled* latent variable representations, as changing one input feature only affects a single latent variable. This property is very desirable as it greatly simplifies the biological interpretation of modules. Here, we have two modules with a set of co\-occurring molecular signatures which merit deeper investigation into the mechanisms by which these different omics features are related. For this reason, NMF is widely used in computational biology today.
### 11\.5\.2 Making sense of factors using enrichment analysis
In order to investigate the oncogenic processes that drive the differences between tumors, we may draw upon biological prior knowledge by looking for overlaps between genes that drive certain tumors, and genes involved in familiar biological processes.
#### 11\.5\.2\.1 Enrichment analysis
The recent decades of genomics have uncovered many of the ways in which genes cooperate to perform biological functions in concert. This work has resulted in rich annotations of genes, groups of genes, and the different functions they carry out. Examples of such annotations include the Gene Ontology Consortium’s *GO terms* (Ashburner, Ball, Blake, et al. [2000](#ref-go_first_paper), @go\_latest\_paper), the *Reactome pathways database* (A. Fabregat, Jupe, Matthews, et al. [2018](#ref-reactome_latent_paper)), and the *Kyoto Encyclopaedia of Genes and Genomes* (Kanehisa, Furumichi, Tanabe, et al. [2017](#ref-kegg_latest_paper)). These resources, as well as others, publish lists of so\-called *gene sets*, or *pathways*, which are sets of genes which are known to operate together in some biological function, e.g. protein synthesis, DNA mismatch repair, cellular adhesion, and many other functions. Gene set enrichment analysis is a method which looks for overlaps between genes which we have found to be of interest, e.g. by them being implicated in a certain tumor type, and the a\-priori gene sets discussed above.
In the context of making sense of latent factors, the question we will be asking is whether the genes which drive the value of a latent factor (the genes with the highest factor coefficients) also belong to any interesting annotated gene sets, and whether the overlap is greater than we would expect by chance. If there are \\(N\\) genes in total, \\(K\\) of which belong to a gene set, the probability that \\(k\\) out of the \\(n\\) genes associated with a latent factor are also associated with a gene set is given by the hypergeometric distribution:
\\\[
P(k) \= \\frac{\\binom{K}{k} \\binom{N\-K}{n\-k}}{\\binom{N}{n}}.
\\]
The **hypergeometric test** uses the hypergeometric distribution to assess the statistical significance of the presence of genes belonging to a gene set in the latent factor. The null hypothesis is that there is no relationship between genes in a gene set and genes in a latent factor. When testing for over\-representation of gene set genes in a latent factor, the P value from the hypergeometric test is the probability of getting \\(k\\) or more genes from a gene set in a latent factor:
\\\[
p \= \\sum\_{i\=k}^{K} P(i).
\\]
The hypergeometric enrichment test is also referred to as *Fisher’s one\-sided exact test*. This way, we can determine whether the genes associated with a factor overlap the genes involved in a biological process more than expected by chance. Because we will typically be testing many gene sets, we will also need to apply multiple testing correction, such as Benjamini\-Hochberg correction (see Chapter 3, multiple testing correction).
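In base R, this tail probability can be computed directly with `phyper()`, or equivalently with `fisher.test()`. Here is a minimal sketch using hypothetical counts:

```
# hypothetical counts: N genes in total, K in the gene set,
# n genes associated with the latent factor, k in the overlap
N <- 20000; K <- 150; n <- 300; k <- 12
# one-sided p-value: probability of k or more overlapping genes
phyper(k - 1, K, N - K, n, lower.tail = FALSE)
# the same test as Fisher's one-sided exact test on the 2x2 table
contingency <- matrix(c(k, K - k, n - k, N - K - n + k), nrow = 2)
fisher.test(contingency, alternative = "greater")$p.value
# with many gene sets, adjust for multiple testing, e.g.:
# p.adjust(p_values, method = "BH")
```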
#### 11\.5\.2\.2 Example in R
In R, we can do this analysis using the `enrichR` package, which gives us access to many gene set libraries. In the example below, we will find the genes preferentially associated with NMF factor 1 or NMF factor 2, based on the contribution of those genes’ expression values to each factor. Then, we’ll use `enrichR` to query the Gene Ontology terms which might be overlapping:
```
require(enrichR)
# select genes associated preferentially with each factor
# by their relative loading in the W matrix
genes.factor.1 <- names(which(nmfw[1:dim(x1)[1],1] > nmfw[1:dim(x1)[1],2]))
genes.factor.2 <- names(which(nmfw[1:dim(x1)[1],1] < nmfw[1:dim(x1)[1],2]))
# call the enrichr function to find gene sets enriched
# in each latent factor in the GO Biological Processes 2018 library
go.factor.1 <- enrichR::enrichr(genes.factor.1,
databases = c("GO_Biological_Process_2018")
)$GO_Biological_Process_2018
go.factor.2 <- enrichR::enrichr(genes.factor.2,
databases = c("GO_Biological_Process_2018")
)$GO_Biological_Process_2018
```
The top GO terms associated with NMF factor 2 can then be inspected in the resulting `go.factor.2` data frame.
### 11\.5\.3 Interpretation using additional covariates
Another way to ascribe biological significance to the latent variables is by correlating them with additional covariates we might have about the samples. In our example, the colorectal cancer tumors have also been characterized for microsatellite instability (MSI) status, using an external test (typically PCR\-based). By examining the latent variable values as they relate to a tumor’s MSI status, we might discover that we have learned latent factors related to it. The following code snippet demonstrates how this can be examined, by generating Figures [11\.16](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates) and [11\.17](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates2):
```
# create a data frame holding covariates (age, gender, MSI status)
a <- data.frame(age=covariates$age,
gender=as.numeric(covariates$gender),
msi=covariates$msi)
```
```
## Warning in data.frame(age = covariates$age, gender =
## as.numeric(covariates$gender), : NAs introduced by coercion
```
```
b <- nmf.h
colnames(b) <- c('factor1', 'factor2')
# concatenate the covariate dataframe with the H matrix
cov_factor <- cbind(a,b)
# generate the figure
ggplot2::ggplot(cov_factor, ggplot2::aes(x=msi, y=factor1, group=msi)) +
ggplot2::geom_boxplot() +
ggplot2::ggtitle("NMF factor 1 microsatellite instability")
```
FIGURE 11\.16: Box plot showing MSI/MSS status distribution and NMF factor 1 values.
```
ggplot2::ggplot(cov_factor, ggplot2::aes(x=msi, y=factor2, group=msi)) +
ggplot2::geom_boxplot() +
ggplot2::ggtitle("NMF factor 2 and microsatellite instability")
```
FIGURE 11\.17: Box plot showing MSI/MSS status distribution and NMF factor 2 values.
Figures [11\.16](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates) and [11\.17](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates2) show that NMF factors 1 and 2 separate the tumors by their MSI (microsatellite instability) or MSS (microsatellite stability) status.
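To go beyond the visual impression of the box plots, the association can be quantified with a simple two\-group test. A minimal sketch, assuming the `cov_factor` data frame from the code above and that `msi` takes two values (MSI/MSS):

```
# test whether factor values differ between MSI and MSS tumors
wilcox.test(factor1 ~ msi, data = cov_factor)
wilcox.test(factor2 ~ msi, data = cov_factor)
```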
11\.6 Exercises
---------------
### 11\.6\.1 Matrix factorization methods
1. Find features associated with iCluster and MFA factors, and visualize the feature weights. \[Difficulty: **Beginner**]
2. Normalizing the data matrices by their \\(\\lambda\_1\\)’s as in MFA supposes we wish to assign each data type the same importance in the downstream analysis. This leads to a natural generalization whereby the different data types may be differently weighted. Provide an implementation of weighted\-MFA where the different data types may be assigned individual weights. \[Difficulty: **Intermediate**]
3. In order to use NMF algorithms on data which can be negative, we need to split each feature into two new features, one positive and one negative. Implement the following function, and check that the included test does not fail (a sketch of one possible approach follows this exercise list): \[Difficulty: **Intermediate/Advanced**]
```
# Implement this function
split_neg_columns <- function(x) {
# your code here
}
# a test that shows the function above works
test_split_neg_columns <- function() {
input <- as.data.frame(cbind(c(1,2,1),c(0,1,-2)))
output <- as.data.frame(cbind(c(1,2,1), c(0,0,0), c(0,1,0), c(0,0,2)))
stopifnot(all(output == split_neg_columns(input)))
}
# run the test to verify your solution
test_split_neg_columns()
```
4. The iCluster\+ algorithm has some parameters which may be tuned for maximum performance. The `iClusterPlus` package has a method, `iClusterPlus::tune.iClusterPlus`, which does this automatically based on the Bayesian Information Criterion (BIC). Run this method on the data from the examples above and find the optimal lambda and alpha values. \[Difficulty: **Beginner/Intermediate**]
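For exercise 3 above, a minimal sketch of one possible approach (not a reference solution) is to split every column `v` into its positive part `pmax(v, 0)` and the positive part of its negation `pmax(-v, 0)`:

```
split_neg_columns <- function(x) {
  # for each column, emit its positive part and the positive part
  # of its negation as two adjacent columns
  parts <- lapply(x, function(v) data.frame(pos = pmax(v, 0),
                                            neg = pmax(-v, 0)))
  res <- do.call(cbind, parts)
  names(res) <- paste0("V", seq_len(ncol(res)))
  res
}
```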
### 11\.6\.2 Clustering using latent factors
1. Why is one\-hot clustering more suitable for NMF than iCluster? \[Difficulty: **Intermediate**]
2. Which clustering algorithm produces better results when combined with NMF: K\-means or one\-hot clustering? Why do you think that is? \[Difficulty: **Intermediate/Advanced**]
### 11\.6\.3 Biological interpretation of latent factors
1. Another covariate in the metadata of these tumors is their *CpG island methylator phenotype* (CIMP). This is a phenotype carried by a group of colorectal cancers that display hypermethylation of promoter CpG island sites, resulting in the inactivation of some tumor suppressors. It is also assayed using an external test. Do any of the multi\-omics methods surveyed find a latent variable that is associated with the tumor’s CIMP phenotype? \[Difficulty: **Beginner/Intermediate**]
2. Does MFA give a disentangled representation? Does `iCluster` give disentangled representations? Why do you think that is? \[Difficulty: **Advanced**]
3. Figures [11\.16](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates) and [11\.17](biological-interpretation-of-latent-factors.html#fig:moNMFClinicalCovariates2) show that MSI/MSS tumors have different values for NMF factors 1 and 2\. Which NMF factor is associated with microsatellite instability? \[Difficulty: **Beginner**]
4. Microsatellite instability (MSI) is associated with hyper\-mutated tumors. As seen in Figure [11\.2](use-case-multi-omics-data-from-colorectal-cancer.html#fig:momutationsHeatmap), one of the subtypes has tumors with significantly more mutations than the other. Which subtype is that? Which NMF factor is associated with that subtype? And which NMF factor is associated with MSI? \[Difficulty: **Advanced**]
Prerequisites
=============
*Last update: Sun Oct 25 12:05:18 2020 \-0500 (79503f6ee)*
You need a couple of things to get `rTorch` working:
1. Install Python [Anaconda](https://www.anaconda.com/products/individual). Preferably the 64\-bit version, with Python 3\.6 or above. I have successfully tested Anaconda under four different operating systems: Windows (Win10 and Windows Server 2008\); macOS (Sierra, Mojave and Catalina); Linux (Debian, Fedora and Ubuntu); and lastly, Solaris 10\. All these tests are required by CRAN.
2. Install R, Rtools and RStudio. I used two R versions: R\-3\.6\.3 and R\-4\.0\.2\.
3. Install the R package [reticulate](https://github.com/rstudio/reticulate), which is the one that provides the connection between R and Python.
4. Install the stable version `rTorch` from CRAN, or the latest version under development via GitHub.
> Note. While it is not mandatory to have previously created a `Python` environment with `Anaconda`, where `PyTorch` and `TorchVision` have already been installed, it is another option if for some reason `reticulate` refuses to communicate with the conda environment. Keep in mind that you could also get the `rTorch` *conda* environment installed directly from the `R` console, in a very similar fashion to how R\-TensorFlow does it. Use the function `install_pytorch()` to install a conda environment for PyTorch.
Installation
------------
The **rTorch** package can be installed from CRAN or Github.
From CRAN:
```
install.packages("rTorch")
```
From GitHub, install `rTorch` with:
```
devtools::install_github("f0nzie/rTorch")
```
which will install rTorch from the `main` or `master` branch.
If you want to play with the latest rTorch version, then install it from the `develop` branch, like this:
```
devtools::install_github("f0nzie/rTorch", ref="develop")
```
or clone with Git from the terminal with:
```
git clone https://github.com/f0nzie/rTorch.git
```
This will allow you to build `rTorch` from source.
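After cloning, the package can be built and installed from the local sources; a minimal sketch, assuming the repository was cloned into `./rTorch`:

```
# build and install rTorch from the cloned source tree
devtools::install("rTorch")
```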
Python Anaconda
---------------
If your preference is installing an Anaconda environment first, these are the steps:
### Example
1. Create a `conda` environment from the terminal with:
```
conda create -n r-torch python=3.7
```
2. Activate the new environment with
```
conda activate r-torch
```
3. Install the `PyTorch` related packages with:
```
conda install python=3.6.6 pytorch torchvision cpuonly matplotlib pandas -c pytorch
```
The last part `-c pytorch` specifies the **stable** *conda* channel to download the PyTorch packages. Your *conda* installation may not work if you don’t indicate the channel.
Now, you can load `rTorch` in R or RStudio with:
```
library(rTorch)
```
### Automatic installation
I borrowed the idea of automatic installation from the `tensorflow` package for R to create the function `rTorch::install_pytorch()`. This function will allow you to install a `conda` environment complete with all `PyTorch` requirements, plus the packages you specify. Example:
```
rTorch:::install_conda(package="pytorch=1.4", envname="r-torch",
conda="auto", conda_python_version = "3.6", pip=FALSE,
channel="pytorch",
extra_packages=c("torchvision",
"cpuonly",
"matplotlib",
"pandas"))
```
This is explained in more detail in the [rTorch package manual](https://f0nzie.github.io/rTorch/articles/installation.html).
> **Note.** `matplotlib` and `pandas` are not really necessary for `rTorch` to work, but I was asked if `matplotlib` or `pandas` could work with `PyTorch`. So, I decided to install them for testing and experimentation. They both work.
Chapter 1 Introduction
======================
*Last update: Sun Oct 25 13:00:41 2020 \-0500 (265c0b3c1\)*
1\.1 Motivation
---------------
*Why do we want a package of something that is already working well, such as PyTorch?*
There are several reasons, but the main one is to bring another machine learning framework to R. It may just be me, but I find *PyTorch* very comfortable to work with. It feels pretty much like everything else in Python. Very **pythonic**. I have tried other frameworks in R. The closest match to a natural language like PyTorch is [MXnet](https://mxnet.apache.org/versions/1.7.0/get_started?). Unfortunately, *MXnet* is the hardest to install and maintain after updates.
Yes, I could have worked directly with *PyTorch* in a native Python environment, such as *Jupyter*, *PyCharm*, or [vscode](https://code.visualstudio.com/docs/python/jupyter-support) notebooks, but it is very hard to quit **RMarkdown** once you get used to it. It is the real thing in regards to [literate programming](https://en.wikipedia.org/wiki/Literate_programming) and **reproducibility**. It not only contributes to improving the quality of the code but also establishes a workflow for a better understanding of a subject by your intended readers (Knuth [1983](references.html#ref-knuth1983)), in what has been called the *literate programming paradigm* (Cordes and Brown [1991](references.html#ref-cordes1991)).
This has the additional benefit of giving us the ability to write a combination of *Python* and *R* code together in the same document. There will be times when it is better to create a class in *Python*, and other times when *R* will be more convenient to handle a data structure. I show some examples using `data.frame` and `data.table` along with *PyTorch* tensors.
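As a small taste of that interplay, here is a minimal sketch (assuming `rTorch` is already installed) that moves an R `data.frame` into a PyTorch tensor:

```
library(rTorch)
# a small R data frame with numeric columns
df <- data.frame(x = c(1, 2, 3), y = c(4, 5, 6))
# reticulate converts the R matrix to a numpy array on the way in
dt <- torch$tensor(as.matrix(df))
dt
```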
1\.2 Start using `rTorch`
-------------------------
Starting to use `rTorch` is very simple. After installing the minimum system requirements, such as *conda*, you just call it with:
```
library(rTorch)
```
There are several ways of testing if `rTorch` is up and running. Let’s see some of them:
### 1\.2\.1 Get the PyTorch version
```
rTorch::torch_version()
```
```
#> [1] "1.6"
```
### 1\.2\.2 PyTorch configuration
This will show the PyTorch version and the current version of Python installed, as well as the paths to folders where they reside.
```
rTorch::torch_config()
```
```
#> PyTorch v1.6.0 (~/miniconda3/envs/r-torch/lib/python3.7/site-packages/torch)
#> Python v3.7 (~/miniconda3/envs/r-torch/bin/python)
#> NumPy v1.19.4)
```
---
1\.3 What can you do with `rTorch`
----------------------------------
Practically, you can do everything you could with **PyTorch** within the **R** ecosystem. In addition to the `torch` module, from which you can extract methods, functions and classes, two more modules are available: `torchvision` and `np`, which is short for `numpy`. We can access the modules with:
```
rTorch::torchvision
rTorch::np
rTorch::torch
```
```
#> Module(torchvision)
#> Module(numpy)
#> Module(torch)
```
1\.4 Getting help
-----------------
We can get a glimpse of the first lines of `help("torch")` via a Python chunk:
```
help("torch")
```
```
...
#> NAME
#> torch
#>
#> DESCRIPTION
#> The torch package contains data structures for multi-dimensional
#> tensors and mathematical operations over these are defined.
#> Additionally, it provides many utilities for efficient serializing of
#> Tensors and arbitrary types, and other useful utilities.
...
```
```
help("torch.tensor")
```
```
...
#> Help on built-in function tensor in torch:
#>
#> torch.tensor = tensor(...)
#> tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor
#>
#> Constructs a tensor with :attr:`data`.
#>
#> .. warning::
#>
#> :func:`torch.tensor` always copies :attr:`data`. If you have a Tensor
#> ``data`` and want to avoid a copy, use :func:`torch.Tensor.requires_grad_`
#> or :func:`torch.Tensor.detach`.
#> If you have a NumPy ``ndarray`` and want to avoid a copy, use
#> :func:`torch.as_tensor`.
#>
#> .. warning::
#>
#> When data is a tensor `x`, :func:`torch.tensor` reads out 'the data' from whatever it is passed,
#> and constructs a leaf variable. Therefore ``torch.tensor(x)`` is equivalent to ``x.clone().detach()``
#> and ``torch.tensor(x, requires_grad=True)`` is equivalent to ``x.clone().detach().requires_grad_(True)``.
...
```
```
help("torch.cat")
```
```
...
#> Help on built-in function cat in torch:
#>
#> torch.cat = cat(...)
#> cat(tensors, dim=0, out=None) -> Tensor
#>
#> Concatenates the given sequence of :attr:`seq` tensors in the given dimension.
#> All tensors must either have the same shape (except in the concatenating
#> dimension) or be empty.
#>
#> :func:`torch.cat` can be seen as an inverse operation for :func:`torch.split`
#> and :func:`torch.chunk`.
#>
#> :func:`torch.cat` can be best understood via examples.
#>
#> Args:
#> tensors (sequence of Tensors): any python sequence of tensors of the same type.
#> Non-empty tensors provided must have the same shape, except in the
#> cat dimension.
#> dim (int, optional): the dimension over which the tensors are concatenated
#> out (Tensor, optional): the output tensor.
...
```
```
help("numpy.arange")
```
```
...
#> Help on built-in function arange in numpy:
#>
#> numpy.arange = arange(...)
#> arange([start,] stop[, step,], dtype=None)
#>
#> Return evenly spaced values within a given interval.
#>
#> Values are generated within the half-open interval ``[start, stop)``
#> (in other words, the interval including `start` but excluding `stop`).
#> For integer arguments the function is equivalent to the Python built-in
#> `range` function, but returns an ndarray rather than a list.
#>
#> When using a non-integer step, such as 0.1, the results will often not
#> be consistent. It is better to use `numpy.linspace` for these cases.
#>
#> Parameters
#> ----------
#> start : number, optional
#> Start of interval. The interval includes this value. The default
#> start value is 0.
#> stop : number
#> End of interval. The interval does not include this value, except
#> in some cases where `step` is not an integer and floating point
#> round-off affects the length of `out`.
#> step : number, optional
...
```
Finally, these are the classes for the module `torchvision.datasets`. We are using Python to list them using the `help` function.
```
help("torchvision.datasets")
```
```
...
#> Help on package torchvision.datasets in torchvision:
#>
#> NAME
#> torchvision.datasets
#>
#> PACKAGE CONTENTS
#> caltech
#> celeba
#> cifar
#> cityscapes
#> coco
#> fakedata
#> flickr
#> folder
#> hmdb51
#> imagenet
#> kinetics
#> lsun
#> mnist
#> omniglot
#> phototour
#> samplers (package)
#> sbd
#> sbu
#> semeion
#> stl10
#> svhn
#> ucf101
#> usps
#> utils
#> video_utils
#> vision
#> voc
#>
#> CLASSES
...
```
In other words, all of the functions, modules, and classes in PyTorch are available to rTorch.
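For instance, `torch` callables that are never mentioned in this book can still be reached with the `$` operator; a minimal sketch:

```
library(rTorch)
# standard PyTorch functions, called directly from R
torch$eye(3L)
torch$linspace(0, 1, steps = 5L)
```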
Chapter 2 PyTorch and NumPy
===========================
*Last update: Thu Nov 19 14:20:26 2020 \-0600 (562e6f2c5\)*
2\.1 PyTorch modules in `rTorch`
--------------------------------
### 2\.1\.1 torchvision
This is an example of using the `torchvision` module. With `torchvision` and its `datasets` set of functions, we can download any of the popular machine learning datasets made available by PyTorch. In this example, we will download the training dataset of the **MNIST** handwritten digits. There are 60,000 images in the **training** set and 10,000 images in the **test** set. The images will be downloaded to the folder `./datasets`, or any other folder you want, which can be set with the parameter `root`.
```
library(rTorch)
transforms <- torchvision$transforms
# this is the folder where the datasets will be downloaded
local_folder <- './datasets/mnist_digits'
train_dataset = torchvision$datasets$MNIST(root = local_folder,
train = TRUE,
transform = transforms$ToTensor(),
download = TRUE)
train_dataset
```
```
#> Dataset MNIST
#> Number of datapoints: 60000
#> Root location: ./datasets/mnist_digits
#> Split: Train
#> StandardTransform
#> Transform: ToTensor()
```
You can do the same for the `test` dataset if you set the flag `train = FALSE`. The `test` dataset has only 10,000 images.
```
test_dataset = torchvision$datasets$MNIST(root = local_folder,
train = FALSE,
transform = transforms$ToTensor())
test_dataset
```
```
#> Dataset MNIST
#> Number of datapoints: 10000
#> Root location: ./datasets/mnist_digits
#> Split: Test
#> StandardTransform
#> Transform: ToTensor()
```
### 2\.1\.2 numpy
`numpy` is automatically installed when `PyTorch` is, and there is some interdependence between the two. Anytime we need a transformation that is not available in `PyTorch`, we can use `numpy`. Just keep in mind that `numpy` does not have support for *GPUs*; you will have to convert the numpy array to a torch tensor afterwards.
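Because `numpy` arrays are CPU\-only, a common pattern is to convert to a tensor and then move it to the GPU when one is available; a minimal sketch (reticulate handles the R\-to\-numpy conversion on the way in):

```
library(rTorch)
# is a CUDA-capable GPU visible to PyTorch?
torch$cuda$is_available()
# convert a numpy array to a tensor, then move it to the GPU if possible
x <- torch$from_numpy(np$array(c(1, 2, 3)))
if (torch$cuda$is_available()) x <- x$cuda()
x
```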
2\.2 Common array operations
----------------------------
There are several operations that we can perform with `numpy`, such as creating arrays:
### Create an array
Create an array:
```
# do some array manipulations with NumPy
a <- np$array(c(1:4))
a
```
```
#> [1] 1 2 3 4
```
We could do the same by adding a Python chunk instead, like this:
```
{python}
import numpy as np
a = np.arange(1, 5)
a
```
```
import numpy as np
a = np.arange(1, 5)
a
```
```
#> array([1, 2, 3, 4])
```
Create an array of a desired shape:
```
np$reshape(np$arange(0, 9), c(3L, 3L))
```
```
#> [,1] [,2] [,3]
#> [1,] 0 1 2
#> [2,] 3 4 5
#> [3,] 6 7 8
```
Create an array by spelling out its components and `type`:
```
np$array(list(
list( 73, 67, 43),
list( 87, 134, 58),
list(102, 43, 37),
list( 73, 67, 43),
list( 91, 88, 64),
list(102, 43, 37),
list( 69, 96, 70),
list( 91, 88, 64),
list(102, 43, 37),
list( 69, 96, 70)
), dtype='float32')
```
```
#> [,1] [,2] [,3]
#> [1,] 73 67 43
#> [2,] 87 134 58
#> [3,] 102 43 37
#> [4,] 73 67 43
#> [5,] 91 88 64
#> [6,] 102 43 37
#> [7,] 69 96 70
#> [8,] 91 88 64
#> [9,] 102 43 37
#> [10,] 69 96 70
```
We will use the `train` and `test` datasets that we loaded with `torchvision`.
### Reshape an array
For the same `test` dataset that we loaded above from `MNIST` digits, we will show the image of the handwritten digit and its label or class. Before plotting the image, we need to:
1. Extract the image and label from the dataset
2. Convert the tensor to a numpy array
3. Reshape the tensor as a 2D array
4. Plot the digit and its label
```
rotate <- function(x) t(apply(x, 2, rev)) # function to rotate the matrix
# label for the image
label <- test_dataset[0][[2]]
label
# convert tensor to numpy array
.show_img <- test_dataset[0][[1]]$numpy()
dim(.show_img)
# reshape 3D array to 2D
show_img <- np$reshape(.show_img, c(28L, 28L))
dim(show_img)
```
```
#> [1] 7
#> [1] 1 28 28
#> [1] 28 28
```
We are simply using the `r-base` `image` function:
```
# show in gray shades and rotate
image(rotate(show_img), col = gray.colors(64))
title(label)
```
### Generate a random array in NumPy
```
# set the seed
np$random$seed(123L)
# generate a random array
x = np$random$rand(100L)
dim(x)
# calculate the y array
y = np$sin(x) * np$power(x, 3L) + 3L * x + np$random$rand(100L) * 0.8
class(y)
```
```
#> [1] 100
#> [1] "array"
```
From the classes, we can tell that the `numpy` arrays are automatically converted to `R` arrays. Let’s plot `x` vs `y`:
```
plot(x, y)
```
2\.3 Common tensor operations
-----------------------------
### Generate random tensors
The same operations can be performed with pure torch tensors. This is very similar to the example above; the only difference is that this time we are using tensors and not `numpy` arrays.
```
library(rTorch)
invisible(torch$manual_seed(123L))
x <- torch$rand(100L) # use torch$randn(100L): positive and negative numbers
y <- torch$sin(x) * torch$pow(x, 3L) + 3L * x + torch$rand(100L) * 0.8
class(x)
class(y)
```
```
#> [1] "torch.Tensor" "torch._C._TensorBase" "python.builtin.object"
#> [1] "torch.Tensor" "torch._C._TensorBase" "python.builtin.object"
```
Since the classes are `torch` tensors, to plot them in R, they first need to be converted to numpy, and then to R:
```
plot(x$numpy(), y$numpy())
```
### `numpy` array to PyTorch tensor
Converting a `numpy` array to a PyTorch tensor is a very common operation that I have seen in examples using PyTorch: first create the array in `numpy`, and then convert it to a `torch` tensor.
```
# input array
x = np$array(rbind(
c(0,0,1),
c(0,1,1),
c(1,0,1),
c(1,1,1)))
# the numpy array
x
```
```
#> [,1] [,2] [,3]
#> [1,] 0 0 1
#> [2,] 0 1 1
#> [3,] 1 0 1
#> [4,] 1 1 1
```
This is another common operation that you will find in the PyTorch tutorials: converting a `numpy` array of a certain type to a tensor of the same type:
```
# convert the numpy array to a float type
Xn <- np$float32(x)
# convert the numpy array to a float tensor
Xt <- torch$FloatTensor(Xn)
Xt
```
```
#> tensor([[0., 0., 1.],
#> [0., 1., 1.],
#> [1., 0., 1.],
#> [1., 1., 1.]])
```
2\.4 Python built\-in functions
-------------------------------
To access the Python built\-in functions we make use of the package `reticulate` and the function `import_builtins()`.
Here is part of the set of built\-in functions and operators offered by the R package `reticulate`. I am using the R function `grep()` to discard those which carry the keywords `Error`, `Warning`, or `Exit`.
```
py_bi <- reticulate::import_builtins()
grep("Error|Warning|Exit", names(py_bi), value = TRUE, invert = TRUE,
perl = TRUE)
```
```
#> [1] "abs" "all" "any"
#> [4] "ascii" "BaseException" "bin"
#> [7] "bool" "breakpoint" "bytearray"
#> [10] "bytes" "callable" "chr"
#> [13] "classmethod" "compile" "complex"
#> [16] "copyright" "credits" "delattr"
#> [19] "dict" "dir" "divmod"
#> [22] "Ellipsis" "enumerate" "eval"
#> [25] "Exception" "exec" "exit"
#> [28] "False" "filter" "float"
#> [31] "format" "frozenset" "getattr"
#> [34] "globals" "hasattr" "hash"
#> [37] "help" "hex" "id"
#> [40] "input" "int" "isinstance"
#> [43] "issubclass" "iter" "KeyboardInterrupt"
#> [46] "len" "license" "list"
#> [49] "locals" "map" "max"
#> [52] "memoryview" "min" "next"
#> [55] "None" "NotImplemented" "object"
#> [58] "oct" "open" "ord"
#> [61] "pow" "print" "property"
#> [64] "quit" "range" "repr"
#> [67] "reversed" "round" "set"
#> [70] "setattr" "slice" "sorted"
#> [73] "staticmethod" "StopAsyncIteration" "StopIteration"
#> [76] "str" "sum" "super"
#> [79] "True" "tuple" "type"
#> [82] "vars" "zip"
```
#### Length of a dataset
Sometimes, we will need the Python `len` function to find out the length of an object:
```
py_bi$len(train_dataset)
py_bi$len(test_dataset)
```
```
#> [1] 60000
#> [1] 10000
```
#### Iterators
Iterators are used a lot in dataset operations when running a neural network. In this example we will iterate through only 100 of the 60,000 elements of the MNIST `train` dataset. The goal is to print the “label” or “class” of the digits we are reading. The digits themselves are not shown here; they are stored in tensors.
```
# iterate through training dataset
enum_train_dataset <- py_bi$enumerate(train_dataset)
cat(sprintf("%8s %8s \n", "index", "label"))
for (i in 1:py_bi$len(train_dataset)) {
obj <- reticulate::iter_next(enum_train_dataset)
idx <- obj[[1]] # index number
cat(sprintf("%8d %5d \n", idx, obj[[2]][[2]]))
if (i >= 100) break # print only 100 labels
}
#> index label
#> 0 5
#> 1 0
#> 2 4
#> 3 1
#> 4 9
#> 5 2
#> 6 1
#> 7 3
#> 8 1
#> 9 4
#> 10 3
#> 11 5
#> 12 3
#> 13 6
#> 14 1
#> 15 7
#> 16 2
#> 17 8
#> 18 6
#> 19 9
#> 20 4
#> 21 0
#> 22 9
#> 23 1
#> 24 1
#> 25 2
#> 26 4
#> 27 3
#> 28 2
#> 29 7
#> 30 3
#> 31 8
#> 32 6
#> 33 9
#> 34 0
#> 35 5
#> 36 6
#> 37 0
#> 38 7
#> 39 6
#> 40 1
#> 41 8
#> 42 7
#> 43 9
#> 44 3
#> 45 9
#> 46 8
#> 47 5
#> 48 9
#> 49 3
#> 50 3
#> 51 0
#> 52 7
#> 53 4
#> 54 9
#> 55 8
#> 56 0
#> 57 9
#> 58 4
#> 59 1
#> 60 4
#> 61 4
#> 62 6
#> 63 0
#> 64 4
#> 65 5
#> 66 6
#> 67 1
#> 68 0
#> 69 0
#> 70 1
#> 71 7
#> 72 1
#> 73 6
#> 74 3
#> 75 0
#> 76 2
#> 77 1
#> 78 1
#> 79 7
#> 80 9
#> 81 0
#> 82 2
#> 83 6
#> 84 7
#> 85 8
#> 86 3
#> 87 9
#> 88 0
#> 89 4
#> 90 6
#> 91 7
#> 92 4
#> 93 6
#> 94 8
#> 95 0
#> 96 7
#> 97 8
#> 98 3
#> 99 1
```
#### Types and instances
Types, instances and classes are important for making decisions on how we will process data that is being read from the datasets. In this example, we want to know if an object is an instance of a certain class:
```
# get the class of the object
py_bi$type(train_dataset)
# is train_dataset a torchvision dataset class
py_bi$isinstance(train_dataset, torchvision$datasets$mnist$MNIST)
```
```
#> <class 'torchvision.datasets.mnist.MNIST'>
#> [1] TRUE
```
2\.1 PyTorch modules in `rTorch`
--------------------------------
### 2\.1\.1 torchvision
This is an example of using the `torchvision` module. With `torchvision` and its `dataset` set of function, we could download any of the popular datasets for machine learning made available by PyTorch. In this example, we will be downloading the training dataset of the **MNIST** handwritten digits. There are 60,000 images in the **training** set and 10,000 images in the **test** set. The images will download on the folder `./datasets,` or any other you want, which can be set with the parameter `root`.
```
library(rTorch)
transforms <- torchvision$transforms
# this is the folder where the datasets will be downloaded
local_folder <- './datasets/mnist_digits'
train_dataset = torchvision$datasets$MNIST(root = local_folder,
train = TRUE,
transform = transforms$ToTensor(),
download = TRUE)
train_dataset
```
```
#> Dataset MNIST
#> Number of datapoints: 60000
#> Root location: ./datasets/mnist_digits
#> Split: Train
#> StandardTransform
#> Transform: ToTensor()
```
You can do similarly for the `test` dataset if you set the flag `train = FALSE`. The `test` dataset has only 10,000 images.
```
test_dataset = torchvision$datasets$MNIST(root = local_folder,
train = FALSE,
transform = transforms$ToTensor())
test_dataset
```
```
#> Dataset MNIST
#> Number of datapoints: 10000
#> Root location: ./datasets/mnist_digits
#> Split: Test
#> StandardTransform
#> Transform: ToTensor()
```
### 2\.1\.2 numpy
`numpy` is automatically installed when `PyTorch` is. There is some interdependence between both. Anytime that we need to do some transformation that is not available in `PyTorch`, we will use `numpy`. Just keep in mind that `numpy` does not have support for *GPUs*; you will have to convert the numpy array to a torch tensor afterwards.
### 2\.1\.1 torchvision
This is an example of using the `torchvision` module. With `torchvision` and its `dataset` set of function, we could download any of the popular datasets for machine learning made available by PyTorch. In this example, we will be downloading the training dataset of the **MNIST** handwritten digits. There are 60,000 images in the **training** set and 10,000 images in the **test** set. The images will download on the folder `./datasets,` or any other you want, which can be set with the parameter `root`.
```
library(rTorch)
transforms <- torchvision$transforms
# this is the folder where the datasets will be downloaded
local_folder <- './datasets/mnist_digits'
train_dataset = torchvision$datasets$MNIST(root = local_folder,
train = TRUE,
transform = transforms$ToTensor(),
download = TRUE)
train_dataset
```
```
#> Dataset MNIST
#> Number of datapoints: 60000
#> Root location: ./datasets/mnist_digits
#> Split: Train
#> StandardTransform
#> Transform: ToTensor()
```
You can do similarly for the `test` dataset if you set the flag `train = FALSE`. The `test` dataset has only 10,000 images.
```
test_dataset = torchvision$datasets$MNIST(root = local_folder,
train = FALSE,
transform = transforms$ToTensor())
test_dataset
```
```
#> Dataset MNIST
#> Number of datapoints: 10000
#> Root location: ./datasets/mnist_digits
#> Split: Test
#> StandardTransform
#> Transform: ToTensor()
```
### 2\.1\.2 numpy
`numpy` is automatically installed when `PyTorch` is. There is some interdependence between both. Anytime that we need to do some transformation that is not available in `PyTorch`, we will use `numpy`. Just keep in mind that `numpy` does not have support for *GPUs*; you will have to convert the numpy array to a torch tensor afterwards.
2\.2 Common array operations
----------------------------
There are several operations that we could perform with `numpy` such creating arrays:
### Create an array
Create an array:
```
# do some array manipulations with NumPy
a <- np$array(c(1:4))
a
```
```
#> [1] 1 2 3 4
```
We could do this if we add instead a Python chunk like this:
```
{python}
import numpy as np
a = np.arange(1, 5)
a
```
```
import numpy as np
a = np.arange(1, 5)
a
```
```
#> array([1, 2, 3, 4])
```
Create an array of a desired shape:
```
np$reshape(np$arange(0, 9), c(3L, 3L))
```
```
#> [,1] [,2] [,3]
#> [1,] 0 1 2
#> [2,] 3 4 5
#> [3,] 6 7 8
```
Create an array by spelling out its components and `type`:
```
np$array(list(
list( 73, 67, 43),
list( 87, 134, 58),
list(102, 43, 37),
list( 73, 67, 43),
list( 91, 88, 64),
list(102, 43, 37),
list( 69, 96, 70),
list( 91, 88, 64),
list(102, 43, 37),
list( 69, 96, 70)
), dtype='float32')
```
```
#> [,1] [,2] [,3]
#> [1,] 73 67 43
#> [2,] 87 134 58
#> [3,] 102 43 37
#> [4,] 73 67 43
#> [5,] 91 88 64
#> [6,] 102 43 37
#> [7,] 69 96 70
#> [8,] 91 88 64
#> [9,] 102 43 37
#> [10,] 69 96 70
```
We will use the `train` and `test` datasets that we loaded with `torchvision`.
### Reshape an array
For the same `test` dataset that we loaded above from `MNIST` digits, we will show the image of the handwritten digit and its label or class. Before plotting the image, we need to:
1. Extract the image and label from the dataset
2. Convert the tensor to a numpy array
3. Reshape the tensor as a 2D array
4. Plot the digit and its label
```
rotate <- function(x) t(apply(x, 2, rev)) # function to rotate the matrix
# label for the image
label <- test_dataset[0][[2]]
label
# convert tensor to numpy array
.show_img <- test_dataset[0][[1]]$numpy()
dim(.show_img)
# reshape 3D array to 2D
show_img <- np$reshape(.show_img, c(28L, 28L))
dim(show_img)
```
```
#> [1] 7
#> [1] 1 28 28
#> [1] 28 28
```
We are simply using the `r-base` `image` function:
```
# show in gray shades and rotate
image(rotate(show_img), col = gray.colors(64))
title(label)
```
### Generate a random array in NumPy
```
# set the seed
np$random$seed(123L)
# generate a random array
x = np$random$rand(100L)
dim(x)
# calculate the y array
y = np$sin(x) * np$power(x, 3L) + 3L * x + np$random$rand(100L) * 0.8
class(y)
```
```
#> [1] 100
#> [1] "array"
```
From the classes, we can tell that the `numpy` arrays are automatically converted to `R` arrays. Let’s plot `x` vs `y`:
```
plot(x, y)
```
### Create an array
Create an array:
```
# do some array manipulations with NumPy
a <- np$array(c(1:4))
a
```
```
#> [1] 1 2 3 4
```
We could do this if we add instead a Python chunk like this:
```
{python}
import numpy as np
a = np.arange(1, 5)
a
```
```
import numpy as np
a = np.arange(1, 5)
a
```
```
#> array([1, 2, 3, 4])
```
Create an array of a desired shape:
```
np$reshape(np$arange(0, 9), c(3L, 3L))
```
```
#> [,1] [,2] [,3]
#> [1,] 0 1 2
#> [2,] 3 4 5
#> [3,] 6 7 8
```
Create an array by spelling out its components and `type`:
```
np$array(list(
list( 73, 67, 43),
list( 87, 134, 58),
list(102, 43, 37),
list( 73, 67, 43),
list( 91, 88, 64),
list(102, 43, 37),
list( 69, 96, 70),
list( 91, 88, 64),
list(102, 43, 37),
list( 69, 96, 70)
), dtype='float32')
```
```
#> [,1] [,2] [,3]
#> [1,] 73 67 43
#> [2,] 87 134 58
#> [3,] 102 43 37
#> [4,] 73 67 43
#> [5,] 91 88 64
#> [6,] 102 43 37
#> [7,] 69 96 70
#> [8,] 91 88 64
#> [9,] 102 43 37
#> [10,] 69 96 70
```
We will use the `train` and `test` datasets that we loaded with `torchvision`.
### Reshape an array
For the same `test` dataset that we loaded above from `MNIST` digits, we will show the image of the handwritten digit and its label or class. Before plotting the image, we need to:
1. Extract the image and label from the dataset
2. Convert the tensor to a numpy array
3. Reshape the tensor as a 2D array
4. Plot the digit and its label
```
rotate <- function(x) t(apply(x, 2, rev)) # function to rotate the matrix
# label for the image
label <- test_dataset[0][[2]]
label
# convert tensor to numpy array
.show_img <- test_dataset[0][[1]]$numpy()
dim(.show_img)
# reshape 3D array to 2D
show_img <- np$reshape(.show_img, c(28L, 28L))
dim(show_img)
```
```
#> [1] 7
#> [1] 1 28 28
#> [1] 28 28
```
We are simply using the `r-base` `image` function:
```
# show in gray shades and rotate
image(rotate(show_img), col = gray.colors(64))
title(label)
```
### Generate a random array in NumPy
```
# set the seed
np$random$seed(123L)
# generate a random array
x = np$random$rand(100L)
dim(x)
# calculate the y array
y = np$sin(x) * np$power(x, 3L) + 3L * x + np$random$rand(100L) * 0.8
class(y)
```
```
#> [1] 100
#> [1] "array"
```
From the classes, we can tell that the `numpy` arrays are automatically converted to `R` arrays. Let’s plot `x` vs `y`:
```
plot(x, y)
```
2\.3 Common tensor operations
-----------------------------
### Generate random tensors
The same operation can be performed with pure torch tensors:. This is very similar to the example above. The only difference is that this time we are using tensors and not `numpy` arrays.
```
library(rTorch)
invisible(torch$manual_seed(123L))
x <- torch$rand(100L) # use torch$randn(100L): positive and negative numbers
y <- torch$sin(x) * torch$pow(x, 3L) + 3L * x + torch$rand(100L) * 0.8
class(x)
class(y)
```
```
#> [1] "torch.Tensor" "torch._C._TensorBase" "python.builtin.object"
#> [1] "torch.Tensor" "torch._C._TensorBase" "python.builtin.object"
```
Since the classes are `torch` tensors, to plot them in R, they first need to be converted to numpy, and then to R:
```
plot(x$numpy(), y$numpy())
```
### `numpy` array to PyTorch tensor
Converting a `numpy` array to a PyTorch tensor is a very common operation that I have seen in examples using PyTorch. Creating first the array in `numpy`. and then convert it to a `torch` tensor.
```
# input array
x = np$array(rbind(
c(0,0,1),
c(0,1,1),
c(1,0,1),
c(1,1,1)))
# the numpy array
x
```
```
#> [,1] [,2] [,3]
#> [1,] 0 0 1
#> [2,] 0 1 1
#> [3,] 1 0 1
#> [4,] 1 1 1
```
This is another common operation that will find in the PyTorch tutorials: converting a `numpy` array from a certain type to a tensor of the same type:
```
# convert the numpy array to a float type
Xn <- np$float32(x)
# convert the numpy array to a float tensor
Xt <- torch$FloatTensor(Xn)
Xt
```
```
#> tensor([[0., 0., 1.],
#> [0., 1., 1.],
#> [1., 0., 1.],
#> [1., 1., 1.]])
```
2\.4 Python built\-in functions
-------------------------------
To access the Python built\-in functions we make use of the package `reticulate` and the function `import_builtins()`.
Here is a subset of the built-in functions and operators exposed by the R package `reticulate`. I am using the R function `grep()` to discard those whose names carry the keywords `Error`, `Warning`, or `Exit`.
```
py_bi <- reticulate::import_builtins()
grep("Error|Warning|Exit", names(py_bi), value = TRUE, invert = TRUE,
perl = TRUE)
```
```
#> [1] "abs" "all" "any"
#> [4] "ascii" "BaseException" "bin"
#> [7] "bool" "breakpoint" "bytearray"
#> [10] "bytes" "callable" "chr"
#> [13] "classmethod" "compile" "complex"
#> [16] "copyright" "credits" "delattr"
#> [19] "dict" "dir" "divmod"
#> [22] "Ellipsis" "enumerate" "eval"
#> [25] "Exception" "exec" "exit"
#> [28] "False" "filter" "float"
#> [31] "format" "frozenset" "getattr"
#> [34] "globals" "hasattr" "hash"
#> [37] "help" "hex" "id"
#> [40] "input" "int" "isinstance"
#> [43] "issubclass" "iter" "KeyboardInterrupt"
#> [46] "len" "license" "list"
#> [49] "locals" "map" "max"
#> [52] "memoryview" "min" "next"
#> [55] "None" "NotImplemented" "object"
#> [58] "oct" "open" "ord"
#> [61] "pow" "print" "property"
#> [64] "quit" "range" "repr"
#> [67] "reversed" "round" "set"
#> [70] "setattr" "slice" "sorted"
#> [73] "staticmethod" "StopAsyncIteration" "StopIteration"
#> [76] "str" "sum" "super"
#> [79] "True" "tuple" "type"
#> [82] "vars" "zip"
```
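Any of these built\-ins can be called directly from R through `py_bi`. A minimal sketch (the values are arbitrary examples, not from the datasets above):
```
py_bi$abs(-5L) # Python's abs() called from R
py_bi$sum(list(1L, 2L, 3L)) # Python's sum() over an R list converted by reticulate
```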
#### Length of a dataset
Sometimes, we will need the Python `len` function to find out the length of an object:
```
py_bi$len(train_dataset)
py_bi$len(test_dataset)
```
```
#> [1] 60000
#> [1] 10000
```
#### Iterators
Iterators are used a lot in dataset operations when running a neural network. In this example we will iterate through only 100 of the 60,000 elements of the MNIST `train` dataset. The goal is to print the “label” or “class” for the digits we are reading. The digits themselves are not shown here; they are stored in tensors.
```
# iterate through training dataset
enum_train_dataset <- py_bi$enumerate(train_dataset)
cat(sprintf("%8s %8s \n", "index", "label"))
for (i in 1:py_bi$len(train_dataset)) {
obj <- reticulate::iter_next(enum_train_dataset)
idx <- obj[[1]] # index number
cat(sprintf("%8d %5d \n", idx, obj[[2]][[2]]))
if (i >= 100) break # print only 100 labels
}
#> index label
#> 0 5
#> 1 0
#> 2 4
#> 3 1
#> 4 9
#> 5 2
#> 6 1
#> 7 3
#> 8 1
#> 9 4
#> 10 3
#> 11 5
#> 12 3
#> 13 6
#> 14 1
#> 15 7
#> 16 2
#> 17 8
#> 18 6
#> 19 9
#> 20 4
#> 21 0
#> 22 9
#> 23 1
#> 24 1
#> 25 2
#> 26 4
#> 27 3
#> 28 2
#> 29 7
#> 30 3
#> 31 8
#> 32 6
#> 33 9
#> 34 0
#> 35 5
#> 36 6
#> 37 0
#> 38 7
#> 39 6
#> 40 1
#> 41 8
#> 42 7
#> 43 9
#> 44 3
#> 45 9
#> 46 8
#> 47 5
#> 48 9
#> 49 3
#> 50 3
#> 51 0
#> 52 7
#> 53 4
#> 54 9
#> 55 8
#> 56 0
#> 57 9
#> 58 4
#> 59 1
#> 60 4
#> 61 4
#> 62 6
#> 63 0
#> 64 4
#> 65 5
#> 66 6
#> 67 1
#> 68 0
#> 69 0
#> 70 1
#> 71 7
#> 72 1
#> 73 6
#> 74 3
#> 75 0
#> 76 2
#> 77 1
#> 78 1
#> 79 7
#> 80 9
#> 81 0
#> 82 2
#> 83 6
#> 84 7
#> 85 8
#> 86 3
#> 87 9
#> 88 0
#> 89 4
#> 90 6
#> 91 7
#> 92 4
#> 93 6
#> 94 8
#> 95 0
#> 96 7
#> 97 8
#> 98 3
#> 99 1
```
#### Types and instances
Types, instances, and classes are important for deciding how to process the data being read from the datasets. In this example, we want to know if an object is an instance of a certain class:
```
# get the class of the object
py_bi$type(train_dataset)
# is train_dataset a torchvision dataset class
py_bi$isinstance(train_dataset, torchvision$datasets$mnist$MNIST)
```
```
#> <class 'torchvision.datasets.mnist.MNIST'>
#> [1] TRUE
```
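Such checks can be used to branch the processing logic. A minimal sketch, reusing the objects above:
```
# process the object only if it is an MNIST dataset
if (py_bi$isinstance(train_dataset, torchvision$datasets$mnist$MNIST)) {
  cat("MNIST dataset with", py_bi$len(train_dataset), "samples\n")
}
```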
Chapter 3 rTorch vs PyTorch
===========================
*Last update: Sun Oct 25 13:00:41 2020 \-0500 (265c0b3c1\)*
3\.1 What’s different
---------------------
This chapter will explain the main differences between `PyTorch` and `rTorch`. Most things that work in `PyTorch` can be written directly in `rTorch`, but we need to be aware of some minor differences. Here is a review of the existing methods.
Let’s start by loading `rTorch`:
```
library(rTorch)
```
3\.2 Calling objects from PyTorch
---------------------------------
We use the dollar sign or `$` to call a class, function or method from the `rTorch` modules. In this case, from the `torch` module:
```
torch$tensor(c(1, 2, 3))
```
```
#> tensor([1., 2., 3.])
```
In Python, what we do is using the **dot** to separate the sub\-members of an object:
```
import torch
torch.tensor([1, 2, 3])
```
```
#> tensor([1, 2, 3])
```
3\.3 Call functions from `torch`
--------------------------------
```
library(rTorch)
# these are the equivalents of the Python import module
nn <- torch$nn
transforms <- torchvision$transforms
dsets <- torchvision$datasets
torch$tensor(c(1, 2, 3))
```
```
#> tensor([1., 2., 3.])
```
The code above is equivalent to writing this code in Python:
```
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
torch.tensor([1, 2, 3])
```
```
#> tensor([1, 2, 3])
```
Then we can proceed to extract classes, methods and functions from the `nn`, `transforms`, and `dsets` objects. In this example we use the module `torchvision$datasets` and the function `transforms$ToTensor()`. For example, the `train_dataset` of MNIST:
```
local_folder <- './datasets/mnist_digits'
train_dataset = torchvision$datasets$MNIST(root = local_folder,
train = TRUE,
transform = transforms$ToTensor(),
download = TRUE)
train_dataset
```
```
#> Dataset MNIST
#> Number of datapoints: 60000
#> Root location: ./datasets/mnist_digits
#> Split: Train
#> StandardTransform
#> Transform: ToTensor()
```
3\.4 Python objects
-------------------
Sometimes we are interested in knowing the internal components of a class. In that case, we use the `reticulate` function `py_list_attributes()`.
In this example, we want to show the attributes of `train_dataset`:
```
reticulate::py_list_attributes(train_dataset)
```
```
#> [1] "__add__" "__class__" "__delattr__"
#> [4] "__dict__" "__dir__" "__doc__"
#> [7] "__eq__" "__format__" "__ge__"
#> [10] "__getattribute__" "__getitem__" "__gt__"
#> [13] "__hash__" "__init__" "__init_subclass__"
#> [16] "__le__" "__len__" "__lt__"
#> [19] "__module__" "__ne__" "__new__"
#> [22] "__reduce__" "__reduce_ex__" "__repr__"
#> [25] "__setattr__" "__sizeof__" "__str__"
#> [28] "__subclasshook__" "__weakref__" "_check_exists"
#> [31] "_format_transform_repr" "_repr_indent" "class_to_idx"
#> [34] "classes" "data" "download"
#> [37] "extra_repr" "processed_folder" "raw_folder"
#> [40] "resources" "root" "target_transform"
#> [43] "targets" "test_data" "test_file"
#> [46] "test_labels" "train" "train_data"
#> [49] "train_labels" "training_file" "transform"
#> [52] "transforms"
```
Knowing the internal methods of a class could be useful when we want to refer to a specific property of such class. For example, from the list above, we know that the object `train_dataset` has an attribute `__len__`. We can call it like this:
```
train_dataset$`__len__`()
```
```
#> [1] 60000
```
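The attribute list also shows `__getitem__`, which can be called the same way to fetch a single sample. A minimal sketch, assuming the dataset defined above:
```
# fetch the first sample: a list holding the image tensor and its label
sample <- train_dataset$`__getitem__`(0L)
sample[[2]] # the label of the first digit
```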
3\.5 Iterating through datasets
-------------------------------
### 3\.5\.1 Enumeration
Given the following training dataset `x_train`, we want to find the number of elements in the tensor. We start with a `numpy` array, which we then convert to a tensor with the PyTorch function `from_numpy()`:
```
x_train_r <- array(c(3.3, 4.4, 5.5, 6.71, 6.93, 4.168,
9.779, 6.182, 7.59, 2.167, 7.042,
10.791, 5.313, 7.997, 3.1), dim = c(15,1))
x_train_np <- r_to_py(x_train_r)
x_train_ <- torch$from_numpy(x_train_np) # convert to tensor
x_train <- x_train_$type(torch$FloatTensor) # make it a FloatTensor
print(x_train$dtype)
print(x_train)
```
```
#> torch.float32
#> tensor([[ 3.3000],
#> [ 4.4000],
#> [ 5.5000],
#> [ 6.7100],
#> [ 6.9300],
#> [ 4.1680],
#> [ 9.7790],
#> [ 6.1820],
#> [ 7.5900],
#> [ 2.1670],
#> [ 7.0420],
#> [10.7910],
#> [ 5.3130],
#> [ 7.9970],
#> [ 3.1000]])
```
`length` is similar to `nelement` for number of elements:
```
length(x_train)
x_train$nelement() # number of elements in the tensor
```
```
#> [1] 15
#> [1] 15
```
### 3\.5\.2 `enumerate` and `iterate`
```
py = import_builtins()
enum_x_train = py$enumerate(x_train)
enum_x_train
py$len(x_train)
```
```
#> <enumerate>
#> [1] 15
```
If we directly use `iterate` over the `enum_x_train` object, we get an R list with the index and the value of the `1D` tensor:
```
xit = iterate(enum_x_train, simplify = TRUE)
xit
```
```
#> [[1]]
#> [[1]][[1]]
#> [1] 0
#>
#> [[1]][[2]]
#> tensor([3.3000])
#>
#>
#> [[2]]
#> [[2]][[1]]
#> [1] 1
#>
#> [[2]][[2]]
#> tensor([4.4000])
#>
#>
#> [[3]]
#> [[3]][[1]]
#> [1] 2
#>
#> [[3]][[2]]
#> tensor([5.5000])
#>
#>
#> [[4]]
#> [[4]][[1]]
#> [1] 3
#>
#> [[4]][[2]]
#> tensor([6.7100])
#>
#>
#> [[5]]
#> [[5]][[1]]
#> [1] 4
#>
#> [[5]][[2]]
#> tensor([6.9300])
#>
#>
#> [[6]]
#> [[6]][[1]]
#> [1] 5
#>
#> [[6]][[2]]
#> tensor([4.1680])
#>
#>
#> [[7]]
#> [[7]][[1]]
#> [1] 6
#>
#> [[7]][[2]]
#> tensor([9.7790])
#>
#>
#> [[8]]
#> [[8]][[1]]
#> [1] 7
#>
#> [[8]][[2]]
#> tensor([6.1820])
#>
#>
#> [[9]]
#> [[9]][[1]]
#> [1] 8
#>
#> [[9]][[2]]
#> tensor([7.5900])
#>
#>
#> [[10]]
#> [[10]][[1]]
#> [1] 9
#>
#> [[10]][[2]]
#> tensor([2.1670])
#>
#>
#> [[11]]
#> [[11]][[1]]
#> [1] 10
#>
#> [[11]][[2]]
#> tensor([7.0420])
#>
#>
#> [[12]]
#> [[12]][[1]]
#> [1] 11
#>
#> [[12]][[2]]
#> tensor([10.7910])
#>
#>
#> [[13]]
#> [[13]][[1]]
#> [1] 12
#>
#> [[13]][[2]]
#> tensor([5.3130])
#>
#>
#> [[14]]
#> [[14]][[1]]
#> [1] 13
#>
#> [[14]][[2]]
#> tensor([7.9970])
#>
#>
#> [[15]]
#> [[15]][[1]]
#> [1] 14
#>
#> [[15]][[2]]
#> tensor([3.1000])
```
### 3\.5\.3 `for-loop` for iteration
Another way of iterating through a dataset that you will see a lot in the PyTorch tutorials is a `loop` through the length of the dataset. In this case, `x_train`. We are using `cat()` for the index (an integer), and `print()` for the tensor, since `cat` doesn’t know how to deal with tensors:
```
# reset the iterator
enum_x_train = py$enumerate(x_train)
for (i in 1:py$len(x_train)) {
obj <- iter_next(enum_x_train) # next item
cat(obj[[1]], "\t") # 1st part or index
print(obj[[2]]) # 2nd part or tensor
}
```
```
#> 0 tensor([3.3000])
#> 1 tensor([4.4000])
#> 2 tensor([5.5000])
#> 3 tensor([6.7100])
#> 4 tensor([6.9300])
#> 5 tensor([4.1680])
#> 6 tensor([9.7790])
#> 7 tensor([6.1820])
#> 8 tensor([7.5900])
#> 9 tensor([2.1670])
#> 10 tensor([7.0420])
#> 11 tensor([10.7910])
#> 12 tensor([5.3130])
#> 13 tensor([7.9970])
#> 14 tensor([3.1000])
```
Similarly, if we want the scalar values but not as tensor, then we will need to use `item()`.
```
# reset the iterator
enum_x_train = py$enumerate(x_train)
for (i in 1:py$len(x_train)) {
obj <- iter_next(enum_x_train) # next item
cat(obj[[1]], "\t") # 1st part or index
print(obj[[2]]$item()) # 2nd part or tensor
}
```
```
#> 0 [1] 3.3
#> 1 [1] 4.4
#> 2 [1] 5.5
#> 3 [1] 6.71
#> 4 [1] 6.93
#> 5 [1] 4.17
#> 6 [1] 9.78
#> 7 [1] 6.18
#> 8 [1] 7.59
#> 9 [1] 2.17
#> 10 [1] 7.04
#> 11 [1] 10.8
#> 12 [1] 5.31
#> 13 [1] 8
#> 14 [1] 3.1
```
> We will frequently find this kind of iterator when we work with a dataset loaded by `torchvision`. There are several different ways to iterate through these objects, as the sketch below shows.
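For instance, `iterate()` also accepts a function, which lets us collect all the scalar values in one call. A minimal sketch, assuming the `x_train` tensor from above:
```
# apply a function to each element while iterating
enum_x_train <- py$enumerate(x_train)
vals <- iterate(enum_x_train, function(obj) obj[[2]]$item())
unlist(vals) # the 15 scalar values as an R vector
```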
3\.6 Zero gradient
------------------
Zeroing the gradient was one of the most difficult operations to implement in R: we have to pay attention to the content of the objects carrying the **weights** and **biases**. This happens when the algorithm written in **PyTorch** is not immediately translatable to **rTorch**, as can be appreciated in this example.
> We are using the same seed in the PyTorch and rTorch versions, so, we could compare the results.
### 3\.6\.1 Code version in Python
```
import numpy as np
import torch
torch.manual_seed(0) # reproducible
# Input (temp, rainfall, humidity)
```
```
#> <torch._C.Generator object at 0x7f42c604e250>
```
```
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
# Targets (apples, oranges)
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119]], dtype='float32')
# Convert inputs and targets to tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
# random weights and biases
w = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True)
# function for the model
def model(x):
wt = w.t()
mm = x @ w.t()
return x @ w.t() + b # @ represents matrix multiplication in PyTorch
# MSE loss function
def mse(t1, t2):
diff = t1 - t2
return torch.sum(diff * diff) / diff.numel()
# Running all together
# Train for 100 epochs
for i in range(100):
preds = model(inputs)
loss = mse(preds, targets)
loss.backward()
with torch.no_grad():
w -= w.grad * 0.00001
b -= b.grad * 0.00001
w_gz = w.grad.zero_()
b_gz = b.grad.zero_()
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print("Loss: ", loss)
# predictions
```
```
#> Loss: tensor(1270.1233, grad_fn=<DivBackward0>)
```
```
print("\nPredictions:")
```
```
#>
#> Predictions:
```
```
preds
# Targets
```
```
#> tensor([[ 69.3122, 80.2639],
#> [ 73.7528, 97.2381],
#> [118.3933, 124.7628],
#> [ 89.6111, 93.0286],
#> [ 47.3014, 80.6467]], grad_fn=<AddBackward0>)
```
```
print("\nTargets:")
```
```
#>
#> Targets:
```
```
targets
```
```
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])
```
### 3\.6\.2 Code version in R
```
library(rTorch)
torch$manual_seed(0)
device = torch$device('cpu')
# Input (temp, rainfall, humidity)
inputs = np$array(list(list(73, 67, 43),
list(91, 88, 64),
list(87, 134, 58),
list(102, 43, 37),
list(69, 96, 70)), dtype='float32')
# Targets (apples, oranges)
targets = np$array(list(list(56, 70),
list(81, 101),
list(119, 133),
list(22, 37),
list(103, 119)), dtype='float32')
# Convert inputs and targets to tensors
inputs = torch$from_numpy(inputs)
targets = torch$from_numpy(targets)
# random numbers for weights and biases. Then convert to double()
torch$set_default_dtype(torch$float64)
w = torch$randn(2L, 3L, requires_grad=TRUE) #$double()
b = torch$randn(2L, requires_grad=TRUE) #$double()
model <- function(x) {
wt <- w$t()
return(torch$add(torch$mm(x, wt), b))
}
# MSE loss
mse = function(t1, t2) {
diff <- torch$sub(t1, t2)
mul <- torch$sum(torch$mul(diff, diff))
return(torch$div(mul, diff$numel()))
}
# Running all together
# Adjust weights and reset gradients
for (i in 1:100) {
preds = model(inputs)
loss = mse(preds, targets)
loss$backward()
with(torch$no_grad(), {
w$data <- torch$sub(w$data, torch$mul(w$grad, torch$scalar_tensor(1e-5)))
b$data <- torch$sub(b$data, torch$mul(b$grad, torch$scalar_tensor(1e-5)))
w$grad$zero_()
b$grad$zero_()
})
}
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
cat("Loss: "); print(loss)
# predictions
cat("\nPredictions:\n")
preds
# Targets
cat("\nTargets:\n")
targets
```
```
#> <torch._C.Generator>
#> Loss: tensor(1270.1237, grad_fn=<DivBackward0>)
#>
#> Predictions:
#> tensor([[ 69.3122, 80.2639],
#> [ 73.7528, 97.2381],
#> [118.3933, 124.7628],
#> [ 89.6111, 93.0286],
#> [ 47.3013, 80.6467]], grad_fn=<AddBackward0>)
#>
#> Targets:
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])
```
Notice that the weight update — subtracting the gradient of the weights (\\(\\nabla w\\)) times the **learning rate** \\(\\alpha\\) from \\(w\\) — is:
\\\[w \\leftarrow w \- \\alpha \\, \\nabla w\\]
In Python, this is very straightforward and clean code:
```
w -= w.grad * 1e-5
```
In R, without generics, it looks a little more convoluted:
```
w$data <- torch$sub(w$data, torch$mul(w$grad, torch$scalar_tensor(1e-5)))
```
3\.7 R generic functions
------------------------
This is why we simplified these common operations using R generic functions. When we use the generic methods from **rTorch**, the operation looks much neater.
```
w$data <- w$data - w$grad * 1e-5
```
The following two expressions are equivalent, the first being the long, natural way of doing it in **PyTorch**. The second uses the R generics for subtraction, multiplication, and scalar conversion.
```
param$data <- torch$sub(param$data,
torch$mul(param$grad$float(),
torch$scalar_tensor(learning_rate)))
```
```
param$data <- param$data - param$grad * learning_rate
```
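A quick way to confirm that the generics are in place is to combine them on a small tensor. A minimal sketch:
```
t <- torch$ones(2L, 3L)
t - t * 0.5 # same as torch$sub(t, torch$mul(t, torch$scalar_tensor(0.5)))
```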
Chapter 4 Converting tensors
============================
*Last update: Sun Oct 25 13:00:09 2020 \-0500 (f5e8a1973\)*
```
library(rTorch)
```
4\.1 Tensor to `numpy` array
----------------------------
This is a frequent operation. I have found that this is necessary when:
* a `numpy` function is not implemented in PyTorch
* We need to convert a tensor to R
* Perform a boolean operation that is not directly available in PyTorch
```
x <- torch$arange(1, 10)
y <- x^2
```
If we attempt to plot these two tensors we get an error:
```
plot(x, y)
```
```
#> Error in as.double(x): cannot coerce type 'environment' to vector of type 'double'
```
They need to be converted to `numpy`, and then to R (which happens in the background):
```
plot(x$numpy(), y$numpy())
```
4\.2 `numpy` array to tensor
----------------------------
* Explain how to transform a tensor back and forth to `numpy`.
* Why is this important?
* In what cases is this necessary?
```
p <- np$arange(1, 10)
class(p)
```
```
#> [1] "array"
```
```
(pt <- torch$as_tensor(p))
```
```
#> tensor([1., 2., 3., 4., 5., 6., 7., 8., 9.], dtype=torch.float64)
```
```
class(pt)
```
```
#> [1] "torch.Tensor" "torch._C._TensorBase" "python.builtin.object"
```
### 4\.2\.1 `numpy` array to `R`
This is mainly required for these reasons (a short sketch follows the list):
1. Create a data structure in R
2. Plot using `r-base` or `ggplot2`
3. Perform an analysis on parts of a tensor
4. Use R statistical functions that are not available in PyTorch
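A minimal sketch of the last two use cases, assuming a 1D tensor `t` (the statistical functions below are standard R, not rTorch):
```
t <- torch$rand(100L) # a 1D tensor
r_vec <- as.vector(t$numpy()) # tensor -> numpy -> R vector
mean(r_vec) # R statistical functions now apply
quantile(r_vec) # e.g., quantiles
hist(r_vec) # or an r-base plot
```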
4\.3 R objects to `numpy` objects
---------------------------------
Given the R matrix \\(m\\):
```
m <- matrix(seq(1,10), nrow = 2)
m
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 3 5 7 9
#> [2,] 2 4 6 8 10
```
We explicitly convert it to a `numpy` object with the function `r_to_py()`:
```
(mp <- r_to_py(m))
```
```
#> [[ 1 3 5 7 9]
#> [ 2 4 6 8 10]]
```
```
class(mp)
```
```
#> [1] "numpy.ndarray" "python.builtin.object"
```
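To close the loop, the `numpy` object can continue on to a tensor and back. A minimal sketch, assuming the `mp` object above:
```
mt <- torch$as_tensor(mp) # numpy array to tensor
back <- mt$numpy() # tensor back to numpy, auto-converted to R
identical(dim(back), dim(m)) # the shape survives the round trip
```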
Chapter 5 Tensors
=================
*Last update: Sun Oct 25 13:00:41 2020 \-0500 (265c0b3c1\)*
In this chapter, we describe the most important PyTorch methods.
```
library(rTorch)
```
5\.1 Tensor data types
----------------------
```
# Default data type
torch$tensor(list(1.2, 3))$dtype # default for floating point is torch.float32
```
```
#> torch.float32
```
```
# change default data type to float64
torch$set_default_dtype(torch$float64)
torch$tensor(list(1.2, 3))$dtype # a new floating point tensor
```
```
#> torch.float64
```
### 5\.1\.1 Major tensor types
There are five major types of tensors in PyTorch: byte, float, double, long, and boolean.
```
library(rTorch)
byte <- torch$ByteTensor(3L, 3L)
float <- torch$FloatTensor(3L, 3L)
double <- torch$DoubleTensor(3L, 3L)
long <- torch$LongTensor(3L, 3L)
boolean <- torch$BoolTensor(5L, 5L)
```
```
message("byte tensor")
#> byte tensor
byte
#> tensor([[0, 0, 0],
#> [0, 0, 0],
#> [0, 0, 0]], dtype=torch.uint8)
```
```
message("float tensor")
#> float tensor
float
#> tensor([[0., 0., 0.],
#> [0., 0., 0.],
#> [0., 0., 0.]], dtype=torch.float32)
```
```
message("double")
#> double
double
#> tensor([[6.9461e-310, 6.9461e-310, 4.9407e-324],
#> [4.6489e-310, 0.0000e+00, 0.0000e+00],
#> [ 0.0000e+00, 0.0000e+00, 9.5490e-313]])
```
```
message("long")
#> long
long
#> tensor([[0, 0, 0],
#> [0, 0, 0],
#> [0, 0, 0]])
```
```
message("boolean")
#> boolean
boolean
#> tensor([[False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False]])
```
### 5\.1\.2 Example: A 4D tensor
A 4D tensor, like the one in the MNIST handwritten digits recognition dataset:
```
mnist_4d <- torch$FloatTensor(60000L, 3L, 28L, 28L)
```
```
message("size")
#> size
mnist_4d$size()
#> torch.Size([60000, 3, 28, 28])
message("length")
#> length
length(mnist_4d)
#> [1] 141120000
message("shape, like in numpy")
#> shape, like in numpy
mnist_4d$shape
#> torch.Size([60000, 3, 28, 28])
message("number of elements")
#> number of elements
mnist_4d$numel()
#> [1] 141120000
```
### 5\.1\.3 Example: A 3D tensor
Given a 3D tensor:
```
ft3d <- torch$FloatTensor(4L, 3L, 2L)
ft3d
```
```
#> tensor([[[1.1390e+12, 3.0700e-41],
#> [1.4555e+12, 3.0700e-41],
#> [1.1344e+12, 3.0700e-41]],
#>
#> [[4.7256e+10, 3.0700e-41],
#> [4.7258e+10, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41]],
#>
#> [[1.0075e+12, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41]],
#>
#> [[1.0075e+12, 3.0700e-41],
#> [4.7259e+10, 3.0700e-41],
#> [4.7263e+10, 3.0700e-41]]], dtype=torch.float32)
```
```
ft3d$size()
#> torch.Size([4, 3, 2])
length(ft3d)
#> [1] 24
ft3d$shape
#> torch.Size([4, 3, 2])
ft3d$numel # without parentheses this returns the method itself, not the element count
#> <built-in method numel of Tensor>
```
5\.2 Arithmetic of tensors
--------------------------
### 5\.2\.1 Add tensors
```
# add a scalar to a tensor
# 3x5 matrix uniformly distributed between 0 and 1
mat0 <- torch$FloatTensor(3L, 5L)$uniform_(0L, 1L)
mat0 + 0.1
```
```
#> tensor([[0.9645, 0.6238, 0.9326, 0.3023, 0.1448],
#> [0.2610, 0.1987, 0.5089, 0.9776, 0.5261],
#> [0.2727, 0.5670, 0.8338, 0.4297, 0.7935]], dtype=torch.float32)
```
### 5\.2\.2 Add tensor elements
```
# fill a 3x5 matrix with 0.1
mat1 <- torch$FloatTensor(3L, 5L)$uniform_(0.1, 0.1)
print(mat1)
#> tensor([[0.1000, 0.1000, 0.1000, 0.1000, 0.1000],
#> [0.1000, 0.1000, 0.1000, 0.1000, 0.1000],
#> [0.1000, 0.1000, 0.1000, 0.1000, 0.1000]], dtype=torch.float32)
# a vector with all ones
mat2 <- torch$FloatTensor(5L)$uniform_(1, 1)
print(mat2)
#> tensor([1., 1., 1., 1., 1.], dtype=torch.float32)
# add element (1,1) to another tensor
mat1[1, 1] + mat2
#> tensor([1.1000, 1.1000, 1.1000, 1.1000, 1.1000], dtype=torch.float32)
```
Add two tensors using the function `add()`:
```
# PyTorch add two tensors
x = torch$rand(5L, 4L)
y = torch$rand(5L, 4L)
print(x$add(y))
```
```
#> tensor([[0.4604, 0.8114, 0.9630, 0.8070],
#> [0.6829, 0.4612, 0.1546, 1.1180],
#> [0.3134, 0.9399, 1.1217, 1.2846],
#> [1.9212, 1.3897, 0.5217, 0.3508],
#> [0.5801, 1.1733, 0.6494, 0.6771]])
```
Add two tensors using the generic `+`:
```
print(x + y)
```
```
#> tensor([[0.4604, 0.8114, 0.9630, 0.8070],
#> [0.6829, 0.4612, 0.1546, 1.1180],
#> [0.3134, 0.9399, 1.1217, 1.2846],
#> [1.9212, 1.3897, 0.5217, 0.3508],
#> [0.5801, 1.1733, 0.6494, 0.6771]])
```
### 5\.2\.3 Multiply a tensor by a scalar
```
# Multiply tensor by scalar
tensor = torch$ones(4L, dtype=torch$float64)
scalar = np$float64(4.321)
print(scalar)
print(torch$scalar_tensor(scalar))
```
```
#> [1] 4.32
#> tensor(4.3210)
```
> Notice that we used a NumPy function to create the scalar object `np$float64()`.
Multiply two tensors using the function `mul`:
```
(prod = torch$mul(tensor, torch$scalar_tensor(scalar)))
```
```
#> tensor([4.3210, 4.3210, 4.3210, 4.3210])
```
Short version using R generics:
```
(prod = tensor * scalar)
```
```
#> tensor([4.3210, 4.3210, 4.3210, 4.3210])
```
5\.3 NumPy and PyTorch
----------------------
`numpy` has been made available as a module in `rTorch`, which means that as soon as rTorch is loaded, you also get all the `numpy` functions available to you. We can call functions from `numpy` referring to it as `np$_a_function`. Examples:
```
# a 2D numpy array
syn0 <- np$random$rand(3L, 5L)
print(syn0)
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 0.303 0.475 0.00956 0.812 0.210
#> [2,] 0.546 0.607 0.19421 0.989 0.276
#> [3,] 0.240 0.158 0.53997 0.718 0.849
```
```
# numpy arrays of zeros
syn1 <- np$zeros(c(5L, 10L))
print(syn1)
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 0 0 0 0 0 0 0 0 0 0
#> [2,] 0 0 0 0 0 0 0 0 0 0
#> [3,] 0 0 0 0 0 0 0 0 0 0
#> [4,] 0 0 0 0 0 0 0 0 0 0
#> [5,] 0 0 0 0 0 0 0 0 0 0
```
```
# add a scalar to a numpy array
syn1 = syn1 + 0.1
print(syn1)
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [2,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [3,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [4,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [5,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
```
And the dot product of both:
```
np$dot(syn0, syn1)
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 0.181 0.181 0.181 0.181 0.181 0.181 0.181 0.181 0.181 0.181
#> [2,] 0.261 0.261 0.261 0.261 0.261 0.261 0.261 0.261 0.261 0.261
#> [3,] 0.250 0.250 0.250 0.250 0.250 0.250 0.250 0.250 0.250 0.250
```
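The same product can be computed on the tensor side. A minimal sketch, converting the two arrays above:
```
t0 <- torch$as_tensor(syn0)
t1 <- torch$as_tensor(syn1)
torch$mm(t0, t1)$shape # a 3x10 result, matching np$dot() above
```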
### 5\.3\.1 Python tuples and R vectors
In `numpy` the shape of a multidimensional array needs to be defined using a `tuple`. In R we do it with a `vector` instead; there are no tuples in R.
In Python, we use a tuple, `(5, 5)` to indicate the shape of the array:
```
import numpy as np
print(np.ones((5, 5)))
```
```
#> [[1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]]
```
In R, we use a vector `c(5L, 5L)`. The `L` indicates an integer.
```
l1 <- np$ones(c(5L, 5L))
print(l1)
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 1 1 1 1
#> [2,] 1 1 1 1 1
#> [3,] 1 1 1 1 1
#> [4,] 1 1 1 1 1
#> [5,] 1 1 1 1 1
```
### 5\.3\.2 A numpy array from R vectors
For this matrix, or 2D tensor, we use three R vectors:
```
X <- np$array(rbind(c(1,2,3), c(4,5,6), c(7,8,9)))
print(X)
```
```
#> [,1] [,2] [,3]
#> [1,] 1 2 3
#> [2,] 4 5 6
#> [3,] 7 8 9
```
And we could transpose the array using `numpy` as well:
```
np$transpose(X)
```
```
#> [,1] [,2] [,3]
#> [1,] 1 4 7
#> [2,] 2 5 8
#> [3,] 3 6 9
```
### 5\.3\.3 numpy arrays to tensors
```
a = np$array(list(1, 2, 3)) # a numpy array
t = torch$as_tensor(a) # convert it to tensor
print(t)
```
```
#> tensor([1., 2., 3.])
```
### 5\.3\.4 Create and fill a tensor
We can create the tensor directly from R using `tensor()`:
```
torch$tensor(list( 1, 2, 3)) # create a tensor directly from an R list
t[1L]$fill_(-1) # fill the first element of `t` (from above) with -1
print(a) # `t` was built from `a` with as_tensor(), so the change shows up in `a`
```
```
#> tensor([1., 2., 3.])
#> tensor(-1.)
#> [1] -1 2 3
```
### 5\.3\.5 Tensor to array, and vice versa
This is a very common operation in machine learning:
```
# convert tensor to a numpy array
a = torch$rand(5L, 4L)
b = a$numpy()
print(b)
```
```
#> [,1] [,2] [,3] [,4]
#> [1,] 0.5596 0.1791 0.0149 0.568
#> [2,] 0.0946 0.0738 0.9916 0.685
#> [3,] 0.4065 0.1239 0.2190 0.905
#> [4,] 0.2055 0.0958 0.0788 0.193
#> [5,] 0.6578 0.8162 0.2609 0.097
```
```
# convert a numpy array to a tensor
np_a = np$array(c(c(3, 4), c(3, 6)))
t_a = torch$from_numpy(np_a)
print(t_a)
```
```
#> tensor([3., 4., 3., 6.])
```
5\.4 Create tensors
-------------------
A random 1D tensor:
```
ft1 <- torch$FloatTensor(np$random$rand(5L))
print(ft1)
```
```
#> tensor([0.5074, 0.2779, 0.1923, 0.8058, 0.3472], dtype=torch.float32)
```
Force a tensor to be a 64\-bit `float`:
```
ft2 <- torch$as_tensor(np$random$rand(5L), dtype= torch$float64)
print(ft2)
```
```
#> tensor([0.0704, 0.9035, 0.6435, 0.5640, 0.0108])
```
Convert the tensor to a 16\-bit `float`:
```
ft2_dbl <- torch$as_tensor(ft2, dtype = torch$float16)
ft2_dbl
```
```
#> tensor([0.0704, 0.9033, 0.6436, 0.5640, 0.0108], dtype=torch.float16)
```
Create a tensor of size (5 x 7\) with uninitialized memory:
```
a <- torch$FloatTensor(5L, 7L)
print(a)
```
```
#> tensor([[0.0000e+00, 0.0000e+00, 1.1811e+16, 3.0700e-41, 0.0000e+00, 0.0000e+00,
#> 1.4013e-45],
#> [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
#> 0.0000e+00],
#> [4.9982e+14, 3.0700e-41, 0.0000e+00, 0.0000e+00, 4.6368e+14, 3.0700e-41,
#> 0.0000e+00],
#> [0.0000e+00, 1.4013e-45, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
#> 0.0000e+00],
#> [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
#> 0.0000e+00]], dtype=torch.float32)
```
Using arange to create a tensor. `arange` starts at 0\.
```
v = torch$arange(9L)
print(v)
```
```
#> tensor([0, 1, 2, 3, 4, 5, 6, 7, 8])
```
```
# reshape
(v = v$view(3L, 3L))
```
```
#> tensor([[0, 1, 2],
#> [3, 4, 5],
#> [6, 7, 8]])
```
### 5\.4\.1 Tensor fill
On this tensor:
```
(v = torch$ones(3L, 3L))
```
```
#> tensor([[1., 1., 1.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
Fill row 1 with 2s:
```
invisible(v[1L, ]$fill_(2L))
print(v)
```
```
#> tensor([[2., 2., 2.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
Fill row 2 with 3s:
```
invisible(v[2L, ]$fill_(3L))
print(v)
```
```
#> tensor([[2., 2., 2.],
#> [3., 3., 3.],
#> [1., 1., 1.]])
```
Fill column 3 with fours (4\):
```
invisible(v[, 3]$fill_(4L))
print(v)
```
```
#> tensor([[2., 2., 4.],
#> [3., 3., 4.],
#> [1., 1., 4.]])
```
### 5\.4\.2 Tensor with a range of values
```
# Initialize Tensor with a range of value
v = torch$arange(10L) # similar to range(5) but creating a Tensor
(v = torch$arange(0L, 10L, step = 1L)) # Size 5. Similar to range(0, 5, 1)
```
```
#> tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```
### 5\.4\.3 Linear or log scale Tensor
Create a tensor with 10 linear points for (1, 10\) inclusive:
```
(v = torch$linspace(1L, 10L, steps = 10L))
```
```
#> tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
```
Create a tensor with 10 logarithmic points for (1, 10\) inclusive:
```
(v = torch$logspace(start=-10L, end = 10L, steps = 5L))
```
```
#> tensor([1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
```
### 5\.4\.4 In\-place / Out\-of\-place fill
On this uninitialized tensor:
```
(a <- torch$FloatTensor(5L, 7L))
```
```
#> tensor([[0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.]], dtype=torch.float32)
```
Fill the tensor with the value 3\.5:
```
a$fill_(3.5)
```
```
#> tensor([[3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]],
#> dtype=torch.float32)
```
Add a scalar to the tensor:
```
b <- a$add(4.0)
```
The tensor `a` is still filled with 3\.5\. A new tensor `b` is returned with values 3\.5 \+ 4\.0 \= 7\.5\.
```
print(a)
print(b)
```
```
#> tensor([[3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]],
#> dtype=torch.float32)
#> tensor([[7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000]],
#> dtype=torch.float32)
```
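For comparison, the in\-place counterpart `add_` (trailing underscore, the PyTorch convention for in\-place methods) modifies `a` itself; a minimal sketch:
```
invisible(a$add_(4.0)) # in-place: a itself now holds 3.5 + 4.0 = 7.5
print(a[1L, 1L]) # tensor(7.5000, ...)
```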
5\.5 Tensor resizing
--------------------
```
x = torch$randn(2L, 3L) # Size 2x3
print(x)
#> tensor([[-0.4375, 1.2873, -0.5258],
#> [ 0.7870, -0.8505, -1.2215]])
y = x$view(6L) # Resize x to size 6
print(y)
#> tensor([-0.4375, 1.2873, -0.5258, 0.7870, -0.8505, -1.2215])
z = x$view(-1L, 2L) # size 3x2; -1 tells view to infer that dimension (6 elements / 2 = 3)
print(z)
#> tensor([[-0.4375, 1.2873],
#> [-0.5258, 0.7870],
#> [-0.8505, -1.2215]])
print(z$shape)
#> torch.Size([3, 2])
```
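`view` requires the tensor’s memory to be contiguous; `reshape` is the more forgiving alternative, copying only when it must. A sketch, assuming the usual PyTorch `reshape` method as exposed by rTorch:
```
w = x$reshape(list(3L, 2L)) # same 3x2 result as x$view(-1L, 2L)
print(w$shape)
#> torch.Size([3, 2])
```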
### 5\.5\.1 Exercise
Reproduce this tensor:
```
0 1 2
3 4 5
6 7 8
```
```
# create a 1D tensor with the 9 required elements
v = torch$arange(9L)
# resize to a 3x3 tensor
(v = v$view(3L, 3L))
```
```
#> tensor([[0, 1, 2],
#> [3, 4, 5],
#> [6, 7, 8]])
```
5\.6 Concatenate tensors
------------------------
```
x = torch$randn(2L, 3L)
print(x)
print(x$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826]])
#> torch.Size([2, 3])
```
### 5\.6\.1 Concatenate by rows
```
(x0 <- torch$cat(list(x, x, x), 0L))
print(x0$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826],
#> [-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826],
#> [-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826]])
#> torch.Size([6, 3])
```
### 5\.6\.2 Concatenate by columns
```
(x1 <- torch$cat(list(x, x, x), 1L))
print(x1$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381, -0.3954, 1.4149, 0.2381, -0.3954, 1.4149,
#> 0.2381],
#> [-1.2126, 0.7869, 0.0826, -1.2126, 0.7869, 0.0826, -1.2126, 0.7869,
#> 0.0826]])
#> torch.Size([2, 9])
```
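The tensors only have to agree on the dimensions *not* being concatenated. A small sketch (the one\-column tensor is made up for illustration):
```
extra <- torch$ones(2L, 1L) # a 2x1 column of ones
x2 <- torch$cat(list(x, extra), 1L) # join a 2x3 and a 2x1 tensor along columns
print(x2$shape)
#> torch.Size([2, 4])
```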
5\.7 Reshape tensors
--------------------
### 5\.7\.1 With `chunk()`:
Let’s say this is an image tensor with 3 channels and 28x28 pixels:
```
# ----- Reshape tensors -----
img <- torch$ones(3L, 28L, 28L) # Create the tensor of ones
print(img$size())
```
```
#> torch.Size([3, 28, 28])
```
Split the tensor into 3 chunks along the first dimension (`dim = 0L`):
```
img_chunks <- torch$chunk(img, chunks = 3L, dim = 0L)
print(length(img_chunks))
print(class(img_chunks))
```
```
#> [1] 3
#> [1] "list"
```
`img_chunks` is a `list` of three members.
The first chunk member:
```
# 1st chunk member
img_chunk <- img_chunks[[1]]
print(img_chunk$size())
print(img_chunk$sum()) # the chunk is all ones, so the sum is 28*28 = 784
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
The second chunk member:
```
# 2nd chunk member
img_chunk <- img_chunks[[2]]
print(img_chunk$size())
print(img_chunk$sum()) # the chunk is all ones, so the sum is 28*28 = 784
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
```
# 3rd chunk member
img_chunk <- img_chunks[[3]]
print(img_chunk$size())
print(img_chunk$sum()) # the chunk is all ones, so the sum is 28*28 = 784
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
#### 5\.7\.1\.1 Exercise
1. Create a tensor of shape 3x28x28 filled with the value 0\.25 on the first channel
2. The second channel with 0\.5
3. The third channel with 0\.75
4. Find the sum for each separate channel
5. Find the sum of all channels (a solution sketch follows)
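One possible solution sketch, assuming `torch$full` (which builds a tensor of a given shape filled with a single value) and reusing `cat` from Section 5\.6:
```
c1 <- torch$full(c(1L, 28L, 28L), 0.25)
c2 <- torch$full(c(1L, 28L, 28L), 0.50)
c3 <- torch$full(c(1L, 28L, 28L), 0.75)
img <- torch$cat(list(c1, c2, c3), 0L) # shape 3x28x28
print(c1$sum()) # 0.25 * 784 = 196
print(c2$sum()) # 0.50 * 784 = 392
print(c3$sum()) # 0.75 * 784 = 588
print(img$sum()) # 196 + 392 + 588 = 1176
```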
### 5\.7\.2 With `index_select()`:
```
img <- torch$ones(3L, 28L, 28L) # Create the tensor of ones
img$size()
```
```
#> torch.Size([3, 28, 28])
```
This is layer 1:
```
# index_select. get layer 1
indices = torch$tensor(c(0L))
img_layer_1 <- torch$index_select(img, dim = 0L, index = indices)
```
The size of the layer:
```
print(img_layer_1$size())
```
```
#> torch.Size([1, 28, 28])
```
The sum of all elements in that layer:
```
print(img_layer_1$sum())
```
```
#> tensor(784.)
```
This is layer 2:
```
# index_select. get layer 2
indices = torch$tensor(c(1L))
img_layer_2 <- torch$index_select(img, dim = 0L, index = indices)
print(img_layer_2$size())
print(img_layer_2$sum())
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
This is layer 3:
```
# index_select. get layer 3
indices = torch$tensor(c(2L))
img_layer_3 <- torch$index_select(img, dim = 0L, index = indices)
print(img_layer_3$size())
print(img_layer_3$sum())
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
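`index_select` can also pull out several layers in a single call; a short sketch:
```
# layers 1 and 3 at once
indices <- torch$tensor(c(0L, 2L))
img_layers <- torch$index_select(img, dim = 0L, index = indices)
print(img_layers$size())
#> torch.Size([2, 28, 28])
```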
5\.8 Special tensors
--------------------
### 5\.8\.1 Identity matrix
```
# identity matrix
eye = torch$eye(3L) # Create an identity 3x3 tensor
print(eye)
```
```
#> tensor([[1., 0., 0.],
#> [0., 1., 0.],
#> [0., 0., 1.]])
```
```
# a 5x5 identity or unit matrix
torch$eye(5L)
```
```
#> tensor([[1., 0., 0., 0., 0.],
#> [0., 1., 0., 0., 0.],
#> [0., 0., 1., 0., 0.],
#> [0., 0., 0., 1., 0.],
#> [0., 0., 0., 0., 1.]])
```
### 5\.8\.2 Ones
```
(v = torch$ones(10L)) # A tensor of size 10 containing all ones
(v = torch$ones(2L, 1L, 2L, 1L)) # Size 2x1x2x1, a 4D tensor of ones
```
```
#> tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
#> tensor([[[[1.],
#> [1.]]],
#>
#>
#> [[[1.],
#> [1.]]]])
```
The *matrix of ones* (every entry equal to 1\) should not be confused with the identity matrix. This is a `4x4` matrix of ones.
```
torch$ones(c(4L, 4L))
```
```
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
```
```
# eye tensor
eye = torch$eye(3L)
print(eye)
# like eye tensor
v = torch$ones_like(eye) # A tensor with same shape as eye. Fill it with 1.
v
```
```
#> tensor([[1., 0., 0.],
#> [0., 1., 0.],
#> [0., 0., 1.]])
#> tensor([[1., 1., 1.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
### 5\.8\.3 Zeros
```
(z = torch$zeros(10L)) # A tensor of size 10 containing all zeros
```
```
#> tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```
```
# matrix of zeros
torch$zeros(c(4L, 4L))
```
```
#> tensor([[0., 0., 0., 0.],
#> [0., 0., 0., 0.],
#> [0., 0., 0., 0.],
#> [0., 0., 0., 0.]])
```
```
# a 3D tensor of zeros
torch$zeros(c(3L, 4L, 2L))
```
```
#> tensor([[[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]],
#>
#> [[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]],
#>
#> [[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]]])
```
### 5\.8\.4 Diagonal operations
Given the 1D tensor:
```
a <- torch$tensor(c(1L, 2L, 3L))
a
```
```
#> tensor([1, 2, 3])
```
#### 5\.8\.4\.1 Diagonal matrix
Build a matrix with the vector on its main diagonal:
```
torch$diag(a)
```
```
#> tensor([[1, 0, 0],
#> [0, 2, 0],
#> [0, 0, 3]])
```
What about filling the diagonal above the main:
```
torch$diag(a, 1L)
```
```
#> tensor([[0, 1, 0, 0],
#> [0, 0, 2, 0],
#> [0, 0, 0, 3],
#> [0, 0, 0, 0]])
```
Or the diagonal below the main:
```
torch$diag(a, -1L)
```
```
#> tensor([[0, 0, 0, 0],
#> [1, 0, 0, 0],
#> [0, 2, 0, 0],
#> [0, 0, 3, 0]])
```
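`diag` also works in the opposite direction: applied to a 2D tensor it extracts a diagonal instead of building one. A quick sketch:
```
m <- torch$diag(a) # the 3x3 diagonal matrix from above
torch$diag(m) # applied to a matrix, diag returns its main diagonal
#> tensor([1, 2, 3])
```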
5\.9 Access to tensor elements
------------------------------
```
# a 2x2 tensor; we will read and replace individual elements
(new_tensor = torch$Tensor(list(list(1, 2), list(3, 4))))
```
```
#> tensor([[1., 2.],
#> [3., 4.]])
```
Print element at position `1,1`:
```
print(new_tensor[1L, 1L])
```
```
#> tensor(1.)
```
Fill element at position `1,1` with 5:
```
new_tensor[1L, 1L]$fill_(5)
```
```
#> tensor(5.)
```
Show the modified tensor:
```
print(new_tensor) # tensor([[ 5., 2.],[ 3., 4.]])
```
```
#> tensor([[5., 2.],
#> [3., 4.]])
```
Access the element at R position `2, 1` (Python position `1, 0`):
```
print(new_tensor[2L, 1L]) # tensor([ 3.])
print(new_tensor[2L, 1L]$item()) # 3.
```
```
#> tensor(3.)
#> [1] 3
```
### 5\.9\.1 Indices to tensor elements
On this tensor:
```
x = torch$randn(3L, 4L)
print(x)
```
```
#> tensor([[ 0.7076, 0.0816, -0.0431, 2.0698],
#> [ 0.6320, 0.5760, 0.1177, -1.9255],
#> [ 0.1964, -0.1771, -2.2976, -0.1239]])
```
Select indices, `dim=0`:
```
indices = torch$tensor(list(0L, 2L))
torch$index_select(x, 0L, indices)
```
```
#> tensor([[ 0.7076, 0.0816, -0.0431, 2.0698],
#> [ 0.1964, -0.1771, -2.2976, -0.1239]])
```
Select indices, `dim=1`:
```
torch$index_select(x, 1L, indices)
```
```
#> tensor([[ 0.7076, -0.0431],
#> [ 0.6320, 0.1177],
#> [ 0.1964, -2.2976]])
```
### 5\.9\.2 Using the `take` function
```
# take flattens the input to 1D (row-major), so indices 0, 2, 5 pick 4, 5, 8
src = torch$tensor(list(list(4, 3, 5),
list(6, 7, 8)) )
print(src)
print( torch$take(src, torch$tensor(list(0L, 2L, 5L))) )
```
```
#> tensor([[4., 3., 5.],
#> [6., 7., 8.]])
#> tensor([4., 5., 8.])
```
5\.10 Other tensor operations
-----------------------------
### 5\.10\.1 Cross product
```
m1 = torch$ones(3L, 5L)
m2 = torch$ones(3L, 5L)
v1 = torch$ones(3L) # (not used below)
# cross product along the dimension of size 3; result is 3x5
# identical vectors are parallel, so their cross product is all zeros
(r = torch$cross(m1, m2))
```
```
#> tensor([[0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0.]])
```
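With non\-parallel vectors the result is the usual right\-hand\-rule vector; a quick check with two standard basis vectors:
```
i <- torch$tensor(c(1, 0, 0))
j <- torch$tensor(c(0, 1, 0))
torch$cross(i, j) # i x j = k
#> tensor([0., 0., 1.])
```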
### 5\.10\.2 Dot product
```
# dot product of two 1D tensors
p <- torch$Tensor(list(4L, 2L))
q <- torch$Tensor(list(3L, 1L))
(r = torch$dot(p, q)) # 14
#> tensor(14.)
(r <- p %.*% q) # 14
#> tensor(14.)
```
5\.11 Logical operations
------------------------
```
m0 = torch$zeros(3L, 5L)
m1 = torch$ones(3L, 5L)
m2 = torch$eye(3L, 5L)
print(m1 == m0)
#> tensor([[False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False]])
```
```
print(m1 != m1)
#> tensor([[False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False]])
```
```
print(m2 == m2)
#> tensor([[True, True, True, True, True],
#> [True, True, True, True, True],
#> [True, True, True, True, True]])
```
```
# AND
m1 & m1
#> tensor([[1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1]], dtype=torch.uint8)
```
```
# OR
m0 | m2
#> tensor([[1, 0, 0, 0, 0],
#> [0, 1, 0, 0, 0],
#> [0, 0, 1, 0, 0]], dtype=torch.uint8)
```
```
# OR
m1 | m2
#> tensor([[1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1]], dtype=torch.uint8)
```
### 5\.11\.1 Extract a unique logical result
With `all`:
```
# tensor is less than
A <- torch$ones(60000L, 1L, 28L, 28L)
C <- A * 0.5
# is C < A
all(torch$lt(C, A))
#> tensor(1, dtype=torch.uint8)
all(C < A)
#> tensor(1, dtype=torch.uint8)
# is A < C
all(A < C)
#> tensor(0, dtype=torch.uint8)
```
With function `all_boolean`:
```
all_boolean <- function(x) {
# convert tensor of 1s and 0s to a unique boolean
as.logical(torch$all(x)$numpy())
}
# is C < A
all_boolean(torch$lt(C, A))
#> [1] TRUE
all_boolean(C < A)
#> [1] TRUE
# is A < C
all_boolean(A < C)
#> [1] FALSE
```
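A companion helper for the `any` reduction can be written the same way; a sketch mirroring `all_boolean`:
```
any_boolean <- function(x) {
  # TRUE if at least one element of the tensor is non-zero
  as.logical(torch$any(x)$numpy())
}
any_boolean(A < C) # no element of A is below C
#> [1] FALSE
any_boolean(C < A)
#> [1] TRUE
```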
### 5\.11\.2 Greater than (`gt`)
```
# tensor is greater than
A <- torch$ones(60000L, 1L, 28L, 28L)
D <- A * 2.0
all(torch$gt(D, A))
#> tensor(1, dtype=torch.uint8)
all(torch$gt(A, D))
#> tensor(0, dtype=torch.uint8)
```
### 5\.11\.3 Less than or equal (`le`), greater than or equal (`ge`)
```
# tensor is less than or equal
A1 <- torch$ones(60000L, 1L, 28L, 28L)
all(torch$le(A1, A1))
#> tensor(1, dtype=torch.uint8)
all(A1 <= A1)
#> tensor(1, dtype=torch.uint8)
# tensor is greater than or equal
A0 <- torch$zeros(60000L, 1L, 28L, 28L)
all(torch$ge(A0, A0))
#> tensor(1, dtype=torch.uint8)
all(A0 >= A0)
#> tensor(1, dtype=torch.uint8)
all(A1 >= A0)
#> tensor(1, dtype=torch.uint8)
all(A1 <= A0)
#> tensor(0, dtype=torch.uint8)
```
### 5\.11\.4 Logical NOT (`!`)
```
all_true <- torch$BoolTensor(list(TRUE, TRUE, TRUE, TRUE))
all_true
#> tensor([True, True, True, True])
# logical NOT
not_all_true <- !all_true
not_all_true
#> tensor([False, False, False, False])
```
```
diag <- torch$eye(5L)
diag
#> tensor([[1., 0., 0., 0., 0.],
#> [0., 1., 0., 0., 0.],
#> [0., 0., 1., 0., 0.],
#> [0., 0., 0., 1., 0.],
#> [0., 0., 0., 0., 1.]])
# logical NOT
not_diag <- !diag
# convert to integer
not_diag$to(dtype=torch$uint8)
#> tensor([[0, 1, 1, 1, 1],
#> [1, 0, 1, 1, 1],
#> [1, 1, 0, 1, 1],
#> [1, 1, 1, 0, 1],
#> [1, 1, 1, 1, 0]], dtype=torch.uint8)
```
5\.12 Distributions
-------------------
Initialize a tensor with draws from a normal distribution with `mean=0`, `var=1`:
```
n <- torch$randn(3500L)
n
#> tensor([-0.2087, 0.6850, -0.8386, ..., 1.2029, -0.1329, -0.0998])
plot(n$numpy())
hist(n$numpy())
```
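The sample mean and standard deviation should be close to the theoretical 0 and 1 (not exact, since the sample is finite):
```
n$mean() # close to 0
n$std() # close to 1
```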
```
a <- torch$randn(8L, 5L, 6L)
# print(a)
print(a$size())
#> torch.Size([8, 5, 6])
plot(a$flatten()$numpy())
hist(a$flatten()$numpy())
```
### 5\.12\.1 Uniform matrix
```
library(rTorch)
# 13x15 matrix uniformly distributed between 0 and 1
mat0 <- torch$FloatTensor(13L, 15L)$uniform_(0L, 1L)
plot(mat0$flatten()$numpy())
hist(mat0$flatten()$numpy())
```
```
# 30x50 matrix uniformly distributed between 0.1 and 0.2
mat1 <- torch$FloatTensor(30L, 50L)$uniform_(0.1, 0.2)
plot(mat1$flatten()$numpy())
hist(mat1$flatten()$numpy())
```
```
# a vector of 500 values uniformly distributed between 1 and 2
mat2 <- torch$FloatTensor(500L)$uniform_(1, 2)
plot(mat2$flatten()$numpy())
hist(mat2$flatten()$numpy())
```
### 5\.12\.2 Binomial distribution
```
Binomial <- torch$distributions$binomial$Binomial
# 100 trials; success probabilities 0, 0.2, 0.8 and 1
m = Binomial(100, torch$tensor(list(0 , .2, .8, 1)))
(x = m$sample())
#> tensor([ 0., 23., 78., 100.])
```
```
m = Binomial(torch$tensor(list(list(5.), list(10.))),
torch$tensor(list(0.5, 0.8)))
(x = m$sample())
#> tensor([[3., 4.],
#> [6., 8.]])
```
```
# note: FloatTensor(5L, 10L) is uninitialized memory, so these probabilities are arbitrary
binom <- Binomial(100, torch$FloatTensor(5L, 10L))
print(binom)
#> Binomial(total_count: torch.Size([5, 10]), probs: torch.Size([5, 10]), logits: torch.Size([5, 10]))
```
```
print(binom$sample_n(100L)$shape)
#> torch.Size([100, 5, 10])
plot(binom$sample_n(100L)$flatten()$numpy())
hist(binom$sample_n(100L)$flatten()$numpy())
```
### 5\.12\.3 Exponential distribution
```
Exponential <- torch$distributions$exponential$Exponential
m = Exponential(torch$tensor(list(1.0)))
m
#> Exponential(rate: tensor([1.]))
m$sample() # Exponential distributed with rate=1
#> tensor([0.4171])
```
```
expo <- Exponential(rate=0.25)
expo_sample <- expo$sample_n(250L) # generate 250 samples
print(expo_sample$shape)
#> torch.Size([250])
plot(expo_sample$flatten()$numpy())
hist(expo_sample$flatten()$numpy())
```
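For an exponential distribution the mean is `1/rate`, so the sample mean should be close to 4 here:
```
expo_sample$mean() # should be close to 1/0.25 = 4
```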
### 5\.12\.4 Weibull distribution
```
Weibull <- torch$distributions$weibull$Weibull
m = Weibull(torch$tensor(list(1.0)), torch$tensor(list(1.0)))
m$sample() # sample from a Weibull distribution with scale=1, concentration=1
#> tensor([1.7026])
```
#### 5\.12\.4\.1 Constant `scale`
```
# constant scale
for (k in 1:10) {
wei <- Weibull(scale=100, concentration=k)
wei_sample <- wei$sample_n(500L)
# plot(wei_sample$flatten()$numpy())
hist(main=paste0("Scale=100; Concentration=", k),
wei_sample$flatten()$numpy())
}
```
#### 5\.12\.4\.2 Constant `concentration`
```
# constant concentration
for (s in seq(100, 1000, 100)) {
wei <- Weibull(scale=s, concentration=1)
wei_sample <- wei$sample_n(500L)
# plot(wei_sample$flatten()$numpy())
hist(main=paste0("Concentration=1; Scale=", s),
wei_sample$flatten()$numpy())
}
```
5\.1 Tensor data types
----------------------
```
# Default data type
torch$tensor(list(1.2, 3))$dtype # default for floating point is torch.float32
```
```
#> torch.float32
```
```
# change default data type to float64
torch$set_default_dtype(torch$float64)
torch$tensor(list(1.2, 3))$dtype # a new floating point tensor
```
```
#> torch.float64
```
### 5\.1\.1 Major tensor types
There are five major type of tensors in PyTorch: byte, float, double, long, and boolean.
```
library(rTorch)
byte <- torch$ByteTensor(3L, 3L)
float <- torch$FloatTensor(3L, 3L)
double <- torch$DoubleTensor(3L, 3L)
long <- torch$LongTensor(3L, 3L)
boolean <- torch$BoolTensor(5L, 5L)
```
```
message("byte tensor")
#> byte tensor
byte
#> tensor([[0, 0, 0],
#> [0, 0, 0],
#> [0, 0, 0]], dtype=torch.uint8)
```
```
message("float tensor")
#> float tensor
float
#> tensor([[0., 0., 0.],
#> [0., 0., 0.],
#> [0., 0., 0.]], dtype=torch.float32)
```
```
message("double")
#> double
double
#> tensor([[6.9461e-310, 6.9461e-310, 4.9407e-324],
#> [4.6489e-310, 0.0000e+00, 0.0000e+00],
#> [ 0.0000e+00, 0.0000e+00, 9.5490e-313]])
```
```
message("long")
#> long
long
#> tensor([[0, 0, 0],
#> [0, 0, 0],
#> [0, 0, 0]])
```
```
message("boolean")
#> boolean
boolean
#> tensor([[False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False]])
```
### 5\.1\.2 Example: A 4D tensor
A 4D tensor like in MNIST hand\-written digits recognition dataset:
```
mnist_4d <- torch$FloatTensor(60000L, 3L, 28L, 28L)
```
```
message("size")
#> size
mnist_4d$size()
#> torch.Size([60000, 3, 28, 28])
message("length")
#> length
length(mnist_4d)
#> [1] 141120000
message("shape, like in numpy")
#> shape, like in numpy
mnist_4d$shape
#> torch.Size([60000, 3, 28, 28])
message("number of elements")
#> number of elements
mnist_4d$numel()
#> [1] 141120000
```
### 5\.1\.3 Example: A 3D tensor
Given a 3D tensor:
```
ft3d <- torch$FloatTensor(4L, 3L, 2L)
ft3d
```
```
#> tensor([[[1.1390e+12, 3.0700e-41],
#> [1.4555e+12, 3.0700e-41],
#> [1.1344e+12, 3.0700e-41]],
#>
#> [[4.7256e+10, 3.0700e-41],
#> [4.7258e+10, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41]],
#>
#> [[1.0075e+12, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41]],
#>
#> [[1.0075e+12, 3.0700e-41],
#> [4.7259e+10, 3.0700e-41],
#> [4.7263e+10, 3.0700e-41]]], dtype=torch.float32)
```
```
ft3d$size()
#> torch.Size([4, 3, 2])
length(ft3d)
#> [1] 24
ft3d$shape
#> torch.Size([4, 3, 2])
ft3d$numel
#> <built-in method numel of Tensor>
```
### 5\.1\.1 Major tensor types
There are five major type of tensors in PyTorch: byte, float, double, long, and boolean.
```
library(rTorch)
byte <- torch$ByteTensor(3L, 3L)
float <- torch$FloatTensor(3L, 3L)
double <- torch$DoubleTensor(3L, 3L)
long <- torch$LongTensor(3L, 3L)
boolean <- torch$BoolTensor(5L, 5L)
```
```
message("byte tensor")
#> byte tensor
byte
#> tensor([[0, 0, 0],
#> [0, 0, 0],
#> [0, 0, 0]], dtype=torch.uint8)
```
```
message("float tensor")
#> float tensor
float
#> tensor([[0., 0., 0.],
#> [0., 0., 0.],
#> [0., 0., 0.]], dtype=torch.float32)
```
```
message("double")
#> double
double
#> tensor([[6.9461e-310, 6.9461e-310, 4.9407e-324],
#> [4.6489e-310, 0.0000e+00, 0.0000e+00],
#> [ 0.0000e+00, 0.0000e+00, 9.5490e-313]])
```
```
message("long")
#> long
long
#> tensor([[0, 0, 0],
#> [0, 0, 0],
#> [0, 0, 0]])
```
```
message("boolean")
#> boolean
boolean
#> tensor([[False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False]])
```
### 5\.1\.2 Example: A 4D tensor
A 4D tensor like in MNIST hand\-written digits recognition dataset:
```
mnist_4d <- torch$FloatTensor(60000L, 3L, 28L, 28L)
```
```
message("size")
#> size
mnist_4d$size()
#> torch.Size([60000, 3, 28, 28])
message("length")
#> length
length(mnist_4d)
#> [1] 141120000
message("shape, like in numpy")
#> shape, like in numpy
mnist_4d$shape
#> torch.Size([60000, 3, 28, 28])
message("number of elements")
#> number of elements
mnist_4d$numel()
#> [1] 141120000
```
### 5\.1\.3 Example: A 3D tensor
Given a 3D tensor:
```
ft3d <- torch$FloatTensor(4L, 3L, 2L)
ft3d
```
```
#> tensor([[[1.1390e+12, 3.0700e-41],
#> [1.4555e+12, 3.0700e-41],
#> [1.1344e+12, 3.0700e-41]],
#>
#> [[4.7256e+10, 3.0700e-41],
#> [4.7258e+10, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41]],
#>
#> [[1.0075e+12, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41],
#> [1.0075e+12, 3.0700e-41]],
#>
#> [[1.0075e+12, 3.0700e-41],
#> [4.7259e+10, 3.0700e-41],
#> [4.7263e+10, 3.0700e-41]]], dtype=torch.float32)
```
```
ft3d$size()
#> torch.Size([4, 3, 2])
length(ft3d)
#> [1] 24
ft3d$shape
#> torch.Size([4, 3, 2])
ft3d$numel
#> <built-in method numel of Tensor>
```
5\.2 Arithmetic of tensors
--------------------------
### 5\.2\.1 Add tensors
```
# add a scalar to a tensor
# 3x5 matrix uniformly distributed between 0 and 1
mat0 <- torch$FloatTensor(3L, 5L)$uniform_(0L, 1L)
mat0 + 0.1
```
```
#> tensor([[0.9645, 0.6238, 0.9326, 0.3023, 0.1448],
#> [0.2610, 0.1987, 0.5089, 0.9776, 0.5261],
#> [0.2727, 0.5670, 0.8338, 0.4297, 0.7935]], dtype=torch.float32)
```
### 5\.2\.2 Add tensor elements
```
# fill a 3x5 matrix with 0.1
mat1 <- torch$FloatTensor(3L, 5L)$uniform_(0.1, 0.1)
print(mat1)
#> tensor([[0.1000, 0.1000, 0.1000, 0.1000, 0.1000],
#> [0.1000, 0.1000, 0.1000, 0.1000, 0.1000],
#> [0.1000, 0.1000, 0.1000, 0.1000, 0.1000]], dtype=torch.float32)
# a vector with all ones
mat2 <- torch$FloatTensor(5L)$uniform_(1, 1)
print(mat2)
#> tensor([1., 1., 1., 1., 1.], dtype=torch.float32)
# add element (1,1) to another tensor
mat1[1, 1] + mat2
#> tensor([1.1000, 1.1000, 1.1000, 1.1000, 1.1000], dtype=torch.float32)
```
Add two tensors using the function `add()`:
```
# PyTorch add two tensors
x = torch$rand(5L, 4L)
y = torch$rand(5L, 4L)
print(x$add(y))
```
```
#> tensor([[0.4604, 0.8114, 0.9630, 0.8070],
#> [0.6829, 0.4612, 0.1546, 1.1180],
#> [0.3134, 0.9399, 1.1217, 1.2846],
#> [1.9212, 1.3897, 0.5217, 0.3508],
#> [0.5801, 1.1733, 0.6494, 0.6771]])
```
Add two tensors using the generic `+`:
```
print(x + y)
```
```
#> tensor([[0.4604, 0.8114, 0.9630, 0.8070],
#> [0.6829, 0.4612, 0.1546, 1.1180],
#> [0.3134, 0.9399, 1.1217, 1.2846],
#> [1.9212, 1.3897, 0.5217, 0.3508],
#> [0.5801, 1.1733, 0.6494, 0.6771]])
```
### 5\.2\.3 Multiply a tensor by a scalar
```
# Multiply tensor by scalar
tensor = torch$ones(4L, dtype=torch$float64)
scalar = np$float64(4.321)
print(scalar)
print(torch$scalar_tensor(scalar))
```
```
#> [1] 4.32
#> tensor(4.3210)
```
> Notice that we used a NumPy function to create the scalar object `np$float64()`.
Multiply two tensors using the function `mul`:
```
(prod = torch$mul(tensor, torch$scalar_tensor(scalar)))
```
```
#> tensor([4.3210, 4.3210, 4.3210, 4.3210])
```
Short version using R generics:
```
(prod = tensor * scalar)
```
```
#> tensor([4.3210, 4.3210, 4.3210, 4.3210])
```
### 5\.2\.1 Add tensors
```
# add a scalar to a tensor
# 3x5 matrix uniformly distributed between 0 and 1
mat0 <- torch$FloatTensor(3L, 5L)$uniform_(0L, 1L)
mat0 + 0.1
```
```
#> tensor([[0.9645, 0.6238, 0.9326, 0.3023, 0.1448],
#> [0.2610, 0.1987, 0.5089, 0.9776, 0.5261],
#> [0.2727, 0.5670, 0.8338, 0.4297, 0.7935]], dtype=torch.float32)
```
### 5\.2\.2 Add tensor elements
```
# fill a 3x5 matrix with 0.1
mat1 <- torch$FloatTensor(3L, 5L)$uniform_(0.1, 0.1)
print(mat1)
#> tensor([[0.1000, 0.1000, 0.1000, 0.1000, 0.1000],
#> [0.1000, 0.1000, 0.1000, 0.1000, 0.1000],
#> [0.1000, 0.1000, 0.1000, 0.1000, 0.1000]], dtype=torch.float32)
# a vector with all ones
mat2 <- torch$FloatTensor(5L)$uniform_(1, 1)
print(mat2)
#> tensor([1., 1., 1., 1., 1.], dtype=torch.float32)
# add element (1,1) to another tensor
mat1[1, 1] + mat2
#> tensor([1.1000, 1.1000, 1.1000, 1.1000, 1.1000], dtype=torch.float32)
```
Add two tensors using the function `add()`:
```
# PyTorch add two tensors
x = torch$rand(5L, 4L)
y = torch$rand(5L, 4L)
print(x$add(y))
```
```
#> tensor([[0.4604, 0.8114, 0.9630, 0.8070],
#> [0.6829, 0.4612, 0.1546, 1.1180],
#> [0.3134, 0.9399, 1.1217, 1.2846],
#> [1.9212, 1.3897, 0.5217, 0.3508],
#> [0.5801, 1.1733, 0.6494, 0.6771]])
```
Add two tensors using the generic `+`:
```
print(x + y)
```
```
#> tensor([[0.4604, 0.8114, 0.9630, 0.8070],
#> [0.6829, 0.4612, 0.1546, 1.1180],
#> [0.3134, 0.9399, 1.1217, 1.2846],
#> [1.9212, 1.3897, 0.5217, 0.3508],
#> [0.5801, 1.1733, 0.6494, 0.6771]])
```
### 5\.2\.3 Multiply a tensor by a scalar
```
# Multiply tensor by scalar
tensor = torch$ones(4L, dtype=torch$float64)
scalar = np$float64(4.321)
print(scalar)
print(torch$scalar_tensor(scalar))
```
```
#> [1] 4.32
#> tensor(4.3210)
```
> Notice that we used a NumPy function to create the scalar object `np$float64()`.
Multiply two tensors using the function `mul`:
```
(prod = torch$mul(tensor, torch$scalar_tensor(scalar)))
```
```
#> tensor([4.3210, 4.3210, 4.3210, 4.3210])
```
Short version using R generics:
```
(prod = tensor * scalar)
```
```
#> tensor([4.3210, 4.3210, 4.3210, 4.3210])
```
5\.3 NumPy and PyTorch
----------------------
`numpy` has been made available as a module in `rTorch`, which means that as soon as rTorch is loaded, you also get all the `numpy` functions available to you. We can call functions from `numpy` referring to it as `np$_a_function`. Examples:
```
# a 2D numpy array
syn0 <- np$random$rand(3L, 5L)
print(syn0)
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 0.303 0.475 0.00956 0.812 0.210
#> [2,] 0.546 0.607 0.19421 0.989 0.276
#> [3,] 0.240 0.158 0.53997 0.718 0.849
```
```
# numpy arrays of zeros
syn1 <- np$zeros(c(5L, 10L))
print(syn1)
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 0 0 0 0 0 0 0 0 0 0
#> [2,] 0 0 0 0 0 0 0 0 0 0
#> [3,] 0 0 0 0 0 0 0 0 0 0
#> [4,] 0 0 0 0 0 0 0 0 0 0
#> [5,] 0 0 0 0 0 0 0 0 0 0
```
```
# add a scalar to a numpy array
syn1 = syn1 + 0.1
print(syn1)
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [2,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [3,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [4,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
#> [5,] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
```
And the dot product of both:
```
np$dot(syn0, syn1)
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 0.181 0.181 0.181 0.181 0.181 0.181 0.181 0.181 0.181 0.181
#> [2,] 0.261 0.261 0.261 0.261 0.261 0.261 0.261 0.261 0.261 0.261
#> [3,] 0.250 0.250 0.250 0.250 0.250 0.250 0.250 0.250 0.250 0.250
```
### 5\.3\.1 Python tuples and R vectors
In `numpy` the shape of a multidimensional array needs to be defined using a `tuple`. in R we do it instead with a `vector`. There are not tuples in R.
In Python, we use a tuple, `(5, 5)` to indicate the shape of the array:
```
import numpy as np
print(np.ones((5, 5)))
```
```
#> [[1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]]
```
In R, we use a vector `c(5L, 5L)`. The `L` indicates an integer.
```
l1 <- np$ones(c(5L, 5L))
print(l1)
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 1 1 1 1
#> [2,] 1 1 1 1 1
#> [3,] 1 1 1 1 1
#> [4,] 1 1 1 1 1
#> [5,] 1 1 1 1 1
```
### 5\.3\.2 A numpy array from R vectors
For this matrix, or 2D tensor, we use three R vectors:
```
X <- np$array(rbind(c(1,2,3), c(4,5,6), c(7,8,9)))
print(X)
```
```
#> [,1] [,2] [,3]
#> [1,] 1 2 3
#> [2,] 4 5 6
#> [3,] 7 8 9
```
And we could transpose the array using `numpy` as well:
```
np$transpose(X)
```
```
#> [,1] [,2] [,3]
#> [1,] 1 4 7
#> [2,] 2 5 8
#> [3,] 3 6 9
```
### 5\.3\.3 numpy arrays to tensors
```
a = np$array(list(1, 2, 3)) # a numpy array
t = torch$as_tensor(a) # convert it to tensor
print(t)
```
```
#> tensor([1., 2., 3.])
```
### 5\.3\.4 Create and fill a tensor
We can create the tensor directly from R using `tensor()`:
```
torch$tensor(list( 1, 2, 3)) # create a tensor
t[1L]$fill_(-1) # fill element with -1
print(a)
```
```
#> tensor([1., 2., 3.])
#> tensor(-1.)
#> [1] -1 2 3
```
### 5\.3\.5 Tensor to array, and viceversa
This is a very common operation in machine learning:
```
# convert tensor to a numpy array
a = torch$rand(5L, 4L)
b = a$numpy()
print(b)
```
```
#> [,1] [,2] [,3] [,4]
#> [1,] 0.5596 0.1791 0.0149 0.568
#> [2,] 0.0946 0.0738 0.9916 0.685
#> [3,] 0.4065 0.1239 0.2190 0.905
#> [4,] 0.2055 0.0958 0.0788 0.193
#> [5,] 0.6578 0.8162 0.2609 0.097
```
```
# convert a numpy array to a tensor
np_a = np$array(c(c(3, 4), c(3, 6)))
t_a = torch$from_numpy(np_a)
print(t_a)
```
```
#> tensor([3., 4., 3., 6.])
```
### 5\.3\.1 Python tuples and R vectors
In `numpy` the shape of a multidimensional array needs to be defined using a `tuple`. in R we do it instead with a `vector`. There are not tuples in R.
In Python, we use a tuple, `(5, 5)` to indicate the shape of the array:
```
import numpy as np
print(np.ones((5, 5)))
```
```
#> [[1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]
#> [1. 1. 1. 1. 1.]]
```
In R, we use a vector `c(5L, 5L)`. The `L` indicates an integer.
```
l1 <- np$ones(c(5L, 5L))
print(l1)
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 1 1 1 1
#> [2,] 1 1 1 1 1
#> [3,] 1 1 1 1 1
#> [4,] 1 1 1 1 1
#> [5,] 1 1 1 1 1
```
### 5\.3\.2 A numpy array from R vectors
For this matrix, or 2D tensor, we use three R vectors:
```
X <- np$array(rbind(c(1,2,3), c(4,5,6), c(7,8,9)))
print(X)
```
```
#> [,1] [,2] [,3]
#> [1,] 1 2 3
#> [2,] 4 5 6
#> [3,] 7 8 9
```
And we could transpose the array using `numpy` as well:
```
np$transpose(X)
```
```
#> [,1] [,2] [,3]
#> [1,] 1 4 7
#> [2,] 2 5 8
#> [3,] 3 6 9
```
### 5\.3\.3 numpy arrays to tensors
```
a = np$array(list(1, 2, 3)) # a numpy array
t = torch$as_tensor(a) # convert it to tensor
print(t)
```
```
#> tensor([1., 2., 3.])
```
### 5\.3\.4 Create and fill a tensor
We can create the tensor directly from R using `tensor()`:
```
torch$tensor(list( 1, 2, 3)) # create a tensor
t[1L]$fill_(-1) # fill element with -1
print(a)
```
```
#> tensor([1., 2., 3.])
#> tensor(-1.)
#> [1] -1 2 3
```
### 5\.3\.5 Tensor to array, and viceversa
This is a very common operation in machine learning:
```
# convert tensor to a numpy array
a = torch$rand(5L, 4L)
b = a$numpy()
print(b)
```
```
#> [,1] [,2] [,3] [,4]
#> [1,] 0.5596 0.1791 0.0149 0.568
#> [2,] 0.0946 0.0738 0.9916 0.685
#> [3,] 0.4065 0.1239 0.2190 0.905
#> [4,] 0.2055 0.0958 0.0788 0.193
#> [5,] 0.6578 0.8162 0.2609 0.097
```
```
# convert a numpy array to a tensor
np_a = np$array(c(c(3, 4), c(3, 6)))
t_a = torch$from_numpy(np_a)
print(t_a)
```
```
#> tensor([3., 4., 3., 6.])
```
5\.4 Create tensors
-------------------
A random 1D tensor:
```
ft1 <- torch$FloatTensor(np$random$rand(5L))
print(ft1)
```
```
#> tensor([0.5074, 0.2779, 0.1923, 0.8058, 0.3472], dtype=torch.float32)
```
Force a tensor as a `float` of 64\-bits:
```
ft2 <- torch$as_tensor(np$random$rand(5L), dtype= torch$float64)
print(ft2)
```
```
#> tensor([0.0704, 0.9035, 0.6435, 0.5640, 0.0108])
```
Convert the tensor to a `float` of 16\-bits:
```
ft2_dbl <- torch$as_tensor(ft2, dtype = torch$float16)
ft2_dbl
```
```
#> tensor([0.0704, 0.9033, 0.6436, 0.5640, 0.0108], dtype=torch.float16)
```
Create a tensor of size (5 x 7\) with uninitialized memory:
```
a <- torch$FloatTensor(5L, 7L)
print(a)
```
```
#> tensor([[0.0000e+00, 0.0000e+00, 1.1811e+16, 3.0700e-41, 0.0000e+00, 0.0000e+00,
#> 1.4013e-45],
#> [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
#> 0.0000e+00],
#> [4.9982e+14, 3.0700e-41, 0.0000e+00, 0.0000e+00, 4.6368e+14, 3.0700e-41,
#> 0.0000e+00],
#> [0.0000e+00, 1.4013e-45, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
#> 0.0000e+00],
#> [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
#> 0.0000e+00]], dtype=torch.float32)
```
Using arange to create a tensor. `arange` starts at 0\.
```
v = torch$arange(9L)
print(v)
```
```
#> tensor([0, 1, 2, 3, 4, 5, 6, 7, 8])
```
```
# reshape
(v = v$view(3L, 3L))
```
```
#> tensor([[0, 1, 2],
#> [3, 4, 5],
#> [6, 7, 8]])
```
### 5\.4\.1 Tensor fill
On this tensor:
```
(v = torch$ones(3L, 3L))
```
```
#> tensor([[1., 1., 1.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
Fill row 1 with 2s:
```
invisible(v[1L, ]$fill_(2L))
print(v)
```
```
#> tensor([[2., 2., 2.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
Fill row 2 with 3s:
```
invisible(v[2L, ]$fill_(3L))
print(v)
```
```
#> tensor([[2., 2., 2.],
#> [3., 3., 3.],
#> [1., 1., 1.]])
```
Fill column 3 with fours (4\):
```
invisible(v[, 3]$fill_(4L))
print(v)
```
```
#> tensor([[2., 2., 4.],
#> [3., 3., 4.],
#> [1., 1., 4.]])
```
### 5\.4\.2 Tensor with a range of values
```
# Initialize Tensor with a range of value
v = torch$arange(10L) # similar to range(5) but creating a Tensor
(v = torch$arange(0L, 10L, step = 1L)) # Size 5. Similar to range(0, 5, 1)
```
```
#> tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```
### 5\.4\.3 Linear or log scale Tensor
Create a tensor with 10 linear points for (1, 10\) inclusive:
```
(v = torch$linspace(1L, 10L, steps = 10L))
```
```
#> tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
```
Create a tensor with 10 logarithmic points for (1, 10\) inclusive:
```
(v = torch$logspace(start=-10L, end = 10L, steps = 5L))
```
```
#> tensor([1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
```
### 5\.4\.4 In\-place / Out\-of\-place fill
On this uninitialized tensor:
```
(a <- torch$FloatTensor(5L, 7L))
```
```
#> tensor([[0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.]], dtype=torch.float32)
```
Fill the tensor with the value 3\.5:
```
a$fill_(3.5)
```
```
#> tensor([[3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]],
#> dtype=torch.float32)
```
Add a scalar to the tensor:
```
b <- a$add(4.0)
```
The tensor `a` is still filled with 3\.5\. A new tensor `b` is returned with values 3\.5 \+ 4\.0 \= 7\.5
```
print(a)
print(b)
```
```
#> tensor([[3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]],
#> dtype=torch.float32)
#> tensor([[7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000]],
#> dtype=torch.float32)
```
### 5\.4\.1 Tensor fill
On this tensor:
```
(v = torch$ones(3L, 3L))
```
```
#> tensor([[1., 1., 1.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
Fill row 1 with 2s:
```
invisible(v[1L, ]$fill_(2L))
print(v)
```
```
#> tensor([[2., 2., 2.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
Fill row 2 with 3s:
```
invisible(v[2L, ]$fill_(3L))
print(v)
```
```
#> tensor([[2., 2., 2.],
#> [3., 3., 3.],
#> [1., 1., 1.]])
```
Fill column 3 with fours (4\):
```
invisible(v[, 3]$fill_(4L))
print(v)
```
```
#> tensor([[2., 2., 4.],
#> [3., 3., 4.],
#> [1., 1., 4.]])
```
### 5\.4\.2 Tensor with a range of values
```
# Initialize Tensor with a range of value
v = torch$arange(10L) # similar to range(5) but creating a Tensor
(v = torch$arange(0L, 10L, step = 1L)) # Size 5. Similar to range(0, 5, 1)
```
```
#> tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```
### 5\.4\.3 Linear or log scale Tensor
Create a tensor with 10 linear points for (1, 10\) inclusive:
```
(v = torch$linspace(1L, 10L, steps = 10L))
```
```
#> tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
```
Create a tensor with 10 logarithmic points for (1, 10\) inclusive:
```
(v = torch$logspace(start=-10L, end = 10L, steps = 5L))
```
```
#> tensor([1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
```
### 5\.4\.4 In\-place / Out\-of\-place fill
On this uninitialized tensor:
```
(a <- torch$FloatTensor(5L, 7L))
```
```
#> tensor([[0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0., 0., 0.]], dtype=torch.float32)
```
Fill the tensor with the value 3\.5:
```
a$fill_(3.5)
```
```
#> tensor([[3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]],
#> dtype=torch.float32)
```
Add a scalar to the tensor:
```
b <- a$add(4.0)
```
The tensor `a` is still filled with 3\.5\. A new tensor `b` is returned with values 3\.5 \+ 4\.0 \= 7\.5
```
print(a)
print(b)
```
```
#> tensor([[3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
#> [3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]],
#> dtype=torch.float32)
#> tensor([[7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000],
#> [7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000, 7.5000]],
#> dtype=torch.float32)
```
5\.5 Tensor resizing
--------------------
```
x = torch$randn(2L, 3L) # Size 2x3
print(x)
#> tensor([[-0.4375, 1.2873, -0.5258],
#> [ 0.7870, -0.8505, -1.2215]])
y = x$view(6L) # Resize x to size 6
print(y)
#> tensor([-0.4375, 1.2873, -0.5258, 0.7870, -0.8505, -1.2215])
z = x$view(-1L, 2L) # Size 3x2
print(z)
#> tensor([[-0.4375, 1.2873],
#> [-0.5258, 0.7870],
#> [-0.8505, -1.2215]])
print(z$shape)
#> torch.Size([3, 2])
```
### 5\.5\.1 Exercise
Reproduce this tensor:
```
0 1 2
3 4 5
6 7 8
```
```
# create a vector with the number of elements
v = torch$arange(9L)
# resize to a 3x3 tensor
(v = v$view(3L, 3L))
```
```
#> tensor([[0, 1, 2],
#> [3, 4, 5],
#> [6, 7, 8]])
```
### 5\.5\.1 Exercise
Reproduce this tensor:
```
0 1 2
3 4 5
6 7 8
```
```
# create a vector with the number of elements
v = torch$arange(9L)
# resize to a 3x3 tensor
(v = v$view(3L, 3L))
```
```
#> tensor([[0, 1, 2],
#> [3, 4, 5],
#> [6, 7, 8]])
```
5\.6 Concatenate tensors
------------------------
```
x = torch$randn(2L, 3L)
print(x)
print(x$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826]])
#> torch.Size([2, 3])
```
### 5\.6\.1 Concatenate by rows
```
(x0 <- torch$cat(list(x, x, x), 0L))
print(x0$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826],
#> [-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826],
#> [-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826]])
#> torch.Size([6, 3])
```
### 5\.6\.2 Concatenate by columns
```
(x1 <- torch$cat(list(x, x, x), 1L))
print(x1$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381, -0.3954, 1.4149, 0.2381, -0.3954, 1.4149,
#> 0.2381],
#> [-1.2126, 0.7869, 0.0826, -1.2126, 0.7869, 0.0826, -1.2126, 0.7869,
#> 0.0826]])
#> torch.Size([2, 9])
```
### 5\.6\.1 Concatenate by rows
```
(x0 <- torch$cat(list(x, x, x), 0L))
print(x0$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826],
#> [-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826],
#> [-0.3954, 1.4149, 0.2381],
#> [-1.2126, 0.7869, 0.0826]])
#> torch.Size([6, 3])
```
### 5\.6\.2 Concatenate by columns
```
(x1 <- torch$cat(list(x, x, x), 1L))
print(x1$shape)
```
```
#> tensor([[-0.3954, 1.4149, 0.2381, -0.3954, 1.4149, 0.2381, -0.3954, 1.4149,
#> 0.2381],
#> [-1.2126, 0.7869, 0.0826, -1.2126, 0.7869, 0.0826, -1.2126, 0.7869,
#> 0.0826]])
#> torch.Size([2, 9])
```
5\.7 Reshape tensors
--------------------
### 5\.7\.1 With `chunk()`:
Let’s say this is an image tensor with the 3\-channels and 28x28 pixels
```
# ----- Reshape tensors -----
img <- torch$ones(3L, 28L, 28L) # Create the tensor of ones
print(img$size())
```
```
#> torch.Size([3, 28, 28])
```
On the first dimension `dim = 0L`, reshape the tensor:
```
img_chunks <- torch$chunk(img, chunks = 3L, dim = 0L)
print(length(img_chunks))
print(class(img_chunks))
```
```
#> [1] 3
#> [1] "list"
```
`img_chunks` is a `list` of three members.
The first chunk member:
```
# 1st chunk member
img_chunk <- img_chunks[[1]]
print(img_chunk$size())
print(img_chunk$sum()) # if the tensor had all ones, what is the sum?
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
The second chunk member:
```
# 2nd chunk member
img_chunk <- img_chunks[[2]]
print(img_chunk$size())
print(img_chunk$sum()) # if the tensor had all ones, what is the sum?
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
```
# 3rd chunk member
img_chunk <- img_chunks[[3]]
print(img_chunk$size())
print(img_chunk$sum()) # if the tensor had all ones, what is the sum?
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
#### 5\.7\.1\.1 Exercise
1. Create a tensor of shape 3x28x28 filled with values 0\.25 on the first channel
2. The second channel with 0\.5
3. The third chanel with 0\.75
4. Find the sum for ecah separate channel
5. Find the sum of all channels
### 5\.7\.2 With `index_select()`:
```
img <- torch$ones(3L, 28L, 28L) # Create the tensor of ones
img$size()
```
```
#> torch.Size([3, 28, 28])
```
This is the layer 1:
```
# index_select. get layer 1
indices = torch$tensor(c(0L))
img_layer_1 <- torch$index_select(img, dim = 0L, index = indices)
```
The size of the layer:
```
print(img_layer_1$size())
```
```
#> torch.Size([1, 28, 28])
```
The sum of all elements in that layer:
```
print(img_layer_1$sum())
```
```
#> tensor(784.)
```
This is the layer 2:
```
# index_select. get layer 2
indices = torch$tensor(c(1L))
img_layer_2 <- torch$index_select(img, dim = 0L, index = indices)
print(img_layer_2$size())
print(img_layer_2$sum())
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
This is the layer 3:
```
# index_select. get layer 3
indices = torch$tensor(c(2L))
img_layer_3 <- torch$index_select(img, dim = 0L, index = indices)
print(img_layer_3$size())
print(img_layer_3$sum())
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
### 5\.7\.1 With `chunk()`:
Let’s say this is an image tensor with the 3\-channels and 28x28 pixels
```
# ----- Reshape tensors -----
img <- torch$ones(3L, 28L, 28L) # Create the tensor of ones
print(img$size())
```
```
#> torch.Size([3, 28, 28])
```
On the first dimension `dim = 0L`, reshape the tensor:
```
img_chunks <- torch$chunk(img, chunks = 3L, dim = 0L)
print(length(img_chunks))
print(class(img_chunks))
```
```
#> [1] 3
#> [1] "list"
```
`img_chunks` is a `list` of three members.
The first chunk member:
```
# 1st chunk member
img_chunk <- img_chunks[[1]]
print(img_chunk$size())
print(img_chunk$sum()) # if the tensor had all ones, what is the sum?
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
The second chunk member:
```
# 2nd chunk member
img_chunk <- img_chunks[[2]]
print(img_chunk$size())
print(img_chunk$sum()) # if the tensor had all ones, what is the sum?
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
```
# 3rd chunk member
img_chunk <- img_chunks[[3]]
print(img_chunk$size())
print(img_chunk$sum()) # if the tensor had all ones, what is the sum?
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
#### 5\.7\.1\.1 Exercise
1. Create a tensor of shape 3x28x28 filled with values 0\.25 on the first channel
2. The second channel with 0\.5
3. The third chanel with 0\.75
4. Find the sum for ecah separate channel
5. Find the sum of all channels
#### 5\.7\.1\.1 Exercise
1. Create a tensor of shape 3x28x28 filled with values 0\.25 on the first channel
2. The second channel with 0\.5
3. The third chanel with 0\.75
4. Find the sum for ecah separate channel
5. Find the sum of all channels
### 5\.7\.2 With `index_select()`:
```
img <- torch$ones(3L, 28L, 28L) # Create the tensor of ones
img$size()
```
```
#> torch.Size([3, 28, 28])
```
This is the layer 1:
```
# index_select. get layer 1
indices = torch$tensor(c(0L))
img_layer_1 <- torch$index_select(img, dim = 0L, index = indices)
```
The size of the layer:
```
print(img_layer_1$size())
```
```
#> torch.Size([1, 28, 28])
```
The sum of all elements in that layer:
```
print(img_layer_1$sum())
```
```
#> tensor(784.)
```
This is the layer 2:
```
# index_select. get layer 2
indices = torch$tensor(c(1L))
img_layer_2 <- torch$index_select(img, dim = 0L, index = indices)
print(img_layer_2$size())
print(img_layer_2$sum())
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
This is the layer 3:
```
# index_select. get layer 3
indices = torch$tensor(c(2L))
img_layer_3 <- torch$index_select(img, dim = 0L, index = indices)
print(img_layer_3$size())
print(img_layer_3$sum())
```
```
#> torch.Size([1, 28, 28])
#> tensor(784.)
```
5\.8 Special tensors
--------------------
### 5\.8\.1 Identity matrix
```
# identity matrix
eye = torch$eye(3L) # Create an identity 3x3 tensor
print(eye)
```
```
#> tensor([[1., 0., 0.],
#> [0., 1., 0.],
#> [0., 0., 1.]])
```
```
# a 5x5 identity or unit matrix
torch$eye(5L)
```
```
#> tensor([[1., 0., 0., 0., 0.],
#> [0., 1., 0., 0., 0.],
#> [0., 0., 1., 0., 0.],
#> [0., 0., 0., 1., 0.],
#> [0., 0., 0., 0., 1.]])
```
### 5\.8\.2 Ones
```
(v = torch$ones(10L)) # A tensor of size 10 containing all ones
# reshape
(v = torch$ones(2L, 1L, 2L, 1L)) # Size 2x1x2x1, a 4D tensor
```
```
#> tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
#> tensor([[[[1.],
#> [1.]]],
#>
#>
#> [[[1.],
#> [1.]]]])
```
The *matrix of ones* is also called \``unitary matrix`. This is a `4x4` unitary matrix.
```
torch$ones(c(4L, 4L))
```
```
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
```
```
# eye tensor
eye = torch$eye(3L)
print(eye)
# like eye tensor
v = torch$ones_like(eye) # A tensor with same shape as eye. Fill it with 1.
v
```
```
#> tensor([[1., 0., 0.],
#> [0., 1., 0.],
#> [0., 0., 1.]])
#> tensor([[1., 1., 1.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
### 5\.8\.3 Zeros
```
(z = torch$zeros(10L)) # A tensor of size 10 containing all zeros
```
```
#> tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```
```
# matrix of zeros
torch$zeros(c(4L, 4L))
```
```
#> tensor([[0., 0., 0., 0.],
#> [0., 0., 0., 0.],
#> [0., 0., 0., 0.],
#> [0., 0., 0., 0.]])
```
```
# a 3D tensor of zeros
torch$zeros(c(3L, 4L, 2L))
```
```
#> tensor([[[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]],
#>
#> [[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]],
#>
#> [[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]]])
```
### 5\.8\.4 Diagonal operations
Given the 1D tensor
```
a <- torch$tensor(c(1L, 2L, 3L))
a
```
```
#> tensor([1, 2, 3])
```
#### 5\.8\.4\.1 Diagonal matrix
We want to fill the main diagonal with the vector:
```
torch$diag(a)
```
```
#> tensor([[1, 0, 0],
#> [0, 2, 0],
#> [0, 0, 3]])
```
What about filling the diagonal above the main:
```
torch$diag(a, 1L)
```
```
#> tensor([[0, 1, 0, 0],
#> [0, 0, 2, 0],
#> [0, 0, 0, 3],
#> [0, 0, 0, 0]])
```
Or the diagonal below the main:
```
torch$diag(a, -1L)
```
```
#> tensor([[0, 0, 0, 0],
#> [1, 0, 0, 0],
#> [0, 2, 0, 0],
#> [0, 0, 3, 0]])
```
### 5\.8\.1 Identity matrix
```
# identity matrix
eye = torch$eye(3L) # Create an identity 3x3 tensor
print(eye)
```
```
#> tensor([[1., 0., 0.],
#> [0., 1., 0.],
#> [0., 0., 1.]])
```
```
# a 5x5 identity or unit matrix
torch$eye(5L)
```
```
#> tensor([[1., 0., 0., 0., 0.],
#> [0., 1., 0., 0., 0.],
#> [0., 0., 1., 0., 0.],
#> [0., 0., 0., 1., 0.],
#> [0., 0., 0., 0., 1.]])
```
### 5\.8\.2 Ones
```
(v = torch$ones(10L)) # A tensor of size 10 containing all ones
# reshape
(v = torch$ones(2L, 1L, 2L, 1L)) # Size 2x1x2x1, a 4D tensor
```
```
#> tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
#> tensor([[[[1.],
#> [1.]]],
#>
#>
#> [[[1.],
#> [1.]]]])
```
The *matrix of ones* is also called \``unitary matrix`. This is a `4x4` unitary matrix.
```
torch$ones(c(4L, 4L))
```
```
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
```
```
# eye tensor
eye = torch$eye(3L)
print(eye)
# like eye tensor
v = torch$ones_like(eye) # A tensor with same shape as eye. Fill it with 1.
v
```
```
#> tensor([[1., 0., 0.],
#> [0., 1., 0.],
#> [0., 0., 1.]])
#> tensor([[1., 1., 1.],
#> [1., 1., 1.],
#> [1., 1., 1.]])
```
### 5\.8\.3 Zeros
```
(z = torch$zeros(10L)) # A tensor of size 10 containing all zeros
```
```
#> tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```
```
# matrix of zeros
torch$zeros(c(4L, 4L))
```
```
#> tensor([[0., 0., 0., 0.],
#> [0., 0., 0., 0.],
#> [0., 0., 0., 0.],
#> [0., 0., 0., 0.]])
```
```
# a 3D tensor of zeros
torch$zeros(c(3L, 4L, 2L))
```
```
#> tensor([[[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]],
#>
#> [[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]],
#>
#> [[0., 0.],
#> [0., 0.],
#> [0., 0.],
#> [0., 0.]]])
```
### 5\.8\.4 Diagonal operations
Given the 1D tensor
```
a <- torch$tensor(c(1L, 2L, 3L))
a
```
```
#> tensor([1, 2, 3])
```
#### 5\.8\.4\.1 Diagonal matrix
We want to fill the main diagonal with the vector:
```
torch$diag(a)
```
```
#> tensor([[1, 0, 0],
#> [0, 2, 0],
#> [0, 0, 3]])
```
What about filling the diagonal above the main:
```
torch$diag(a, 1L)
```
```
#> tensor([[0, 1, 0, 0],
#> [0, 0, 2, 0],
#> [0, 0, 0, 3],
#> [0, 0, 0, 0]])
```
Or the diagonal below the main:
```
torch$diag(a, -1L)
```
```
#> tensor([[0, 0, 0, 0],
#> [1, 0, 0, 0],
#> [0, 2, 0, 0],
#> [0, 0, 3, 0]])
```
#### 5\.8\.4\.1 Diagonal matrix
We want to fill the main diagonal with the vector:
```
torch$diag(a)
```
```
#> tensor([[1, 0, 0],
#> [0, 2, 0],
#> [0, 0, 3]])
```
What about filling the diagonal above the main:
```
torch$diag(a, 1L)
```
```
#> tensor([[0, 1, 0, 0],
#> [0, 0, 2, 0],
#> [0, 0, 0, 3],
#> [0, 0, 0, 0]])
```
Or the diagonal below the main:
```
torch$diag(a, -1L)
```
```
#> tensor([[0, 0, 0, 0],
#> [1, 0, 0, 0],
#> [0, 2, 0, 0],
#> [0, 0, 3, 0]])
```
5\.9 Access to tensor elements
------------------------------
```
# replace an element at position 0, 0
(new_tensor = torch$Tensor(list(list(1, 2), list(3, 4))))
```
```
#> tensor([[1., 2.],
#> [3., 4.]])
```
Print element at position `1,1`:
```
print(new_tensor[1L, 1L])
```
```
#> tensor(1.)
```
Fill element at position `1,1` with 5:
```
new_tensor[1L, 1L]$fill_(5)
```
```
#> tensor(5.)
```
Show the modified tensor:
```
print(new_tensor) # tensor([[ 5., 2.],[ 3., 4.]])
```
```
#> tensor([[5., 2.],
#> [3., 4.]])
```
Access an element at position `1, 0`:
```
print(new_tensor[2L, 1L]) # tensor([ 3.])
print(new_tensor[2L, 1L]$item()) # 3.
```
```
#> tensor(3.)
#> [1] 3
```
### 5\.9\.1 Indices to tensor elements
On this tensor:
```
x = torch$randn(3L, 4L)
print(x)
```
```
#> tensor([[ 0.7076, 0.0816, -0.0431, 2.0698],
#> [ 0.6320, 0.5760, 0.1177, -1.9255],
#> [ 0.1964, -0.1771, -2.2976, -0.1239]])
```
Select indices, `dim=0`:
```
indices = torch$tensor(list(0L, 2L))
torch$index_select(x, 0L, indices)
```
```
#> tensor([[ 0.7076, 0.0816, -0.0431, 2.0698],
#> [ 0.1964, -0.1771, -2.2976, -0.1239]])
```
Select indices, `dim=1`:
```
torch$index_select(x, 1L, indices)
```
```
#> tensor([[ 0.7076, -0.0431],
#> [ 0.6320, 0.1177],
#> [ 0.1964, -2.2976]])
```
### 5\.9\.2 Using the `take` function
```
# Take by indices
src = torch$tensor(list(list(4, 3, 5),
list(6, 7, 8)) )
print(src)
print( torch$take(src, torch$tensor(list(0L, 2L, 5L))) )
```
```
#> tensor([[4., 3., 5.],
#> [6., 7., 8.]])
#> tensor([4., 5., 8.])
```
### 5\.9\.1 Indices to tensor elements
On this tensor:
```
x = torch$randn(3L, 4L)
print(x)
```
```
#> tensor([[ 0.7076, 0.0816, -0.0431, 2.0698],
#> [ 0.6320, 0.5760, 0.1177, -1.9255],
#> [ 0.1964, -0.1771, -2.2976, -0.1239]])
```
Select indices, `dim=0`:
```
indices = torch$tensor(list(0L, 2L))
torch$index_select(x, 0L, indices)
```
```
#> tensor([[ 0.7076, 0.0816, -0.0431, 2.0698],
#> [ 0.1964, -0.1771, -2.2976, -0.1239]])
```
Select indices, `dim=1`:
```
torch$index_select(x, 1L, indices)
```
```
#> tensor([[ 0.7076, -0.0431],
#> [ 0.6320, 0.1177],
#> [ 0.1964, -2.2976]])
```
### 5\.9\.2 Using the `take` function
```
# Take by indices
src = torch$tensor(list(list(4, 3, 5),
list(6, 7, 8)) )
print(src)
print( torch$take(src, torch$tensor(list(0L, 2L, 5L))) )
```
```
#> tensor([[4., 3., 5.],
#> [6., 7., 8.]])
#> tensor([4., 5., 8.])
```
5\.10 Other tensor operations
-----------------------------
### 5\.10\.1 Cross product
```
m1 = torch$ones(3L, 5L)
m2 = torch$ones(3L, 5L)
v1 = torch$ones(3L)
# Cross product
# Size 3x5
(r = torch$cross(m1, m2))
```
```
#> tensor([[0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0.]])
```
### 5\.10\.2 Dot product
```
# Dot product of 2 tensors
# Dot product of 2 tensors
p <- torch$Tensor(list(4L, 2L))
q <- torch$Tensor(list(3L, 1L))
(r = torch$dot(p, q)) # 14
#> tensor(14.)
(r <- p %.*% q) # 14
#> tensor(14.)
```
### 5\.10\.1 Cross product
```
m1 = torch$ones(3L, 5L)
m2 = torch$ones(3L, 5L)
v1 = torch$ones(3L)
# Cross product
# Size 3x5
(r = torch$cross(m1, m2))
```
```
#> tensor([[0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0.],
#> [0., 0., 0., 0., 0.]])
```
### 5\.10\.2 Dot product
```
# Dot product of 2 tensors
# Dot product of 2 tensors
p <- torch$Tensor(list(4L, 2L))
q <- torch$Tensor(list(3L, 1L))
(r = torch$dot(p, q)) # 14
#> tensor(14.)
(r <- p %.*% q) # 14
#> tensor(14.)
```
5\.11 Logical operations
------------------------
```
m0 = torch$zeros(3L, 5L)
m1 = torch$ones(3L, 5L)
m2 = torch$eye(3L, 5L)
print(m1 == m0)
#> tensor([[False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False]])
```
```
print(m1 != m1)
#> tensor([[False, False, False, False, False],
#> [False, False, False, False, False],
#> [False, False, False, False, False]])
```
```
print(m2 == m2)
#> tensor([[True, True, True, True, True],
#> [True, True, True, True, True],
#> [True, True, True, True, True]])
```
```
# AND
m1 & m1
#> tensor([[1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1]], dtype=torch.uint8)
```
```
# OR
m0 | m2
#> tensor([[1, 0, 0, 0, 0],
#> [0, 1, 0, 0, 0],
#> [0, 0, 1, 0, 0]], dtype=torch.uint8)
```
```
# OR
m1 | m2
#> tensor([[1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1],
#> [1, 1, 1, 1, 1]], dtype=torch.uint8)
```
### 5\.11\.1 Extract a unique logical result
With `all`:
```
# tensor is less than
A <- torch$ones(60000L, 1L, 28L, 28L)
C <- A * 0.5
# is C < A
all(torch$lt(C, A))
#> tensor(1, dtype=torch.uint8)
all(C < A)
#> tensor(1, dtype=torch.uint8)
# is A < C
all(A < C)
#> tensor(0, dtype=torch.uint8)
```
With function `all_boolean`:
```
all_boolean <- function(x) {
# convert tensor of 1s and 0s to a unique boolean
as.logical(torch$all(x)$numpy())
}
# is C < A
all_boolean(torch$lt(C, A))
#> [1] TRUE
all_boolean(C < A)
#> [1] TRUE
# is A < C
all_boolean(A < C)
#> [1] FALSE
```
### 5\.11\.2 Greater than (`gt`)
```
# tensor is greater than
A <- torch$ones(60000L, 1L, 28L, 28L)
D <- A * 2.0
all(torch$gt(D, A))
#> tensor(1, dtype=torch.uint8)
all(torch$gt(A, D))
#> tensor(0, dtype=torch.uint8)
```
### 5\.11\.3 Less than or equal (`le`)
```
# tensor is less than or equal
A1 <- torch$ones(60000L, 1L, 28L, 28L)
all(torch$le(A1, A1))
#> tensor(1, dtype=torch.uint8)
all(A1 <= A1)
#> tensor(1, dtype=torch.uint8)
# tensor is greater than or equal
A0 <- torch$zeros(60000L, 1L, 28L, 28L)
all(torch$ge(A0, A0))
#> tensor(1, dtype=torch.uint8)
all(A0 >= A0)
#> tensor(1, dtype=torch.uint8)
all(A1 >= A0)
#> tensor(1, dtype=torch.uint8)
all(A1 <= A0)
#> tensor(0, dtype=torch.uint8)
```
### 5\.11\.4 Logical NOT (`!`)
```
all_true <- torch$BoolTensor(list(TRUE, TRUE, TRUE, TRUE))
all_true
#> tensor([True, True, True, True])
# logical NOT
not_all_true <- !all_true
not_all_true
#> tensor([False, False, False, False])
```
```
diag <- torch$eye(5L)
diag
#> tensor([[1., 0., 0., 0., 0.],
#> [0., 1., 0., 0., 0.],
#> [0., 0., 1., 0., 0.],
#> [0., 0., 0., 1., 0.],
#> [0., 0., 0., 0., 1.]])
# logical NOT
not_diag <- !diag
# convert to integer
not_diag$to(dtype=torch$uint8)
#> tensor([[0, 1, 1, 1, 1],
#> [1, 0, 1, 1, 1],
#> [1, 1, 0, 1, 1],
#> [1, 1, 1, 0, 1],
#> [1, 1, 1, 1, 0]], dtype=torch.uint8)
```
### 5\.11\.1 Extract a unique logical result
With `all`:
```
# tensor is less than
A <- torch$ones(60000L, 1L, 28L, 28L)
C <- A * 0.5
# is C < A
all(torch$lt(C, A))
#> tensor(1, dtype=torch.uint8)
all(C < A)
#> tensor(1, dtype=torch.uint8)
# is A < C
all(A < C)
#> tensor(0, dtype=torch.uint8)
```
With function `all_boolean`:
```
all_boolean <- function(x) {
# convert tensor of 1s and 0s to a unique boolean
as.logical(torch$all(x)$numpy())
}
# is C < A
all_boolean(torch$lt(C, A))
#> [1] TRUE
all_boolean(C < A)
#> [1] TRUE
# is A < C
all_boolean(A < C)
#> [1] FALSE
```
### 5\.11\.2 Greater than (`gt`)
```
# tensor is greater than
A <- torch$ones(60000L, 1L, 28L, 28L)
D <- A * 2.0
all(torch$gt(D, A))
#> tensor(1, dtype=torch.uint8)
all(torch$gt(A, D))
#> tensor(0, dtype=torch.uint8)
```
### 5\.11\.3 Less than or equal (`le`)
```
# tensor is less than or equal
A1 <- torch$ones(60000L, 1L, 28L, 28L)
all(torch$le(A1, A1))
#> tensor(1, dtype=torch.uint8)
all(A1 <= A1)
#> tensor(1, dtype=torch.uint8)
# tensor is greater than or equal
A0 <- torch$zeros(60000L, 1L, 28L, 28L)
all(torch$ge(A0, A0))
#> tensor(1, dtype=torch.uint8)
all(A0 >= A0)
#> tensor(1, dtype=torch.uint8)
all(A1 >= A0)
#> tensor(1, dtype=torch.uint8)
all(A1 <= A0)
#> tensor(0, dtype=torch.uint8)
```
### 5\.11\.4 Logical NOT (`!`)
```
all_true <- torch$BoolTensor(list(TRUE, TRUE, TRUE, TRUE))
all_true
#> tensor([True, True, True, True])
# logical NOT
not_all_true <- !all_true
not_all_true
#> tensor([False, False, False, False])
```
```
diag <- torch$eye(5L)
diag
#> tensor([[1., 0., 0., 0., 0.],
#> [0., 1., 0., 0., 0.],
#> [0., 0., 1., 0., 0.],
#> [0., 0., 0., 1., 0.],
#> [0., 0., 0., 0., 1.]])
# logical NOT
not_diag <- !diag
# convert to integer
not_diag$to(dtype=torch$uint8)
#> tensor([[0, 1, 1, 1, 1],
#> [1, 0, 1, 1, 1],
#> [1, 1, 0, 1, 1],
#> [1, 1, 1, 0, 1],
#> [1, 1, 1, 1, 0]], dtype=torch.uint8)
```
5\.12 Distributions
-------------------
Initialize a tensor randomized with a normal distribution with `mean=0`, `var=1`:
```
n <- torch$randn(3500L)
n
#> tensor([-0.2087, 0.6850, -0.8386, ..., 1.2029, -0.1329, -0.0998])
plot(n$numpy())
hist(n$numpy())
```
```
a <- torch$randn(8L, 5L, 6L)
# print(a)
print(a$size())
#> torch.Size([8, 5, 6])
plot(a$flatten()$numpy())
hist(a$flatten()$numpy())
```
### 5\.12\.1 Uniform matrix
```
library(rTorch)
# 13x15 matrix uniformly distributed between 0 and 1
mat0 <- torch$FloatTensor(13L, 15L)$uniform_(0L, 1L)
plot(mat0$flatten()$numpy())
hist(mat0$flatten()$numpy())
```
```
# a 30x50 matrix uniformly distributed between 0.1 and 0.2
mat1 <- torch$FloatTensor(30L, 50L)$uniform_(0.1, 0.2)
plot(mat1$flatten()$numpy())
hist(mat1$flatten()$numpy())
```
```
# a vector of 500 values uniformly distributed between 1 and 2
mat2 <- torch$FloatTensor(500L)$uniform_(1, 2)
plot(mat2$flatten()$numpy())
hist(mat2$flatten()$numpy())
```
### 5\.12\.2 Binomial distribution
```
Binomial <- torch$distributions$binomial$Binomial
m = Binomial(100, torch$tensor(list(0, .2, .8, 1)))
(x = m$sample())
#> tensor([ 0., 23., 78., 100.])
```
```
m = Binomial(torch$tensor(list(list(5.), list(10.))),
torch$tensor(list(0.5, 0.8)))
(x = m$sample())
#> tensor([[3., 4.],
#> [6., 8.]])
```
```
binom <- Binomial(100, torch$FloatTensor(5L, 10L))
print(binom)
#> Binomial(total_count: torch.Size([5, 10]), probs: torch.Size([5, 10]), logits: torch.Size([5, 10]))
```
```
print(binom$sample_n(100L)$shape)
#> torch.Size([100, 5, 10])
plot(binom$sample_n(100L)$flatten()$numpy())
hist(binom$sample_n(100L)$flatten()$numpy())
```
### 5\.12\.3 Exponential distribution
```
Exponential <- torch$distributions$exponential$Exponential
m = Exponential(torch$tensor(list(1.0)))
m
#> Exponential(rate: tensor([1.]))
m$sample() # Exponential distributed with rate=1
#> tensor([0.4171])
```
```
expo <- Exponential(rate=0.25)
expo_sample <- expo$sample_n(250L) # generate 250 samples
print(expo_sample$shape)
#> torch.Size([250])
plot(expo_sample$flatten()$numpy())
hist(expo_sample$flatten()$numpy())
```
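The mean of an exponential distribution is the reciprocal of its rate, \\(1/\\lambda \= 1/0.25 \= 4\\), so the sample mean should be near 4. A quick check on the samples above (a sketch; the value varies with the draw):
```
expo_sample$mean() # expected to be close to tensor(4.)
```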
### 5\.12\.4 Weibull distribution
```
Weibull <- torch$distributions$weibull$Weibull
m = Weibull(torch$tensor(list(1.0)), torch$tensor(list(1.0)))
m$sample() # sample from a Weibull distribution with scale=1, concentration=1
#> tensor([1.7026])
```
#### 5\.12\.4\.1 Constant `scale`
```
# constant scale
for (k in 1:10) {
wei <- Weibull(scale=100, concentration=k)
wei_sample <- wei$sample_n(500L)
# plot(wei_sample$flatten()$numpy())
hist(main=paste0("Scale=100; Concentration=", k),
wei_sample$flatten()$numpy())
}
```
#### 5\.12\.4\.2 Constant `concentration`
```
# constant concentration
for (s in seq(100, 1000, 100)) {
wei <- Weibull(scale=s, concentration=1)
wei_sample <- wei$sample_n(500L)
# plot(wei_sample$flatten()$numpy())
hist(main=paste0("Concentration=1; Scale=", s),
wei_sample$flatten()$numpy())
}
```
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/linearalgebra.html |
Chapter 6 Linear Algebra with Torch
===================================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
The following are basic operations of Linear Algebra using PyTorch.
```
library(rTorch)
```
6\.1 Scalars
------------
```
torch$scalar_tensor(2.78654)
torch$scalar_tensor(0L)
torch$scalar_tensor(1L)
torch$scalar_tensor(TRUE)
torch$scalar_tensor(FALSE)
```
```
#> tensor(2.7865)
#> tensor(0.)
#> tensor(1.)
#> tensor(1.)
#> tensor(0.)
```
6\.2 Vectors
------------
```
v <- c(0, 1, 2, 3, 4, 5)
torch$as_tensor(v)
```
```
#> tensor([0., 1., 2., 3., 4., 5.])
```
### 6\.2\.1 Vector to matrix
```
# row-vector
message("R matrix")
```
```
#> R matrix
```
```
(mr <- matrix(1:10, nrow=1))
message("as_tensor")
```
```
#> as_tensor
```
```
torch$as_tensor(mr)
message("shape_of_tensor")
```
```
#> shape_of_tensor
```
```
torch$as_tensor(mr)$shape
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 1 2 3 4 5 6 7 8 9 10
#> tensor([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]], dtype=torch.int32)
#> torch.Size([1, 10])
```
### 6\.2\.2 Matrix to tensor
```
# column-vector
message("R matrix, one column")
```
```
#> R matrix, one column
```
```
(mc <- matrix(1:10, ncol=1))
message("as_tensor")
```
```
#> as_tensor
```
```
torch$as_tensor(mc)
message("size of tensor")
```
```
#> size of tensor
```
```
torch$as_tensor(mc)$shape
```
```
#> [,1]
#> [1,] 1
#> [2,] 2
#> [3,] 3
#> [4,] 4
#> [5,] 5
#> [6,] 6
#> [7,] 7
#> [8,] 8
#> [9,] 9
#> [10,] 10
#> tensor([[ 1],
#> [ 2],
#> [ 3],
#> [ 4],
#> [ 5],
#> [ 6],
#> [ 7],
#> [ 8],
#> [ 9],
#> [10]], dtype=torch.int32)
#> torch.Size([10, 1])
```
6\.3 Matrices
-------------
```
message("R matrix")
```
```
#> R matrix
```
```
(m1 <- matrix(1:24, nrow = 3, byrow = TRUE))
message("as_tensor")
```
```
#> as_tensor
```
```
(t1 <- torch$as_tensor(m1))
message("shape")
```
```
#> shape
```
```
torch$as_tensor(m1)$shape
message("size")
```
```
#> size
```
```
torch$as_tensor(m1)$size()
message("dim")
```
```
#> dim
```
```
dim(torch$as_tensor(m1))
message("length")
```
```
#> length
```
```
length(torch$as_tensor(m1))
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] 1 2 3 4 5 6 7 8
#> [2,] 9 10 11 12 13 14 15 16
#> [3,] 17 18 19 20 21 22 23 24
#> tensor([[ 1, 2, 3, 4, 5, 6, 7, 8],
#> [ 9, 10, 11, 12, 13, 14, 15, 16],
#> [17, 18, 19, 20, 21, 22, 23, 24]], dtype=torch.int32)
#> torch.Size([3, 8])
#> torch.Size([3, 8])
#> [1] 3 8
#> [1] 24
```
```
message("R matrix")
```
```
#> R matrix
```
```
(m2 <- matrix(0:99, ncol = 10))
message("as_tensor")
```
```
#> as_tensor
```
```
(t2 <- torch$as_tensor(m2))
message("shape")
```
```
#> shape
```
```
t2$shape
message("dim")
```
```
#> dim
```
```
dim(torch$as_tensor(m2))
```
```
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#> [1,] 0 10 20 30 40 50 60 70 80 90
#> [2,] 1 11 21 31 41 51 61 71 81 91
#> [3,] 2 12 22 32 42 52 62 72 82 92
#> [4,] 3 13 23 33 43 53 63 73 83 93
#> [5,] 4 14 24 34 44 54 64 74 84 94
#> [6,] 5 15 25 35 45 55 65 75 85 95
#> [7,] 6 16 26 36 46 56 66 76 86 96
#> [8,] 7 17 27 37 47 57 67 77 87 97
#> [9,] 8 18 28 38 48 58 68 78 88 98
#> [10,] 9 19 29 39 49 59 69 79 89 99
#> tensor([[ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90],
#> [ 1, 11, 21, 31, 41, 51, 61, 71, 81, 91],
#> [ 2, 12, 22, 32, 42, 52, 62, 72, 82, 92],
#> [ 3, 13, 23, 33, 43, 53, 63, 73, 83, 93],
#> [ 4, 14, 24, 34, 44, 54, 64, 74, 84, 94],
#> [ 5, 15, 25, 35, 45, 55, 65, 75, 85, 95],
#> [ 6, 16, 26, 36, 46, 56, 66, 76, 86, 96],
#> [ 7, 17, 27, 37, 47, 57, 67, 77, 87, 97],
#> [ 8, 18, 28, 38, 48, 58, 68, 78, 88, 98],
#> [ 9, 19, 29, 39, 49, 59, 69, 79, 89, 99]], dtype=torch.int32)
#> torch.Size([10, 10])
#> [1] 10 10
```
```
m1[1, 1]
m2[1, 1]
```
```
#> [1] 1
#> [1] 0
```
```
t1[1, 1]
t2[1, 1]
```
```
#> tensor(1, dtype=torch.int32)
#> tensor(0, dtype=torch.int32)
```
6\.4 3D\+ tensors
-----------------
```
# an RGB color image has three axes: channels, height, width
(img <- torch$rand(3L, 28L, 28L))
img$shape
```
```
#> tensor([[[0.4349, 0.1164, 0.5637, ..., 0.7674, 0.0530, 0.5104],
#> [0.5074, 0.0026, 0.8199, ..., 0.1035, 0.9890, 0.0948],
#> [0.5082, 0.6629, 0.4485, ..., 0.2037, 0.5876, 0.7726],
#> ...,
#> [0.9531, 0.4397, 0.1301, ..., 0.9004, 0.7199, 0.6334],
#> [0.2234, 0.0349, 0.3215, ..., 0.9437, 0.9297, 0.9696],
#> [0.5090, 0.7271, 0.0736, ..., 0.3271, 0.0580, 0.7623]],
#>
#> [[0.0232, 0.7732, 0.9972, ..., 0.4132, 0.1901, 0.6690],
#> [0.3026, 0.6929, 0.1662, ..., 0.8764, 0.8435, 0.3876],
#> [0.6784, 0.5015, 0.4514, ..., 0.9874, 0.0386, 0.1774],
#> ...,
#> [0.3697, 0.0044, 0.4686, ..., 0.9114, 0.5276, 0.0438],
#> [0.3210, 0.0769, 0.4184, ..., 0.1150, 0.0206, 0.3720],
#> [0.6467, 0.1786, 0.5240, ..., 0.2346, 0.0390, 0.2670]],
#>
#> [[0.9525, 0.0805, 0.0763, ..., 0.5606, 0.2202, 0.5187],
#> [0.0708, 0.3832, 0.7780, ..., 0.6198, 0.0404, 0.4178],
#> [0.8492, 0.3753, 0.2217, ..., 0.4277, 0.1597, 0.9825],
#> ...,
#> [0.0025, 0.2161, 0.5639, ..., 0.8237, 0.4728, 0.0648],
#> [0.8162, 0.7106, 0.0972, ..., 0.4748, 0.0605, 0.7730],
#> [0.8349, 0.5473, 0.5700, ..., 0.7152, 0.1603, 0.5442]]])
#> torch.Size([3, 28, 28])
```
```
img[1, 1, 1]
img[3, 28, 28]
```
```
#> tensor(0.4349)
#> tensor(0.5442)
```
6\.5 Transpose of a matrix
--------------------------
```
(m3 <- matrix(1:25, ncol = 5))
# transpose
message("transpose")
```
```
#> transpose
```
```
tm3 <- t(m3)
tm3
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 6 11 16 21
#> [2,] 2 7 12 17 22
#> [3,] 3 8 13 18 23
#> [4,] 4 9 14 19 24
#> [5,] 5 10 15 20 25
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 2 3 4 5
#> [2,] 6 7 8 9 10
#> [3,] 11 12 13 14 15
#> [4,] 16 17 18 19 20
#> [5,] 21 22 23 24 25
```
```
message("as_tensor")
```
```
#> as_tensor
```
```
(t3 <- torch$as_tensor(m3))
message("transpose")
```
```
#> transpose
```
```
tt3 <- t3$transpose(dim0 = 0L, dim1 = 1L)
tt3
```
```
#> tensor([[ 1, 6, 11, 16, 21],
#> [ 2, 7, 12, 17, 22],
#> [ 3, 8, 13, 18, 23],
#> [ 4, 9, 14, 19, 24],
#> [ 5, 10, 15, 20, 25]], dtype=torch.int32)
#> tensor([[ 1, 2, 3, 4, 5],
#> [ 6, 7, 8, 9, 10],
#> [11, 12, 13, 14, 15],
#> [16, 17, 18, 19, 20],
#> [21, 22, 23, 24, 25]], dtype=torch.int32)
```
```
tm3 == tt3$numpy() # convert first the tensor to numpy
```
```
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] TRUE TRUE TRUE TRUE TRUE
#> [2,] TRUE TRUE TRUE TRUE TRUE
#> [3,] TRUE TRUE TRUE TRUE TRUE
#> [4,] TRUE TRUE TRUE TRUE TRUE
#> [5,] TRUE TRUE TRUE TRUE TRUE
```
6\.6 Vectors, special case of a matrix
--------------------------------------
```
message("R matrix")
```
```
#> R matrix
```
```
m2 <- matrix(0:99, ncol = 10)
message("as_tensor")
```
```
#> as_tensor
```
```
(t2 <- torch$as_tensor(m2))
# in R
message("select column of matrix")
```
```
#> select column of matrix
```
```
(v1 <- m2[, 1])
message("select row of matrix")
```
```
#> select row of matrix
```
```
(v2 <- m2[10, ])
```
```
#> tensor([[ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90],
#> [ 1, 11, 21, 31, 41, 51, 61, 71, 81, 91],
#> [ 2, 12, 22, 32, 42, 52, 62, 72, 82, 92],
#> [ 3, 13, 23, 33, 43, 53, 63, 73, 83, 93],
#> [ 4, 14, 24, 34, 44, 54, 64, 74, 84, 94],
#> [ 5, 15, 25, 35, 45, 55, 65, 75, 85, 95],
#> [ 6, 16, 26, 36, 46, 56, 66, 76, 86, 96],
#> [ 7, 17, 27, 37, 47, 57, 67, 77, 87, 97],
#> [ 8, 18, 28, 38, 48, 58, 68, 78, 88, 98],
#> [ 9, 19, 29, 39, 49, 59, 69, 79, 89, 99]], dtype=torch.int32)
#> [1] 0 1 2 3 4 5 6 7 8 9
#> [1] 9 19 29 39 49 59 69 79 89 99
```
```
# PyTorch
message()
```
```
#>
```
```
t2c <- t2[, 1]
t2r <- t2[10, ]
t2c
t2r
```
```
#> tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=torch.int32)
#> tensor([ 9, 19, 29, 39, 49, 59, 69, 79, 89, 99], dtype=torch.int32)
```
For 1D tensors, a vector and its transpose are equal.
```
tt2r <- t2r$transpose(dim0 = 0L, dim1 = 0L)
tt2r
```
```
#> tensor([ 9, 19, 29, 39, 49, 59, 69, 79, 89, 99], dtype=torch.int32)
```
```
# a tensor of booleans. is vector equal to its transposed?
t2r == tt2r
```
```
#> tensor([True, True, True, True, True, True, True, True, True, True])
```
6\.7 Tensor arithmetic
----------------------
```
message("x")
```
```
#> x
```
```
(x = torch$ones(5L, 4L))
message("y")
```
```
#> y
```
```
(y = torch$ones(5L, 4L))
message("x+y")
```
```
#> x+y
```
```
x + y
```
```
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
#> tensor([[2., 2., 2., 2.],
#> [2., 2., 2., 2.],
#> [2., 2., 2., 2.],
#> [2., 2., 2., 2.],
#> [2., 2., 2., 2.]])
```
\\\[A \+ B \= B \+ A\\]
```
x + y == y + x
```
```
#> tensor([[True, True, True, True],
#> [True, True, True, True],
#> [True, True, True, True],
#> [True, True, True, True],
#> [True, True, True, True]])
```
6\.8 Add a scalar to a tensor
-----------------------------
```
s <- 0.5 # scalar
x + s
```
```
#> tensor([[1.5000, 1.5000, 1.5000, 1.5000],
#> [1.5000, 1.5000, 1.5000, 1.5000],
#> [1.5000, 1.5000, 1.5000, 1.5000],
#> [1.5000, 1.5000, 1.5000, 1.5000],
#> [1.5000, 1.5000, 1.5000, 1.5000]])
```
```
# a scalar multiplying the sum of two tensors
s * (x + y)
```
```
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
```
6\.9 Multiplying tensors
------------------------
\\\[A \* B \= B \* A\\]
```
message("x")
```
```
#> x
```
```
(x = torch$ones(5L, 4L))
message("y")
```
```
#> y
```
```
(y = torch$ones(5L, 4L))
message("2x+4y")
```
```
#> 2x+4y
```
```
(z = 2 * x + 4 * y)
```
```
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
#> tensor([[1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.],
#> [1., 1., 1., 1.]])
#> tensor([[6., 6., 6., 6.],
#> [6., 6., 6., 6.],
#> [6., 6., 6., 6.],
#> [6., 6., 6., 6.],
#> [6., 6., 6., 6.]])
```
```
x * y == y * x
```
```
#> tensor([[True, True, True, True],
#> [True, True, True, True],
#> [True, True, True, True],
#> [True, True, True, True],
#> [True, True, True, True]])
```
6\.10 Dot product
-----------------
\\\[\\mathrm{dot}(a,b)\_{i,j,k,m} \= \\sum\_{l} a\_{i,j,l}\\, b\_{k,l,m}\\]
```
torch$dot(torch$tensor(c(2, 3)), torch$tensor(c(2, 1)))
```
```
#> tensor(7.)
```
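For the 1D case above this is just the inner product of the two vectors, which we can verify by hand:
\\\[(2)(2) \+ (3)(1) \= 4 \+ 3 \= 7\\]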
### 6\.10\.1 2D array using Python
```
import numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([[1, 2], [3, 4]])
print(a)
```
```
#> [[1 2]
#> [3 4]]
```
```
print(b)
```
```
#> [[1 2]
#> [3 4]]
```
```
np.dot(a, b)
```
```
#> array([[ 7, 10],
#> [15, 22]])
```
### 6\.10\.2 2D array using R
```
a <- np$array(list(list(1, 2), list(3, 4)))
a
b <- np$array(list(list(1, 2), list(3, 4)))
b
np$dot(a, b)
```
```
#> [,1] [,2]
#> [1,] 1 2
#> [2,] 3 4
#> [,1] [,2]
#> [1,] 1 2
#> [2,] 3 4
#> [,1] [,2]
#> [1,] 7 10
#> [2,] 15 22
```
`torch.dot()` expects both \\(a\\) and \\(b\\) to be **1D** tensors and computes their inner product; with 2D tensors it raises an error, as we show next.
```
at <- torch$as_tensor(a)
bt <- torch$as_tensor(b)
# torch$dot(at, bt) <- RuntimeError: dot: Expected 1-D argument self, but got 2-D
# at %.*% bt
```
If we perform the same dot product operation in Python, we get the same error:
```
import torch
import numpy as np
a = np.array([[1, 2], [3, 4]])
a
```
```
#> array([[1, 2],
#> [3, 4]])
```
```
b = np.array([[1, 2], [3, 4]])
b
```
```
#> array([[1, 2],
#> [3, 4]])
```
```
np.dot(a, b)
```
```
#> array([[ 7, 10],
#> [15, 22]])
```
```
at = torch.as_tensor(a)
bt = torch.as_tensor(b)
at
```
```
#> tensor([[1, 2],
#> [3, 4]])
```
```
bt
```
```
#> tensor([[1, 2],
#> [3, 4]])
```
```
torch.dot(at, bt)
```
```
#> Error in py_call_impl(callable, dots$args, dots$keywords): RuntimeError: 1D tensors expected, got 2D, 2D tensors at /opt/conda/conda-bld/pytorch_1595629401553/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:83
#>
#> Detailed traceback:
#> File "<string>", line 1, in <module>
```
```
a <- torch$Tensor(list(list(1, 2), list(3, 4)))
b <- torch$Tensor(c(c(1, 2), c(3, 4)))
c <- torch$Tensor(list(list(11, 12), list(13, 14)))
a
b
torch$dot(a, b)
```
```
#> Error in py_call_impl(callable, dots$args, dots$keywords): RuntimeError: 1D tensors expected, got 2D, 1D tensors at /opt/conda/conda-bld/pytorch_1595629401553/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:83
```
```
# this is another way of performing dot product in PyTorch
# a$dot(a)
```
```
#> tensor([[1., 2.],
#> [3., 4.]])
#> tensor([1., 2., 3., 4.])
```
```
o1 <- torch$ones(2L, 2L)
o2 <- torch$ones(2L, 2L)
o1
o2
torch$dot(o1, o2)
```
```
#> Error in py_call_impl(callable, dots$args, dots$keywords): RuntimeError: 1D tensors expected, got 2D, 2D tensors at /opt/conda/conda-bld/pytorch_1595629401553/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:83
```
```
o1$dot(o2)
```
```
#> Error in py_call_impl(callable, dots$args, dots$keywords): RuntimeError: 1D tensors expected, got 2D, 2D tensors at /opt/conda/conda-bld/pytorch_1595629401553/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:83
```
```
#> tensor([[1., 1.],
#> [1., 1.]])
#> tensor([[1., 1.],
#> [1., 1.]])
```
```
# 1D tensors work fine
r = torch$dot(torch$Tensor(list(4L, 2L, 4L)), torch$Tensor(list(3L, 4L, 1L)))
r
```
```
#> tensor(24.)
```
### 6\.10\.3 `mm` and `matmul` functions
So, if we cannot perform 2D tensor operations with the `dot` product, how do we manage?
```
## mm and matmul seem to address the dot product we are looking for in tensors
a = torch$randn(2L, 3L)
b = torch$randn(3L, 4L)
a$mm(b)
a$matmul(b)
```
```
#> tensor([[ 1.0735, 2.0763, -0.2199, 0.3611],
#> [-1.3501, 4.1254, -2.2058, 0.8386]])
#> tensor([[ 1.0735, 2.0763, -0.2199, 0.3611],
#> [-1.3501, 4.1254, -2.2058, 0.8386]])
```
Here is a good explanation: <https://stackoverflow.com/a/44525687/5270873>
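One practical difference worth noting: `mm` works strictly on 2D tensors, while `matmul` also broadcasts over leading batch dimensions. A minimal sketch (shapes only; the values are random):
```
a3 <- torch$randn(10L, 2L, 3L) # a batch of ten 2x3 matrices
b2 <- torch$randn(3L, 4L)
torch$matmul(a3, b2)$shape # torch.Size([10, 2, 4]); torch$mm would raise an error here
```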
Let’s now verify the transpose\-of\-a\-product property of matrices:
\\\[(A B)^T \= B^T A^T\\]
```
abt <- torch$mm(a, b)$transpose(dim0=0L, dim1=1L)
abt
```
```
#> tensor([[ 1.0735, -1.3501],
#> [ 2.0763, 4.1254],
#> [-0.2199, -2.2058],
#> [ 0.3611, 0.8386]])
```
```
at <- a$transpose(dim0=0L, dim1=1L)
bt <- b$transpose(dim0=0L, dim1=1L)
btat <- torch$matmul(bt, at)
btat
```
```
#> tensor([[ 1.0735, -1.3501],
#> [ 2.0763, 4.1254],
#> [-0.2199, -2.2058],
#> [ 0.3611, 0.8386]])
```
And we can check that the results are nearly the same with `allclose()`:
```
# tolerance
torch$allclose(abt, btat, rtol=0.0001)
```
```
#> [1] TRUE
```
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/creating-pytorch-classes.html |
Chapter 7 Creating PyTorch classes
==================================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
7\.1 Build a PyTorch model class
--------------------------------
PyTorch classes cannot yet be instantiated directly from `R`. We need an intermediate step to create a class. For this, we use `reticulate` functions like `py_run_string()`, which run the class implementation written in `Python` code; we then assign the resulting class to an R object.
### 7\.1\.1 Example 1: One layer NN
```
py_run_string("import torch")
main = py_run_string(
"
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer = torch.nn.Linear(1, 1)
def forward(self, x):
x = self.layer(x)
return x
")
# build a linear regression model
net <- main$Net()
```
The R object `net` is now an instance of the PyTorch class `Net`.
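Because the instance is callable through `reticulate`, we can already run a forward pass from R. A minimal sketch, with a made\-up input of two samples and one feature:
```
x <- torch$ones(2L, 1L) # two samples, one feature
net(x) # forward pass through the single linear layer; returns a 2x1 tensor
```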
### 7\.1\.2 Example 2: Logistic Regression
```
main <- py_run_string(
"
import torch.nn as nn
class LogisticRegressionModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(LogisticRegressionModel, self).__init__()
self.linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = self.linear(x)
return out
")
# assign the logistic regression model class
LogisticRegressionModel <- main$LogisticRegressionModel
```
The R object `LogisticRegressionModel` now holds the PyTorch class `LogisticRegressionModel`, ready to be instantiated.
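The class is then instantiated like any R function, passing the constructor arguments. A minimal sketch with hypothetical dimensions (2 inputs, 1 output):
```
logit_model <- LogisticRegressionModel(2L, 1L) # input_dim = 2, output_dim = 1
x <- torch$randn(5L, 2L) # 5 samples, 2 features
logit_model(x) # forward pass; returns a 5x1 tensor
```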
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/example-1-a-classification-problem.html |
Chapter 8 Example 1: A classification problem
=============================================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
8\.1 Code in Python
-------------------
I will combine R and Python code here just to show how easy it is to integrate R and Python. The first thing we have to do is load the package `rTorch`. We do that in a chunk:
```
library(rTorch)
```
Then we proceed to copy the standard Python code, each piece in its own `Python` chunk. This is a very nice example that I found on the web. It illustrates the classic challenge of classification.
When `rTorch` is loaded, a number of Python libraries are also loaded, which enables the immediate use of numpy, torch, and matplotlib.
```
# Logistic Regression
# https://m-alcu.github.io/blog/2018/02/10/logit-pytorch/
import numpy as np
import torch
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt
```
The next thing we do is set a seed to make the example reproducible, on my machine and yours.
```
np.random.seed(2048)
```
Then we generate some random samples.
```
N = 100
D = 2
X = np.random.randn(N, D) * 2
ctr = int(N/2)
# center the first N/2 points at (-2,-2)
X[:ctr,:] = X[:ctr,:] - 2 * np.ones((ctr, D))
# center the last N/2 points at (2, 2)
X[ctr:,:] = X[ctr:,:] + 2 * np.ones((ctr, D))
# labels: first N/2 are 0, last N/2 are 1
# mark the first half with 0 and the second half with 1
T = np.array([0] * ctr + [1] * ctr).reshape(100, 1)
```
And plot the original data for reference.
```
# plot the data. color the dots using T
plt.scatter(X[:,0], X[:,1], c=T.reshape(N), s=100, alpha=0.5)
plt.xlabel('X(1)')
plt.ylabel('X(2)')
```
What follows is the definition of the model as a neural network, and then the training of that model. We set up the model:
```
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.Linear(2, 1) # 2 in and 1 out
def forward(self, x):
y_pred = torch.sigmoid(self.linear(x))
return y_pred
# Our model
model = Model()
criterion = torch.nn.BCELoss(reduction='mean')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
Train the model:
```
x_data = Variable(torch.Tensor(X))
y_data = Variable(torch.Tensor(T))
# Training loop
for epoch in range(1000):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x_data)
# Compute and print loss
loss = criterion(y_pred, y_data)
# print(epoch, loss.data[0])
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
w = list(model.parameters())
w0 = w[0].data.numpy()
w1 = w[1].data.numpy()
```
Finally, we plot the results, tracing the line that separates the two classes, 0 and 1, both colored in the plot.
```
print("Final gradient descend:", w)
# plot the data and separating line
```
```
#> Final gradient descent: [Parameter containing:
#> tensor([[1.1277, 1.1242]], requires_grad=True), Parameter containing:
#> tensor([0.3226], requires_grad=True)]
```
```
plt.scatter(X[:,0], X[:,1], c=T.reshape(N), s=100, alpha=0.5)
x_axis = np.linspace(-6, 6, 100)
y_axis = -(w1[0] + x_axis * w0[0][0]) / w0[0][1]
line_up, = plt.plot(x_axis, y_axis,'r--', label='gradient descent')
plt.legend(handles=[line_up])
plt.xlabel('X(1)')
plt.ylabel('X(2)')
plt.show()
```
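The separating line is the decision boundary of the logistic model: the predicted probability is 0.5 exactly where the linear score is zero,
\\\[\\sigma(w\_1 x\_1 \+ w\_2 x\_2 \+ b) \= 0.5 \\iff w\_1 x\_1 \+ w\_2 x\_2 \+ b \= 0 \\iff x\_2 \= \-\\frac{b \+ w\_1 x\_1}{w\_2}\\]
which is what the line `y_axis = -(w1[0] + x_axis * w0[0][0]) / w0[0][1]` computes.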
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/mnistdigits.html |
Chapter 9 Example 2: MNIST handwritten digits
=============================================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
9\.1 Code in R
--------------
Source: [https://github.com/yunjey/pytorch\-tutorial/blob/master/tutorials/01\-basics/logistic\_regression/main.py](https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/01-basics/logistic_regression/main.py)
```
library(rTorch)
nn <- torch$nn
transforms <- torchvision$transforms
torch$set_default_dtype(torch$float)
```
### 9\.1\.1 Hyperparameters
```
# Hyper-parameters
input_size <- 784L
num_classes <- 10L
num_epochs <- 5L
batch_size <- 100L
learning_rate <- 0.001
```
### 9\.1\.2 Read datasets
```
# MNIST dataset (images and labels)
# IDX format
local_folder <- './datasets/raw_data'
train_dataset = torchvision$datasets$MNIST(root=local_folder,
train=TRUE,
transform=transforms$ToTensor(),
download=TRUE)
test_dataset = torchvision$datasets$MNIST(root=local_folder,
train=FALSE,
transform=transforms$ToTensor())
# Data loader (input pipeline). Make the datasets iterable
train_loader = torch$utils$data$DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=TRUE)
test_loader = torch$utils$data$DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=FALSE)
```
```
class(train_loader)
length(train_loader)
```
```
#> [1] "torch.utils.data.dataloader.DataLoader"
#> [2] "python.builtin.object"
#> [1] 2
```
### 9\.1\.3 Define the model
```
# Logistic regression model
model = nn$Linear(input_size, num_classes)
# Loss and optimizer
# nn.CrossEntropyLoss() computes softmax internally
criterion = nn$CrossEntropyLoss()
optimizer = torch$optim$SGD(model$parameters(), lr=learning_rate)
print(model)
```
```
#> Linear(in_features=784, out_features=10, bias=True)
```
### 9\.1\.4 Training
```
# Train the model
iter_train_loader <- iterate(train_loader)
total_step <- length(iter_train_loader)
```
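`iterate()` comes from `reticulate` and materializes the Python `DataLoader` into an R list of mini\-batches. A quick sanity check on the first batch (a sketch, assuming the loader defined above):
```
batch <- iter_train_loader[[1]] # first mini-batch: list(images, labels)
batch[[1]]$shape # images: torch.Size([100, 1, 28, 28])
batch[[2]]$shape # labels: torch.Size([100])
```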
```
for (epoch in 1:num_epochs) {
i <- 0
for (obj in iter_train_loader) {
images <- obj[[1]] # tensor torch.Size([100, 1, 28, 28])
labels <- obj[[2]] # tensor torch.Size([100]), labels from 0 to 9
# cat(i, "\t"); print(images$shape)
# Reshape images to (batch_size, input_size)
images <- images$reshape(-1L, 28L*28L)
# images <- torch$as_tensor(images$reshape(-1L, 28L*28L), dtype=torch$double)
# Forward pass
outputs <- model(images)
loss <- criterion(outputs, labels)
# Backward and optimize
optimizer$zero_grad()
loss$backward()
optimizer$step()
if ((i+1) %% 100 == 0) {
cat(sprintf('Epoch [%d/%d], Step [%d/%d], Loss: %f \n',
epoch, num_epochs, i+1, total_step, loss$item()))
}
i <- i + 1
}
}
```
```
#> Epoch [1/5], Step [100/600], Loss: 2.202640
#> Epoch [1/5], Step [200/600], Loss: 2.131556
#> Epoch [1/5], Step [300/600], Loss: 2.009567
#> Epoch [1/5], Step [400/600], Loss: 1.909900
#> Epoch [1/5], Step [500/600], Loss: 1.807800
#> Epoch [1/5], Step [600/600], Loss: 1.763934
#> Epoch [2/5], Step [100/600], Loss: 1.748977
#> Epoch [2/5], Step [200/600], Loss: 1.719241
#> Epoch [2/5], Step [300/600], Loss: 1.575805
#> Epoch [2/5], Step [400/600], Loss: 1.533629
#> Epoch [2/5], Step [500/600], Loss: 1.441434
#> Epoch [2/5], Step [600/600], Loss: 1.422432
#> Epoch [3/5], Step [100/600], Loss: 1.457393
#> Epoch [3/5], Step [200/600], Loss: 1.446077
#> Epoch [3/5], Step [300/600], Loss: 1.299167
#> Epoch [3/5], Step [400/600], Loss: 1.294534
#> Epoch [3/5], Step [500/600], Loss: 1.208139
#> Epoch [3/5], Step [600/600], Loss: 1.201451
#> Epoch [4/5], Step [100/600], Loss: 1.263761
#> Epoch [4/5], Step [200/600], Loss: 1.262581
#> Epoch [4/5], Step [300/600], Loss: 1.115774
#> Epoch [4/5], Step [400/600], Loss: 1.135691
#> Epoch [4/5], Step [500/600], Loss: 1.052254
#> Epoch [4/5], Step [600/600], Loss: 1.051521
#> Epoch [5/5], Step [100/600], Loss: 1.129794
#> Epoch [5/5], Step [200/600], Loss: 1.133942
#> Epoch [5/5], Step [300/600], Loss: 0.988441
#> Epoch [5/5], Step [400/600], Loss: 1.024993
#> Epoch [5/5], Step [500/600], Loss: 0.942753
#> Epoch [5/5], Step [600/600], Loss: 0.944552
```
### 9\.1\.5 Prediction
```
# Evaluate the model on the test set; no gradients are tracked
iter_test_loader <- iterate(test_loader)
with(torch$no_grad(), {
correct <- 0
total <- 0
for (obj in iter_test_loader) {
images <- obj[[1]] # tensor torch.Size([100, 1, 28, 28])
labels <- obj[[2]] # tensor torch.Size([100]), labels from 0 to 9
images = images$reshape(-1L, 28L*28L)
# images <- torch$as_tensor(images$reshape(-1L, 28L*28L), dtype=torch$double)
outputs = model(images)
.predicted = torch$max(outputs$data, 1L)
predicted <- .predicted[1L]
total = total + labels$size(0L)
correct = correct + sum((predicted$numpy() == labels$numpy()))
}
cat(sprintf('Accuracy of the model on the 10000 test images: %f %%', (100 * correct / total)))
})
```
```
#> Accuracy of the model on the 10000 test images: 83.080000 %
```
### 9\.1\.6 Save the model
```
# Save the model checkpoint
torch$save(model$state_dict(), 'model.ckpt')
```
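To restore the trained weights later, the standard PyTorch pattern is to rebuild the same architecture and load the state dict. A minimal sketch, assuming `model.ckpt` was saved as above:
```
model2 <- nn$Linear(input_size, num_classes) # same architecture as before
model2$load_state_dict(torch$load('model.ckpt'))
model2$eval() # evaluation mode for inference
```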
9\.2 Code in Python
-------------------
```
import torch
import torchvision
import torch.nn as nn
import torchvision.transforms as transforms
# Hyper-parameters
input_size = 784
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001
# MNIST dataset (images and labels)
# IDX format
local_folder = './datasets/raw_data'
train_dataset = torchvision.datasets.MNIST(root=local_folder,
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root=local_folder,
train=False,
transform=transforms.ToTensor())
# Data loader (input pipeline). Make the datasets iterable
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
```
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/linear-regression.html |
Chapter 10 Linear Regression
============================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
10\.1 Introduction
------------------
Source: [https://www.guru99\.com/pytorch\-tutorial.html](https://www.guru99.com/pytorch-tutorial.html)
```
library(rTorch)
nn <- torch$nn
Variable <- torch$autograd$Variable
invisible(torch$manual_seed(123))
```
10\.2 Generate the dataset
--------------------------
Before you start the training process, you need to know your data. You generate a random dataset to test the model: \\(y \= x^{3} \\sin(x) \+ 3x \+ 0\.8\\, \\mathrm{rand}(100)\\)
```
np$random$seed(123L)
x = np$random$rand(100L)
y = np$sin(x) * np$power(x, 3L) + 3L * x + np$random$rand(100L) * 0.8
plot(x, y)
```
10\.3 Convert arrays to tensors
-------------------------------
Before you start the training process, you need to convert the `numpy` arrays to Variables that are supported by Torch and autograd.
10\.4 `numpy` array to tensor
-----------------------------
Notice that before converting to a Torch tensor, we need first to convert the R numeric vector to a `numpy` array:
```
# convert numpy array to tensor in shape of input size
x <- r_to_py(x)
y <- r_to_py(y)
x = torch$from_numpy(x$reshape(-1L, 1L))$float()
y = torch$from_numpy(y$reshape(-1L, 1L))$float()
print(x)
```
```
#> tensor([[0.6965],
#> [0.2861],
#> [0.2269],
#> [0.5513],
#> [0.7195],
#> [0.4231],
#> ...,
#> [0.2409],
#> [0.3435]])
```
10\.5 Creating the network model
--------------------------------
Our network model is a simple Linear layer with an input and an output shape of one.
The printed network should look like this:
```
Net(
(hidden): Linear(in_features=1, out_features=1, bias=True)
)
```
```
py_run_string("import torch")
main = py_run_string(
"
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer = torch.nn.Linear(1, 1)
def forward(self, x):
x = self.layer(x)
return x
")
# build a Linear Regression model
net <- main$Net()
print(net)
```
```
#> Net(
#> (layer): Linear(in_features=1, out_features=1, bias=True)
#> )
```
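The `py_run_string()` detour is only needed because the model is defined as a Python class; as a minimal sketch under the same rTorch session, an equivalent single layer can also be built directly from R:
```
# an equivalent single linear layer created directly with rTorch
lin <- nn$Linear(1L, 1L)   # in_features = 1, out_features = 1
print(lin)
```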
10\.6 Optimizer and Loss
------------------------
Next, you should define the Optimizer and the Loss Function for our training process.
```
# Define Optimizer and Loss Function
optimizer <- torch$optim$SGD(net$parameters(), lr=0.2)
loss_func <- torch$nn$MSELoss()
print(optimizer)
print(loss_func)
```
```
#> SGD (
#> Parameter Group 0
#> dampening: 0
#> lr: 0.2
#> momentum: 0
#> nesterov: False
#> weight_decay: 0
#> )
#> MSELoss()
```
10\.7 Training
--------------
Now let’s start the training process. Over 250 epochs, you iterate over the data to find the best values for the model weights.
```
# x = x$type(torch$float) # make it a a FloatTensor
# y = y$type(torch$float)
# x <- torch$as_tensor(x, dtype = torch$float)
# y <- torch$as_tensor(y, dtype = torch$float)
inputs = Variable(x)
outputs = Variable(y)
# base plot
plot(x$data$numpy(), y$data$numpy(), col = "blue")
for (i in 1:250) {
prediction = net(inputs)
loss = loss_func(prediction, outputs)
optimizer$zero_grad()
loss$backward()
optimizer$step()
if (i %% 10 == 0) {
# plot and show learning process
# points(x$data$numpy(), y$data$numpy())
points(x$data$numpy(), prediction$data$numpy(), col="red")
# cat(i, loss$data$numpy(), "\n")
}
}
```
10\.8 Results
-------------
As you can see, you successfully performed regression with a neural network. On every iteration, the red line in the plot updates and changes its position to fit the data. But in this picture, you only see the final result.
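Although the intermediate plots are omitted here, you can also inspect the fitted parameters directly; a minimal sketch, assuming training has run:
```
# inspect the learned weight and bias of the single linear layer
print(net$layer$weight)
print(net$layer$bias)
```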
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/linear-regression-1.html |
Chapter 11 Linear Regression
============================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
11\.1 Rainfall prediction
-------------------------
```
library(rTorch)
```
Select the device: CPU or GPU
```
invisible(torch$manual_seed(0))
device = torch$device('cpu')
```
11\.2 Training data
-------------------
The training data can be represented using 2 matrices (inputs and targets), each with one row per observation, and one column per variable.
```
# Input (temp, rainfall, humidity)
inputs = np$array(list(list(73, 67, 43),
list(91, 88, 64),
list(87, 134, 58),
list(102, 43, 37),
list(69, 96, 70)), dtype='float32')
# Targets (apples, oranges)
targets = np$array(list(list(56, 70),
list(81, 101),
list(119, 133),
list(22, 37),
list(103, 119)), dtype='float32')
```
11\.3 Convert arrays to tensors
-------------------------------
Before we build a model, we need to convert inputs and targets to PyTorch tensors.
```
# Convert inputs and targets to tensors
inputs = torch$from_numpy(inputs)
targets = torch$from_numpy(targets)
print(inputs)
print(targets)
```
```
#> tensor([[ 73., 67., 43.],
#> [ 91., 88., 64.],
#> [ 87., 134., 58.],
#> [102., 43., 37.],
#> [ 69., 96., 70.]], dtype=torch.float64)
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]], dtype=torch.float64)
```
The weights and biases can also be represented as matrices, initialized with random values. The first row of \\(w\\) and the first element of \\(b\\) are used to predict the first target variable, i.e. yield for apples, and, similarly, the second for oranges.
```
# random numbers for weights and biases. Then convert to double()
torch$set_default_dtype(torch$double)
w = torch$randn(2L, 3L, requires_grad=TRUE) #$double()
b = torch$randn(2L, requires_grad=TRUE) #$double()
print(w)
print(b)
```
```
#> tensor([[ 1.5410, -0.2934, -2.1788],
#> [ 0.5684, -1.0845, -1.3986]], requires_grad=True)
#> tensor([0.4033, 0.8380], requires_grad=True)
```
11\.4 Build the model
---------------------
The model is simply a function that performs a matrix multiplication of the input \\(x\\) and the weights \\(w\\) (transposed), and adds the bias \\(b\\) (replicated for each observation).
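In matrix form, for an input matrix \\(x\\): \\(\\hat{y} \= x \\, w^{T} \+ b\\).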
```
model <- function(x) {
wt <- w$t()
return(torch$add(torch$mm(x, wt), b))
}
```
11\.5 Generate predictions
--------------------------
The matrix obtained by passing the input data to the model is a set of predictions for the target variables.
```
# Generate predictions
preds = model(inputs)
print(preds)
```
```
#> tensor([[ -0.4516, -90.4691],
#> [ -24.6303, -132.3828],
#> [ -31.2192, -176.1530],
#> [ 64.3523, -39.5645],
#> [ -73.9524, -161.9560]], grad_fn=<AddBackward0>)
```
```
# Compare with targets
print(targets)
```
```
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])
```
Because we’ve started with random weights and biases, the model does not do a very good job of predicting the target variables.
11\.6 Loss Function
-------------------
We can compare the predictions with the actual targets, using the following method:
* Calculate the difference between the two matrices (preds and targets).
* Square all elements of the difference matrix to remove negative values.
* Calculate the average of the elements in the resulting matrix.
The result is a single number, known as the mean squared error (MSE).
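Written as a formula, for predictions \\(\\hat{y}\_{i}\\), targets \\(y\_{i}\\), and \\(n\\) elements:
\\[
MSE \= \\frac{1}{n}\\sum \_{i\=1}^{n}(\\hat{y}\_{i}\-y\_{i})^{2}
\\]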
```
# MSE loss
mse = function(t1, t2) {
diff <- torch$sub(t1, t2)
mul <- torch$sum(torch$mul(diff, diff))
return(torch$div(mul, diff$numel()))
}
print(mse)
```
```
#> function(t1, t2) {
#> diff <- torch$sub(t1, t2)
#> mul <- torch$sum(torch$mul(diff, diff))
#> return(torch$div(mul, diff$numel()))
#> }
```
11\.7 Step by step process
--------------------------
### 11\.7\.1 Compute the losses
```
# Compute loss
loss = mse(preds, targets)
print(loss)
# expected loss: ~33060.81
```
```
#> tensor(33060.8053, grad_fn=<DivBackward0>)
```
The resulting number is called the **loss**, because it indicates how bad the model is at predicting the target variables. The lower the loss, the better the model.
### 11\.7\.2 Compute Gradients
With PyTorch, we can automatically compute the gradient or derivative of the loss w.r.t. the weights and biases, because they have `requires_grad` set to True.
```
# Compute gradients
loss$backward()
```
The gradients are stored in the .grad property of the respective tensors.
```
# Gradients for weights
print(w)
print(w$grad)
```
```
#> tensor([[ 1.5410, -0.2934, -2.1788],
#> [ 0.5684, -1.0845, -1.3986]], requires_grad=True)
#> tensor([[ -6938.4351, -9674.6757, -5744.0206],
#> [-17408.7861, -20595.9333, -12453.4702]])
```
```
# Gradients for bias
print(b)
print(b$grad)
```
```
#> tensor([0.4033, 0.8380], requires_grad=True)
#> tensor([ -89.3802, -212.1051])
```
A key insight from calculus is that the gradient indicates the rate of change of the loss, or the slope of the loss function w.r.t. the weights and biases.
* If a gradient element is positive:
+ increasing the element’s value slightly will increase the loss.
+ decreasing the element’s value slightly will decrease the loss.
* If a gradient element is negative,
+ increasing the element’s value slightly will decrease the loss.
+ decreasing the element’s value slightly will increase the loss.
The increase or decrease is proportional to the value of the gradient.
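This proportionality is exactly what the gradient descent update exploits: each parameter is moved against its gradient by a small step, \\(w \\leftarrow w \- \\eta \\, \\frac{\\partial L}{\\partial w}\\), where \\(\\eta\\) is the learning rate (\\(10^{\-5}\\) in the code below).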
### 11\.7\.3 Reset the gradients
Finally, we’ll reset the gradients to zero before moving forward, because PyTorch accumulates gradients.
```
# Reset the gradients
w$grad$zero_()
b$grad$zero_()
print(w$grad)
print(b$grad)
```
```
#> tensor([[0., 0., 0.],
#> [0., 0., 0.]])
#> tensor([0., 0.])
#> tensor([[0., 0., 0.],
#> [0., 0., 0.]])
#> tensor([0., 0.])
```
#### 11\.7\.3\.1 Adjust weights and biases
We’ll reduce the loss and improve our model using the gradient descent algorithm, which has the following steps:
1. Generate predictions
2. Calculate the loss
3. Compute gradients w.r.t the weights and biases
4. Adjust the weights by subtracting a small quantity proportional to the gradient
5. Reset the gradients to zero
```
# Generate predictions
preds = model(inputs)
print(preds)
```
```
#> tensor([[ -0.4516, -90.4691],
#> [ -24.6303, -132.3828],
#> [ -31.2192, -176.1530],
#> [ 64.3523, -39.5645],
#> [ -73.9524, -161.9560]], grad_fn=<AddBackward0>)
```
```
# Calculate the loss
loss = mse(preds, targets)
print(loss)
```
```
#> tensor(33060.8053, grad_fn=<DivBackward0>)
```
```
# Compute gradients
loss$backward()
print(w$grad)
print(b$grad)
```
```
#> tensor([[ -6938.4351, -9674.6757, -5744.0206],
#> [-17408.7861, -20595.9333, -12453.4702]])
#> tensor([ -89.3802, -212.1051])
```
```
# Adjust weights and reset gradients
with(torch$no_grad(), {
print(w); print(b) # requires_grad attribute remains
w$data <- torch$sub(w$data, torch$mul(w$grad$data, torch$scalar_tensor(1e-5)))
b$data <- torch$sub(b$data, torch$mul(b$grad$data, torch$scalar_tensor(1e-5)))
print(w$grad$data$zero_())
print(b$grad$data$zero_())
})
print(w)
print(b)
```
```
#> tensor([[ 1.5410, -0.2934, -2.1788],
#> [ 0.5684, -1.0845, -1.3986]], requires_grad=True)
#> tensor([0.4033, 0.8380], requires_grad=True)
#> tensor([[0., 0., 0.],
#> [0., 0., 0.]])
#> tensor([0., 0.])
#> tensor([[ 1.6104, -0.1967, -2.1213],
#> [ 0.7425, -0.8786, -1.2741]], requires_grad=True)
#> tensor([0.4042, 0.8401], requires_grad=True)
```
With the new weights and biases, the model should have a lower loss.
```
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
```
```
#> tensor(23432.4894, grad_fn=<DivBackward0>)
```
11\.8 All together
------------------
### 11\.8\.1 Training for multiple epochs
To reduce the loss further, we repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an **epoch**.
```
# Running all together
# Adjust weights and reset gradients
num_epochs <- 100
for (i in 1:num_epochs) {
preds = model(inputs)
loss = mse(preds, targets)
loss$backward()
with(torch$no_grad(), {
w$data <- torch$sub(w$data, torch$mul(w$grad, torch$scalar_tensor(1e-5)))
b$data <- torch$sub(b$data, torch$mul(b$grad, torch$scalar_tensor(1e-5)))
w$grad$zero_()
b$grad$zero_()
})
}
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
# predictions
preds
# Targets
targets
```
```
#> tensor(1258.0216, grad_fn=<DivBackward0>)
#> tensor([[ 69.2462, 80.2082],
#> [ 73.7183, 97.2052],
#> [118.5780, 124.9272],
#> [ 89.2282, 92.7052],
#> [ 47.4648, 80.7782]], grad_fn=<AddBackward0>)
#> tensor([[ 56., 70.],
#> [ 81., 101.],
#> [119., 133.],
#> [ 22., 37.],
#> [103., 119.]])
```
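The manual update inside `torch$no_grad()` is what `torch$optim$SGD` automates. As a minimal sketch, assuming the tensors `w`, `b` and the `model()`/`mse()` functions defined above, the same loop could be written as:
```
# the same training loop using torch's built-in SGD optimizer
optimizer <- torch$optim$SGD(list(w, b), lr = 1e-5)
for (i in 1:100) {
  loss <- mse(model(inputs), targets)
  optimizer$zero_grad()
  loss$backward()
  optimizer$step()
}
print(mse(model(inputs), targets))
```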
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/neural-networks.html |
Chapter 12 Neural Networks
==========================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
12\.1 rTorch and PyTorch
------------------------
We will compare four neural networks:
* a neural network written in `numpy`
* a neural network written in `r-base`
* a neural network written in `PyTorch`
* a neural network written in `rTorch`
12\.2 A neural network with `numpy`
-----------------------------------
We start by implementing the neural network in plain `numpy`:
```
library(rTorch)
```
```
# A simple neural network using NumPy
# Code in file tensor/two_layer_net_numpy.py
import time
import numpy as np
tic = time.process_time()
np.random.seed(123) # set a seed for reproducibility
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
# print(x.shape)
# print(y.shape)
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)
# print(w1.shape)
# print(w2.shape)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y
h = x.dot(w1)
# print(t, h.max())
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
# Compute and print loss
sq = np.square(y_pred - y)
loss = sq.sum()
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h_relu = grad_y_pred.dot(w2.T)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
# processing time
```
```
#> 0 28624200.800938517
#> 1 24402861.381040636
#> 2 23157437.29147552
#> 3 21617191.63397175
#> 4 18598190.361558598
#> 5 14198211.419692844
#> 6 9786244.45261814
#> 7 6233451.217340663
#> 8 3862647.267829599
#> 9 2412366.632764836
#> ...
#> 499 6.763183953445079e-05
```
```
toc = time.process_time()
print(toc - tic, "seconds")
```
```
#> 6.927609346000001 seconds
```
12\.3 A neural network with `r-base`
------------------------------------
It is the same algorithm as above in `numpy`, but written in base R.
```
library(tictoc)
tic()
set.seed(123)

N <- 64; D_in <- 1000; H <- 100; D_out <- 10

# Create random input and output data
x <- array(rnorm(N * D_in), dim = c(N, D_in))
y <- array(rnorm(N * D_out), dim = c(N, D_out))

# Randomly initialize weights
w1 <- array(rnorm(D_in * H), dim = c(D_in, H))
w2 <- array(rnorm(H * D_out), dim = c(H, D_out))

learning_rate <- 1e-6

for (t in seq(1, 500)) {
  # Forward pass: compute predicted y
  h <- x %*% w1
  h_relu <- pmax(h, 0)
  y_pred <- h_relu %*% w2

  # Compute and print loss
  sq <- (y_pred - y)^2
  loss <- sum(sq)
  cat(t, loss, "\n")

  # Backprop to compute gradients of w1 and w2 with respect to loss
  grad_y_pred <- 2.0 * (y_pred - y)
  grad_w2 <- t(h_relu) %*% grad_y_pred
  grad_h_relu <- grad_y_pred %*% t(w2)
  grad_h <- rlang::duplicate(grad_h_relu)  # deep copy, like numpy's grad_h_relu.copy()
  grad_h[h < 0] <- 0
  grad_w1 <- t(x) %*% grad_h

  # Update weights
  w1 <- w1 - learning_rate * grad_w1
  w2 <- w2 - learning_rate * grad_w2
}
toc()
```
```
#> 1 2.8e+07
#> 2 25505803
#> 3 29441299
#> 4 35797650
#> 5 39517126
#> 6 34884942
#> 7 23333535
#> 8 11927525
#> 9 5352787
#> 10 2496984
#> 11 1379780
#> 12 918213
#> 13 695760
#> 14 564974
#> 15 474479
#> 16 405370
#> 17 349747
#> 18 303724
#> 19 265075
#> 20 232325
#> 21 204394
#> 22 180414
#> 23 159752
#> 24 141895
#> 25 126374
#> 26 112820
#> 27 100959
#> 28 90536
#> 29 81352
#> 30 73244
#> 31 66058
#> 32 59675
#> 33 53993
#> 34 48921
#> 35 44388
#> 36 40328
#> 37 36687
#> 38 33414
#> 39 30469
#> 40 27816
#> 41 25419
#> 42 23251
#> 43 21288
#> 44 19508
#> 45 17893
#> 46 16426
#> 47 15092
#> 48 13877
#> 49 12769
#> 50 11758
#> 51 10835
#> 52 9991
#> 53 9218
#> 54 8510
#> 55 7862
#> 56 7267
#> 57 6719
#> 58 6217
#> 59 5754
#> 60 5329
#> 61 4938
#> 62 4577
#> 63 4245
#> 64 3938
#> 65 3655
#> 66 3394
#> 67 3153
#> 68 2930
#> 69 2724
#> 70 2533
#> 71 2357
#> 72 2193
#> 73 2042
#> 74 1902
#> 75 1772
#> 76 1651
#> 77 1539
#> 78 1435
#> 79 1338
#> 80 1249
#> 81 1165
#> 82 1088
#> 83 1016
#> 84 949
#> 85 886
#> 86 828
#> 87 774
#> 88 724
#> 89 677
#> 90 633
#> 91 592
#> 92 554
#> 93 519
#> 94 486
#> 95 455
#> 96 426
#> 97 399
#> 98 374
#> 99 350
#> 100 328
#> 101 308
#> 102 289
#> 103 271
#> 104 254
#> 105 238
#> 106 224
#> 107 210
#> 108 197
#> 109 185
#> 110 174
#> 111 163
#> 112 153
#> 113 144
#> 114 135
#> 115 127
#> 116 119
#> 117 112
#> 118 106
#> 119 99.2
#> 120 93.3
#> 121 87.8
#> 122 82.6
#> 123 77.7
#> 124 73.1
#> 125 68.8
#> 126 64.7
#> 127 60.9
#> 128 57.4
#> 129 54
#> 130 50.9
#> 131 47.9
#> 132 45.1
#> 133 42.5
#> 134 40.1
#> 135 37.8
#> 136 35.6
#> 137 33.5
#> 138 31.6
#> 139 29.8
#> 140 28.1
#> 141 26.5
#> 142 25
#> 143 23.6
#> 144 22.2
#> 145 21
#> 146 19.8
#> 147 18.7
#> 148 17.6
#> 149 16.6
#> 150 15.7
#> 151 14.8
#> 152 14
#> 153 13.2
#> 154 12.5
#> 155 11.8
#> 156 11.1
#> 157 10.5
#> 158 9.94
#> 159 9.39
#> 160 8.87
#> 161 8.38
#> 162 7.92
#> 163 7.49
#> 164 7.08
#> 165 6.69
#> 166 6.32
#> 167 5.98
#> 168 5.65
#> 169 5.35
#> 170 5.06
#> 171 4.78
#> 172 4.52
#> 173 4.28
#> 174 4.05
#> 175 3.83
#> 176 3.62
#> 177 3.43
#> 178 3.25
#> 179 3.07
#> 180 2.91
#> 181 2.75
#> 182 2.6
#> 183 2.47
#> 184 2.33
#> 185 2.21
#> 186 2.09
#> 187 1.98
#> 188 1.88
#> 189 1.78
#> 190 1.68
#> 191 1.6
#> 192 1.51
#> 193 1.43
#> 194 1.36
#> 195 1.29
#> 196 1.22
#> 197 1.15
#> 198 1.09
#> 199 1.04
#> 200 0.983
#> 201 0.932
#> 202 0.883
#> 203 0.837
#> 204 0.794
#> 205 0.753
#> 206 0.714
#> 207 0.677
#> 208 0.642
#> 209 0.609
#> 210 0.577
#> 211 0.548
#> 212 0.519
#> 213 0.493
#> 214 0.467
#> 215 0.443
#> 216 0.421
#> 217 0.399
#> 218 0.379
#> 219 0.359
#> 220 0.341
#> 221 0.324
#> 222 0.307
#> 223 0.292
#> 224 0.277
#> 225 0.263
#> 226 0.249
#> 227 0.237
#> 228 0.225
#> 229 0.213
#> 230 0.203
#> 231 0.192
#> 232 0.183
#> 233 0.173
#> 234 0.165
#> 235 0.156
#> 236 0.149
#> 237 0.141
#> 238 0.134
#> 239 0.127
#> 240 0.121
#> 241 0.115
#> 242 0.109
#> 243 0.104
#> 244 0.0985
#> 245 0.0936
#> 246 0.0889
#> 247 0.0845
#> 248 0.0803
#> 249 0.0763
#> 250 0.0725
#> 251 0.0689
#> 252 0.0655
#> 253 0.0623
#> 254 0.0592
#> 255 0.0563
#> 256 0.0535
#> 257 0.0508
#> 258 0.0483
#> 259 0.0459
#> 260 0.0437
#> 261 0.0415
#> 262 0.0395
#> 263 0.0375
#> 264 0.0357
#> 265 0.0339
#> 266 0.0323
#> 267 0.0307
#> 268 0.0292
#> 269 0.0278
#> 270 0.0264
#> 271 0.0251
#> 272 0.0239
#> 273 0.0227
#> 274 0.0216
#> 275 0.0206
#> 276 0.0196
#> 277 0.0186
#> 278 0.0177
#> 279 0.0168
#> 280 0.016
#> 281 0.0152
#> 282 0.0145
#> 283 0.0138
#> 284 0.0131
#> 285 0.0125
#> 286 0.0119
#> 287 0.0113
#> 288 0.0108
#> 289 0.0102
#> 290 0.00975
#> 291 0.00927
#> 292 0.00883
#> 293 0.0084
#> 294 0.008
#> 295 0.00761
#> 296 0.00724
#> 297 0.0069
#> 298 0.00656
#> 299 0.00625
#> 300 0.00595
#> 301 0.00566
#> 302 0.00539
#> 303 0.00513
#> 304 0.00489
#> 305 0.00465
#> 306 0.00443
#> 307 0.00422
#> 308 0.00401
#> 309 0.00382
#> 310 0.00364
#> 311 0.00347
#> 312 0.0033
#> 313 0.00314
#> 314 0.00299
#> 315 0.00285
#> 316 0.00271
#> 317 0.00259
#> 318 0.00246
#> 319 0.00234
#> 320 0.00223
#> 321 0.00213
#> 322 0.00203
#> 323 0.00193
#> 324 0.00184
#> 325 0.00175
#> 326 0.00167
#> 327 0.00159
#> 328 0.00151
#> 329 0.00144
#> 330 0.00137
#> 331 0.00131
#> 332 0.00125
#> 333 0.00119
#> 334 0.00113
#> 335 0.00108
#> 336 0.00103
#> 337 0.000979
#> 338 0.000932
#> 339 0.000888
#> 340 0.000846
#> 341 0.000807
#> 342 0.000768
#> 343 0.000732
#> 344 0.000698
#> 345 0.000665
#> 346 0.000634
#> 347 0.000604
#> 348 0.000575
#> 349 0.000548
#> 350 0.000523
#> 351 0.000498
#> 352 0.000475
#> 353 0.000452
#> 354 0.000431
#> 355 0.000411
#> 356 0.000392
#> 357 0.000373
#> 358 0.000356
#> 359 0.000339
#> 360 0.000323
#> 361 0.000308
#> 362 0.000294
#> 363 0.00028
#> 364 0.000267
#> 365 0.000254
#> 366 0.000243
#> 367 0.000231
#> 368 0.00022
#> 369 0.00021
#> 370 2e-04
#> 371 0.000191
#> 372 0.000182
#> 373 0.000174
#> 374 0.000165
#> 375 0.000158
#> 376 0.00015
#> 377 0.000143
#> 378 0.000137
#> 379 0.00013
#> 380 0.000124
#> 381 0.000119
#> 382 0.000113
#> 383 0.000108
#> 384 0.000103
#> 385 9.8e-05
#> 386 9.34e-05
#> 387 8.91e-05
#> 388 8.49e-05
#> 389 8.1e-05
#> 390 7.72e-05
#> 391 7.37e-05
#> 392 7.02e-05
#> 393 6.7e-05
#> 394 6.39e-05
#> 395 6.09e-05
#> 396 5.81e-05
#> 397 5.54e-05
#> 398 5.28e-05
#> 399 5.04e-05
#> 400 4.81e-05
#> 401 4.58e-05
#> 402 4.37e-05
#> 403 4.17e-05
#> 404 3.98e-05
#> 405 3.79e-05
#> 406 3.62e-05
#> 407 3.45e-05
#> 408 3.29e-05
#> 409 3.14e-05
#> 410 2.99e-05
#> 411 2.86e-05
#> 412 2.72e-05
#> 413 2.6e-05
#> 414 2.48e-05
#> 415 2.36e-05
#> 416 2.25e-05
#> 417 2.15e-05
#> 418 2.05e-05
#> 419 1.96e-05
#> 420 1.87e-05
#> 421 1.78e-05
#> 422 1.7e-05
#> 423 1.62e-05
#> 424 1.55e-05
#> 425 1.48e-05
#> 426 1.41e-05
#> 427 1.34e-05
#> 428 1.28e-05
#> 429 1.22e-05
#> 430 1.17e-05
#> 431 1.11e-05
#> 432 1.06e-05
#> 433 1.01e-05
#> 434 9.66e-06
#> 435 9.22e-06
#> 436 8.79e-06
#> 437 8.39e-06
#> 438 8e-06
#> 439 7.64e-06
#> 440 7.29e-06
#> 441 6.95e-06
#> 442 6.63e-06
#> 443 6.33e-06
#> 444 6.04e-06
#> 445 5.76e-06
#> 446 5.5e-06
#> 447 5.25e-06
#> 448 5.01e-06
#> 449 4.78e-06
#> 450 4.56e-06
#> 451 4.35e-06
#> 452 4.15e-06
#> 453 3.96e-06
#> 454 3.78e-06
#> 455 3.61e-06
#> 456 3.44e-06
#> 457 3.28e-06
#> 458 3.13e-06
#> 459 2.99e-06
#> 460 2.85e-06
#> 461 2.72e-06
#> 462 2.6e-06
#> 463 2.48e-06
#> 464 2.37e-06
#> 465 2.26e-06
#> 466 2.15e-06
#> 467 2.06e-06
#> 468 1.96e-06
#> 469 1.87e-06
#> 470 1.79e-06
#> 471 1.71e-06
#> 472 1.63e-06
#> 473 1.55e-06
#> 474 1.48e-06
#> 475 1.42e-06
#> 476 1.35e-06
#> 477 1.29e-06
#> 478 1.23e-06
#> 479 1.17e-06
#> 480 1.12e-06
#> 481 1.07e-06
#> 482 1.02e-06
#> 483 9.74e-07
#> 484 9.3e-07
#> 485 8.88e-07
#> 486 8.47e-07
#> 487 8.09e-07
#> 488 7.72e-07
#> 489 7.37e-07
#> 490 7.03e-07
#> 491 6.71e-07
#> 492 6.41e-07
#> 493 6.12e-07
#> 494 5.84e-07
#> 495 5.57e-07
#> 496 5.32e-07
#> 497 5.08e-07
#> 498 4.85e-07
#> 499 4.63e-07
#> 500 4.42e-07
#> 2.83 sec elapsed
```
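One detail worth noting in the base R port above: `numpy`’s `grad_h_relu.copy()` was translated as `rlang::duplicate(grad_h_relu)`, but base R’s copy-on-modify semantics mean a plain assignment would behave the same here, because the subsequent `grad_h[h < 0] <- 0` forces a copy anyway. A tiny demonstration of that semantic (my sketch, not from the original):

```
a <- matrix(1:4, 2, 2)
b <- a           # no copy happens yet; R copies lazily
b[1, 1] <- 0L    # modifying b triggers the copy
a[1, 1]          # still 1: a is untouched
```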
12\.4 A `PyTorch` neural network
--------------------------------
Here is the same example we used above, but written in PyTorch. Notice the following differences from the `numpy` code:
* we select the computation device, which could be `cpu` or `cuda` (a short sketch on checking GPU availability follows this list)
* when building or creating the tensors, we specify which device we want to use
* the tensors have `torch` methods and properties, for example `mm()`, `clamp()`, `sum()`, `clone()`, and `t()`
* also notice the use of some `torch` functions: `device()` and `randn()`
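As an aside, and only a sketch of mine rather than part of the original example: before uncommenting the `cuda` line you can ask PyTorch whether a GPU is actually visible. From R, with `rTorch` loaded, that check could look like this:

```
library(rTorch)
# fall back to the CPU when no CUDA device is reported
dev_name <- if (torch$cuda$is_available()) 'cuda' else 'cpu'
device   <- torch$device(dev_name)
device
```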
```
reticulate::use_condaenv("r-torch")
```
```
# Code in file tensor/two_layer_net_tensor.py
import torch
import time

ms = torch.manual_seed(0)
tic = time.process_time()

device = torch.device('cpu')
# device = torch.device('cuda')  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in, device=device)
y = torch.randn(N, D_out, device=device)

# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device)
w2 = torch.randn(H, D_out, device=device)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss; loss is a scalar, stored in a PyTorch Tensor
    # of shape (); we can get its value as a Python number with loss.item().
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
```
```
#> 0 29428664.0
#> 1 22739448.0
#> 2 20605260.0
#> 3 19520372.0
#> 4 17810224.0
#> 5 14999204.0
#> 6 11483334.0
#> 7 8096649.0
#> 8 5398717.5
#> 9 3521559.75
#> 10 2315861.5
#> 11 1570273.5
#> 12 1111700.375
#> 13 825062.8125
#> 14 639684.4375
#> 15 514220.625
#> 16 425155.3125
#> 17 358904.5625
#> 18 307636.71875
#> 19 266625.90625
#> 20 232998.625
#> 21 204887.296875
#> 22 181051.0625
#> 23 160643.0
#> 24 143036.09375
#> 25 127729.578125
#> 26 114360.25
#> 27 102621.0234375
#> 28 92276.9375
#> 29 83144.0859375
#> 30 75053.3984375
#> 31 67870.3984375
#> 32 61485.79296875
#> 33 55786.6328125
#> 34 50690.8515625
#> 35 46128.6328125
#> 36 42029.546875
#> 37 38341.875
#> 38 35017.33203125
#> 39 32016.68359375
#> 40 29303.43359375
#> 41 26847.1484375
#> 42 24620.376953125
#> 43 22599.46875
#> 44 20762.5625
#> 45 19090.986328125
#> 46 17568.359375
#> 47 16180.1083984375
#> 48 14911.99609375
#> 49 13753.8525390625
#> 50 12694.0205078125
#> 51 11723.640625
#> 52 10834.490234375
#> 53 10019.25390625
#> 54 9270.923828125
#> 55 8583.36328125
#> 56 7950.5625
#> 57 7368.46875
#> 58 6832.73779296875
#> 59 6339.20703125
#> 60 5884.1484375
#> 61 5464.44384765625
#> 62 5077.45849609375
#> 63 4719.9833984375
#> 64 4389.5400390625
#> 65 4084.009765625
#> 66 3801.313232421875
#> 67 3539.627197265625
#> 68 3297.266845703125
#> 69 3072.8017578125
#> 70 2864.869140625
#> 71 2672.025390625
#> 72 2493.096435546875
#> 73 2326.89697265625
#> 74 2172.523193359375
#> 75 2029.1279296875
#> 76 1895.768310546875
#> 77 1771.71435546875
#> 78 1656.3409423828125
#> 79 1548.9505615234375
#> 80 1448.9840087890625
#> 81 1355.846923828125
#> 82 1269.0556640625
#> 83 1188.1507568359375
#> 84 1112.7042236328125
#> 85 1042.3167724609375
#> 86 976.61328125
#> 87 915.2999267578125
#> 88 858.0404052734375
#> 89 804.5496826171875
#> 90 754.5780029296875
#> 91 707.8599243164062
#> 92 664.1988525390625
#> 93 623.3640747070312
#> 94 585.147216796875
#> 95 549.3995971679688
#> 96 515.9583740234375
#> 97 484.6272277832031
#> 98 455.28955078125
#> 99 427.81829833984375
#> 100 402.0847473144531
#> 101 377.9535827636719
#> 102 355.3477783203125
#> 103 334.1396179199219
#> 104 314.2633361816406
#> 105 295.61749267578125
#> 106 278.1217346191406
#> 107 261.7001953125
#> 108 246.2969512939453
#> 109 231.8272247314453
#> 110 218.24240112304688
#> 111 205.48812866210938
#> 112 193.5052490234375
#> 113 182.24417114257812
#> 114 171.66690063476562
#> 115 161.72601318359375
#> 116 152.3784942626953
#> 117 143.59078979492188
#> 118 135.32354736328125
#> 119 127.55582427978516
#> 120 120.24463653564453
#> 121 113.36481475830078
#> 122 106.89350128173828
#> 123 100.80726623535156
#> 124 95.07266998291016
#> 125 89.6752700805664
#> 126 84.59477233886719
#> 127 79.80913543701172
#> 128 75.30223083496094
#> 129 71.0572509765625
#> 130 67.05980682373047
#> 131 63.292694091796875
#> 132 59.7408447265625
#> 133 56.394203186035156
#> 134 53.243412017822266
#> 135 50.2683219909668
#> 136 47.46772003173828
#> 137 44.82497787475586
#> 138 42.33271408081055
#> 139 39.983646392822266
#> 140 37.76749801635742
#> 141 35.67666244506836
#> 142 33.70509338378906
#> 143 31.84467124938965
#> 144 30.089385986328125
#> 145 28.432872772216797
#> 146 26.869369506835938
#> 147 25.39266586303711
#> 148 23.999008178710938
#> 149 22.684724807739258
#> 150 21.4434757232666
#> 151 20.270301818847656
#> 152 19.164194107055664
#> 153 18.11824607849121
#> 154 17.131380081176758
#> 155 16.199291229248047
#> 156 15.318136215209961
#> 157 14.486746788024902
#> 158 13.700006484985352
#> 159 12.957758903503418
#> 160 12.256866455078125
#> 161 11.593376159667969
#> 162 10.96681022644043
#> 163 10.374650955200195
#> 164 9.815613746643066
#> 165 9.286172866821289
#> 166 8.78611946105957
#> 167 8.313515663146973
#> 168 7.866476058959961
#> 169 7.443814754486084
#> 170 7.044161319732666
#> 171 6.666952133178711
#> 172 6.309534072875977
#> 173 5.9717559814453125
#> 174 5.652008056640625
#> 175 5.3500075340271
#> 176 5.06421422958374
#> 177 4.793882846832275
#> 178 4.538228511810303
#> 179 4.296501159667969
#> 180 4.067446708679199
#> 181 3.8510499000549316
#> 182 3.6461739540100098
#> 183 3.4524216651916504
#> 184 3.2690694332122803
#> 185 3.0956828594207764
#> 186 2.9311866760253906
#> 187 2.7758116722106934
#> 188 2.628840684890747
#> 189 2.4897918701171875
#> 190 2.357895851135254
#> 191 2.2333240509033203
#> 192 2.1151578426361084
#> 193 2.003354072570801
#> 194 1.897698998451233
#> 195 1.7976123094558716
#> 196 1.7029246091842651
#> 197 1.6131364107131958
#> 198 1.5283033847808838
#> 199 1.4478871822357178
#> 200 1.371699333190918
#> 201 1.2994897365570068
#> 202 1.231500267982483
#> 203 1.1667163372039795
#> 204 1.1054186820983887
#> 205 1.0472912788391113
#> 206 0.9924129247665405
#> 207 0.9405249953269958
#> 208 0.8911417722702026
#> 209 0.8445178866386414
#> 210 0.8003085851669312
#> 211 0.758423388004303
#> 212 0.7187696099281311
#> 213 0.6812056303024292
#> 214 0.6455042362213135
#> 215 0.6117878556251526
#> 216 0.5798596739768982
#> 217 0.5495442152023315
#> 218 0.5209972858428955
#> 219 0.4938827455043793
#> 220 0.46809014678001404
#> 221 0.4436979293823242
#> 222 0.42065465450286865
#> 223 0.3987467288970947
#> 224 0.3779408633708954
#> 225 0.35838788747787476
#> 226 0.3397265076637268
#> 227 0.3221140503883362
#> 228 0.30536866188049316
#> 229 0.2895379662513733
#> 230 0.27451151609420776
#> 231 0.2602919638156891
#> 232 0.24681799113750458
#> 233 0.23405984044075012
#> 234 0.22187164425849915
#> 235 0.2103630006313324
#> 236 0.19945508241653442
#> 237 0.18917179107666016
#> 238 0.1794165074825287
#> 239 0.1700771450996399
#> 240 0.1613144725561142
#> 241 0.152926966547966
#> 242 0.14506009221076965
#> 243 0.1375567466020584
#> 244 0.13043273985385895
#> 245 0.12370903044939041
#> 246 0.11734490096569061
#> 247 0.11129261553287506
#> 248 0.10555146634578705
#> 249 0.10010744631290436
#> 250 0.09495128691196442
#> 251 0.09006303548812866
#> 252 0.08542166650295258
#> 253 0.08105342835187912
#> 254 0.07687549293041229
#> 255 0.07293462008237839
#> 256 0.06918356567621231
#> 257 0.06564081460237503
#> 258 0.062239713966846466
#> 259 0.059055205434560776
#> 260 0.05602336302399635
#> 261 0.05314234644174576
#> 262 0.05042209476232529
#> 263 0.04785769432783127
#> 264 0.045423999428749084
#> 265 0.04309770092368126
#> 266 0.04090772941708565
#> 267 0.03880797326564789
#> 268 0.03683297708630562
#> 269 0.03495331108570099
#> 270 0.03315659612417221
#> 271 0.031475357711315155
#> 272 0.029864072799682617
#> 273 0.028345633298158646
#> 274 0.026901375502347946
#> 275 0.025526201352477074
#> 276 0.024225471541285515
#> 277 0.023021651431918144
#> 278 0.021845556795597076
#> 279 0.020738258957862854
#> 280 0.01967737451195717
#> 281 0.01868186891078949
#> 282 0.017737826332449913
#> 283 0.016843702644109726
#> 284 0.015994098037481308
#> 285 0.015187159180641174
#> 286 0.014432456344366074
#> 287 0.013691866770386696
#> 288 0.013026118278503418
#> 289 0.012365361675620079
#> 290 0.011741021648049355
#> 291 0.011153185740113258
#> 292 0.010602883994579315
#> 293 0.010070282965898514
#> 294 0.009570850059390068
#> 295 0.009099053218960762
#> 296 0.008648849092423916
#> 297 0.008217266760766506
#> 298 0.007814647629857063
#> 299 0.007436459884047508
#> 300 0.007072300184518099
#> 301 0.006720009259879589
#> 302 0.006387100555002689
#> 303 0.00608158390969038
#> 304 0.00578821636736393
#> 305 0.005504274740815163
#> 306 0.005235536955296993
#> 307 0.004986326675862074
#> 308 0.004750200547277927
#> 309 0.004520890768617392
#> 310 0.004305804148316383
#> 311 0.004104197025299072
#> 312 0.003908107057213783
#> 313 0.0037259890232235193
#> 314 0.0035482768435031176
#> 315 0.0033842488192021847
#> 316 0.0032260832376778126
#> 317 0.0030806262511759996
#> 318 0.002938204212114215
#> 319 0.002802144968882203
#> 320 0.002674166578799486
#> 321 0.0025522327050566673
#> 322 0.0024338625371456146
#> 323 0.002325983252376318
#> 324 0.0022217126097530127
#> 325 0.002122103003785014
#> 326 0.0020273567643016577
#> 327 0.0019368595676496625
#> 328 0.0018519405275583267
#> 329 0.0017723542405292392
#> 330 0.0016958083724603057
#> 331 0.00162519421428442
#> 332 0.001555908122099936
#> 333 0.0014901482500135899
#> 334 0.0014247691724449396
#> 335 0.0013653874630108476
#> 336 0.001307258615270257
#> 337 0.0012546550715342164
#> 338 0.0012025412870571017
#> 339 0.0011545777088031173
#> 340 0.001107968739233911
#> 341 0.0010642317356541753
#> 342 0.0010200864635407925
#> 343 0.0009793058270588517
#> 344 0.0009410151396878064
#> 345 0.0009048299980349839
#> 346 0.0008693647105246782
#> 347 0.000835308397654444
#> 348 0.0008031500619836152
#> 349 0.0007735351100564003
#> 350 0.000744393328204751
#> 351 0.00071698147803545
#> 352 0.00069050322053954
#> 353 0.0006645384710282087
#> 354 0.0006397517863661051
#> 355 0.0006177832838147879
#> 356 0.0005949471960775554
#> 357 0.0005744362715631723
#> 358 0.0005537742399610579
#> 359 0.0005348395789042115
#> 360 0.0005162699380889535
#> 361 0.000499469693750143
#> 362 0.00048172459355555475
#> 363 0.0004661969724111259
#> 364 0.0004515194450505078
#> 365 0.0004358708392828703
#> 366 0.0004218583053443581
#> 367 0.00040883725159801543
#> 368 0.0003956131695304066
#> 369 0.0003827497421298176
#> 370 0.000370656605809927
#> 371 0.00036004791036248207
#> 372 0.0003480703162495047
#> 373 0.0003388348559383303
#> 374 0.000327684567309916
#> 375 0.0003175089950673282
#> 376 0.0003082627372350544
#> 377 0.0002986858307849616
#> 378 0.00028960598865523934
#> 379 0.0002815576735883951
#> 380 0.0002736181777436286
#> 381 0.0002657140721566975
#> 382 0.00025785667821764946
#> 383 0.0002509196347091347
#> 384 0.00024437913089059293
#> 385 0.00023740741016808897
#> 386 0.0002299495681654662
#> 387 0.0002234804560430348
#> 388 0.0002169939107261598
#> 389 0.00021134663256816566
#> 390 0.0002056143421214074
#> 391 0.00020046206191182137
#> 392 0.00019536828040145338
#> 393 0.00019056514429394156
#> 394 0.00018598540918901563
#> 395 0.00018159380124416202
#> 396 0.00017640764417592436
#> 397 0.00017208821373060346
#> 398 0.000168110869708471
#> 399 0.00016350964142475277
#> 400 0.00015964081103447825
#> 401 0.00015596051525790244
#> 402 0.00015269994037225842
#> 403 0.00014866374840494245
#> 404 0.00014477886725217104
#> 405 0.00014148686022963375
#> 406 0.00013842849875800312
#> 407 0.00013507613039109856
#> 408 0.0001322997995885089
#> 409 0.00012896949192509055
#> 410 0.00012618394976016134
#> 411 0.00012356613297015429
#> 412 0.00012060831068083644
#> 413 0.00011798611376434565
#> 414 0.0001152795521193184
#> 415 0.00011272911069681868
#> 416 0.00011033188638975844
#> 417 0.00010773474059533328
#> 418 0.00010584026313154027
#> 419 0.00010329326323699206
#> 420 0.00010140397353097796
#> 421 9.970468090614304e-05
#> 422 9.72362540778704e-05
#> 423 9.54945498961024e-05
#> 424 9.346337174065411e-05
#> 425 9.128850797424093e-05
#> 426 8.97917925613001e-05
#> 427 8.779048221185803e-05
#> 428 8.59305146150291e-05
#> 429 8.416303899139166e-05
#> 430 8.247063669841737e-05
#> 431 8.109148620860651e-05
#> 432 7.982019451446831e-05
#> 433 7.818565791239962e-05
#> 434 7.673520303796977e-05
#> 435 7.54009815864265e-05
#> 436 7.374506094492972e-05
#> 437 7.267539331223816e-05
#> 438 7.122510578483343e-05
#> 439 6.98604853823781e-05
#> 440 6.852982915006578e-05
#> 441 6.75098126521334e-05
#> 442 6.636354373767972e-05
#> 443 6.522039620904252e-05
#> 444 6.410140485968441e-05
#> 445 6.307245348580182e-05
#> 446 6.221079092938453e-05
#> 447 6.089429371058941e-05
#> 448 5.975936437607743e-05
#> 449 5.893126945011318e-05
#> 450 5.780566425528377e-05
#> 451 5.694766514352523e-05
#> 452 5.5986300139920786e-05
#> 453 5.502309068106115e-05
#> 454 5.420695379143581e-05
#> 455 5.31858422618825e-05
#> 456 5.239694655756466e-05
#> 457 5.1775907195406035e-05
#> 458 5.109262929181568e-05
#> 459 5.0413200369803235e-05
#> 460 4.956878183293156e-05
#> 461 4.8856254579732195e-05
#> 462 4.8221645556623116e-05
#> 463 4.7429402911802754e-05
#> 464 4.700458885054104e-05
#> 465 4.615000216290355e-05
#> 466 4.5314704038901255e-05
#> 467 4.466490645427257e-05
#> 468 4.406480729812756e-05
#> 469 4.344138142187148e-05
#> 470 4.302451270632446e-05
#> 471 4.255307430867106e-05
#> 472 4.1863419028231874e-05
#> 473 4.148659354541451e-05
#> 474 4.099802754353732e-05
#> 475 4.034798257634975e-05
#> 476 3.994005237473175e-05
#> 477 3.94669477827847e-05
#> 478 3.9117549022194e-05
#> 479 3.8569156458834186e-05
#> 480 3.8105612475192174e-05
#> 481 3.753463170141913e-05
#> 482 3.679965084302239e-05
#> 483 3.646357436082326e-05
#> 484 3.597680915845558e-05
#> 485 3.555299190338701e-05
#> 486 3.504360938677564e-05
#> 487 3.449235737207346e-05
#> 488 3.391931386431679e-05
#> 489 3.374389780219644e-05
#> 490 3.328040838823654e-05
#> 491 3.31329574692063e-05
#> 492 3.259751247242093e-05
#> 493 3.2441555958939716e-05
#> 494 3.1837684218771756e-05
#> 495 3.1491359550273046e-05
#> 496 3.120429755654186e-05
#> 497 3.089967503910884e-05
#> 498 3.059657319681719e-05
#> 499 3.0050463465158828e-05
```
```
toc = time.process_time()
print(toc - tic, "seconds")
```
```
#> 30.475184615000003 seconds
```
12\.5 A neural network in `rTorch`
----------------------------------
The example shows the long, manual way of computing the forward and backward passes, this time using `rTorch`. The objective is to get familiar with rTorch tensor operations.
The following example was converted from **PyTorch** to **rTorch** to show the differences and similarities between the two approaches. The original source can be found here: [Source](https://github.com/jcjohnson/pytorch-examples#pytorch-tensors).
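Before the full example, here is a tiny warm-up of mine (assuming `rTorch` is installed and configured) with the tensor methods used below:

```
library(rTorch)

a <- torch$randn(2L, 3L)   # rTorch expects integer dimensions
b <- torch$randn(3L, 2L)

a$mm(b)            # matrix multiplication
a$clamp(min = 0)   # element-wise ReLU
a$t()              # transpose
(a$pow(2))$sum()   # sum of squares, returned as a scalar tensor
```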
### 12\.5\.1 Load the libraries
```
library(rTorch)
library(ggplot2)
device <- torch$device('cpu')
# device <- torch$device('cuda')   # Uncomment this to run on GPU
invisible(torch$manual_seed(0))
```
* `N` is batch size;
* `D_in` is input dimension;
* `H` is hidden dimension;
* `D_out` is output dimension.
### 12\.5\.2 Dataset
We will create a random dataset for a **two-layer neural network**.
```
N <- 64L; D_in <- 1000L; H <- 100L; D_out <- 10L
# Create random Tensors to hold inputs and outputs
x <- torch$randn(N, D_in, device=device)
y <- torch$randn(N, D_out, device=device)
# dimensions of both tensors
dim(x)
dim(y)
```
```
#> [1] 64 1000
#> [1] 64 10
```
### 12\.5\.3 Initialize the weights
```
# Randomly initialize weights
w1 <- torch$randn(D_in, H, device=device) # layer 1
w2 <- torch$randn(H, D_out, device=device) # layer 2
dim(w1)
dim(w2)
```
```
#> [1] 1000 100
#> [1] 100 10
```
### 12\.5\.4 Iterate through the dataset
Now, we are going to train our neural network on the `training` dataset. The question is: *“how many times do we have to expose the training data to the algorithm?”* By looking at a graph of the loss over the iterations we can get an idea of when we should stop.
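As a sketch of what such a graph could look like (the numbers here are synthetic, not the real losses; in the real loop you would append `loss$item()` to a vector at every iteration):

```
library(ggplot2)

# synthetic stand-in for the recorded losses, roughly exponential decay
losses  <- 2.8e7 * exp(-0.05 * seq_len(500))
loss_df <- data.frame(iter = seq_along(losses), loss = losses)

ggplot(loss_df, aes(x = iter, y = loss)) +
  geom_line() +
  scale_y_log10()   # the loss spans many orders of magnitude
```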
#### 12\.5\.4\.1 Iterate 50 times
For the sake of time, let’s run only 50 iterations of the training loop.
```
learning_rate <- 1e-6

for (t in 1:50) {
  # Forward pass: compute predicted y, y_pred
  h <- x$mm(w1)               # matrix multiplication, x * w1
  h_relu <- h$clamp(min = 0)  # ReLU: keep only elements greater than zero
  y_pred <- h_relu$mm(w2)     # matrix multiplication, h_relu * w2

  # Compute the loss; loss is a scalar stored in a PyTorch Tensor of shape ();
  # we can get its value as a number with loss$item().
  loss <- (torch$sub(y_pred, y))$pow(2)$sum()  # sum((y_pred - y)^2)
  # cat(t, "\t"); cat(loss$item(), "\n")

  # Backprop to compute gradients of w1 and w2 with respect to loss
  grad_y_pred <- torch$mul(torch$scalar_tensor(2.0), torch$sub(y_pred, y))
  grad_w2 <- h_relu$t()$mm(grad_y_pred)  # gradient of w2
  grad_h_relu <- grad_y_pred$mm(w2$t())
  grad_h <- grad_h_relu$clone()
  mask <- h$lt(0)                  # where the pre-activation was negative
  grad_h$masked_fill_(mask, 0.0)   # zero those gradients in place, like grad_h[h < 0] = 0
  grad_w1 <- x$t()$mm(grad_h)      # gradient of w1

  # Update weights using gradient descent
  w1 <- torch$sub(w1, torch$mul(learning_rate, grad_w1))
  w2 <- torch$sub(w2, torch$mul(learning_rate, grad_w2))
}

# y vs predicted y
df_50 <- data.frame(y = y$flatten()$numpy(),
                    y_pred = y_pred$flatten()$numpy(),
                    iter = 50)

ggplot(df_50, aes(x = y, y = y_pred)) +
  geom_point()
```
We see a lot of dispersion between the predicted values, \\(y\_{pred}\\), and the real values, \\(y\\). We are still far from our goal.
Let’s take a look at the dataframe:
```
library('DT')
datatable(df_50, options = list(pageLength = 10))
```
#### 12\.5\.4\.2 A training function
Now, we convert the script above into a function so we can reuse it several times, since we want to study the effect of the number of iterations on the performance of the algorithm.
This time we create a function `train` that takes as input the number of iterations we want to run:
```
train <- function(iterations) {
  # Randomly initialize weights
  w1 <- torch$randn(D_in, H, device = device)   # layer 1
  w2 <- torch$randn(H, D_out, device = device)  # layer 2

  learning_rate <- 1e-6

  for (t in 1:iterations) {
    # Forward pass: compute predicted y
    h <- x$mm(w1)
    h_relu <- h$clamp(min = 0)
    y_pred <- h_relu$mm(w2)

    # Compute the loss; a scalar stored in a PyTorch Tensor of shape ();
    # we can get its value as a number with loss$item().
    loss <- (torch$sub(y_pred, y))$pow(2)$sum()
    # cat(t, "\t"); cat(loss$item(), "\n")

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred <- torch$mul(torch$scalar_tensor(2.0), torch$sub(y_pred, y))
    grad_w2 <- h_relu$t()$mm(grad_y_pred)
    grad_h_relu <- grad_y_pred$mm(w2$t())
    grad_h <- grad_h_relu$clone()
    mask <- h$lt(0)                  # where the pre-activation was negative
    grad_h$masked_fill_(mask, 0.0)   # zero those gradients in place
    grad_w1 <- x$t()$mm(grad_h)

    # Update weights using gradient descent
    w1 <- torch$sub(w1, torch$mul(learning_rate, grad_w1))
    w2 <- torch$sub(w2, torch$mul(learning_rate, grad_w2))
  }

  data.frame(y = y$flatten()$numpy(),
             y_pred = y_pred$flatten()$numpy(),
             iter = iterations)
}
```
#### 12\.5\.4\.3 Run it at 100 iterations
```
# retrieve the results and store them in a dataframe
df_100 <- train(iterations = 100)
datatable(df_100, options = list(pageLength = 10))
# plot
ggplot(df_100, aes(x = y_pred, y = y)) +
  geom_point()
```
#### 12\.5\.4\.4 250 iterations
There are still differences between the values and the predictions. Let’s try more iterations, say **250**:
```
df_250 <- train(iterations = 250)
datatable(df_250, options = list(pageLength = 25))
# plot
ggplot(df_250, aes(x = y_pred, y = y)) +
  geom_point()
```
We see the points forming a line between the values and the predictions, which means we are getting closer to finding the right parameters, in this particular case the weights.
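A quick numeric check of that impression (my addition, not in the original) is the correlation between `y` and `y_pred`; the closer it is to 1, the tighter that line:

```
# correlation between observed and predicted values after 250 iterations
cor(df_250$y, df_250$y_pred)
```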
#### 12\.5\.4\.5 500 iterations
Let’s try one more time with 500 iterations:
```
df_500 <- train(iterations = 500)
datatable(df_500, options = list(pageLength = 25))
ggplot(df_500, aes(x = y_pred, y = y)) +
  geom_point()
```
12\.6 Full Neural Network in rTorch
-----------------------------------
```
library(rTorch)
library(ggplot2)
library(tictoc)
library(DT)   # for datatable()

tic()
device <- torch$device('cpu')
# device <- torch$device('cuda')   # Uncomment this to run on GPU
invisible(torch$manual_seed(0))

# Properties of tensors and neural network
N <- 64L; D_in <- 1000L; H <- 100L; D_out <- 10L

# Create random Tensors to hold inputs and outputs
x <- torch$randn(N, D_in, device = device)
y <- torch$randn(N, D_out, device = device)

# Randomly initialize the weights
w1 <- torch$randn(D_in, H, device = device)   # layer 1
w2 <- torch$randn(H, D_out, device = device)  # layer 2

learning_rate <- 1e-6

for (t in 1:500) {
  # Forward pass: compute predicted y, y_pred
  h <- x$mm(w1)               # matrix multiplication, x * w1
  h_relu <- h$clamp(min = 0)  # ReLU: keep only elements greater than zero
  y_pred <- h_relu$mm(w2)     # matrix multiplication, h_relu * w2

  # Compute the loss; a scalar stored in a PyTorch Tensor of shape ();
  # we can get its value as a number with loss$item().
  loss <- (torch$sub(y_pred, y))$pow(2)$sum()  # sum((y_pred - y)^2)
  # cat(t, "\t"); cat(loss$item(), "\n")

  # Backprop to compute gradients of w1 and w2 with respect to loss
  grad_y_pred <- torch$mul(torch$scalar_tensor(2.0), torch$sub(y_pred, y))
  grad_w2 <- h_relu$t()$mm(grad_y_pred)  # gradient of w2
  grad_h_relu <- grad_y_pred$mm(w2$t())
  grad_h <- grad_h_relu$clone()
  mask <- h$lt(0)                  # where the pre-activation was negative
  grad_h$masked_fill_(mask, 0.0)   # zero those gradients in place
  grad_w1 <- x$t()$mm(grad_h)      # gradient of w1

  # Update weights using gradient descent
  w1 <- torch$sub(w1, torch$mul(learning_rate, grad_w1))
  w2 <- torch$sub(w2, torch$mul(learning_rate, grad_w2))
}

# y vs predicted y
df <- data.frame(y = y$flatten()$numpy(),
                 y_pred = y_pred$flatten()$numpy(),
                 iter = 500)
datatable(df, options = list(pageLength = 25))

ggplot(df, aes(x = y_pred, y = y)) +
  geom_point()

toc()
```
```
#> 22.945 sec elapsed
```
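For convenience, here are the four timings reported in this chapter side by side (numbers copied from the outputs above; the Python figures are CPU `process_time`, the R figures are `tictoc` elapsed time, so this is only a rough comparison):

```
timings <- data.frame(
  implementation = c("r-base", "numpy", "rTorch", "PyTorch"),
  seconds        = c(2.83, 6.93, 22.95, 30.48)
)
timings
```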
12\.7 Exercise
--------------
1. Rewrite the code in `rTorch`, this time collecting and plotting the loss at each iteration.
2. For the neural network written in `PyTorch`, instead of printing one long table of losses, print the table by pages that can be navigated with vertical and horizontal bars. Tip: read the Python data structure from R and plot it with `ggplot2`.
#> 141 35.67666244506836
#> 142 33.70509338378906
#> 143 31.84467124938965
#> 144 30.089385986328125
#> 145 28.432872772216797
#> 146 26.869369506835938
#> 147 25.39266586303711
#> 148 23.999008178710938
#> 149 22.684724807739258
#> 150 21.4434757232666
#> 151 20.270301818847656
#> 152 19.164194107055664
#> 153 18.11824607849121
#> 154 17.131380081176758
#> 155 16.199291229248047
#> 156 15.318136215209961
#> 157 14.486746788024902
#> 158 13.700006484985352
#> 159 12.957758903503418
#> 160 12.256866455078125
#> 161 11.593376159667969
#> 162 10.96681022644043
#> 163 10.374650955200195
#> 164 9.815613746643066
#> 165 9.286172866821289
#> 166 8.78611946105957
#> 167 8.313515663146973
#> 168 7.866476058959961
#> 169 7.443814754486084
#> 170 7.044161319732666
#> 171 6.666952133178711
#> 172 6.309534072875977
#> 173 5.9717559814453125
#> 174 5.652008056640625
#> 175 5.3500075340271
#> 176 5.06421422958374
#> 177 4.793882846832275
#> 178 4.538228511810303
#> 179 4.296501159667969
#> 180 4.067446708679199
#> 181 3.8510499000549316
#> 182 3.6461739540100098
#> 183 3.4524216651916504
#> 184 3.2690694332122803
#> 185 3.0956828594207764
#> 186 2.9311866760253906
#> 187 2.7758116722106934
#> 188 2.628840684890747
#> 189 2.4897918701171875
#> 190 2.357895851135254
#> 191 2.2333240509033203
#> 192 2.1151578426361084
#> 193 2.003354072570801
#> 194 1.897698998451233
#> 195 1.7976123094558716
#> 196 1.7029246091842651
#> 197 1.6131364107131958
#> 198 1.5283033847808838
#> 199 1.4478871822357178
#> 200 1.371699333190918
#> 201 1.2994897365570068
#> 202 1.231500267982483
#> 203 1.1667163372039795
#> 204 1.1054186820983887
#> 205 1.0472912788391113
#> 206 0.9924129247665405
#> 207 0.9405249953269958
#> 208 0.8911417722702026
#> 209 0.8445178866386414
#> 210 0.8003085851669312
#> 211 0.758423388004303
#> 212 0.7187696099281311
#> 213 0.6812056303024292
#> 214 0.6455042362213135
#> 215 0.6117878556251526
#> 216 0.5798596739768982
#> 217 0.5495442152023315
#> 218 0.5209972858428955
#> 219 0.4938827455043793
#> 220 0.46809014678001404
#> 221 0.4436979293823242
#> 222 0.42065465450286865
#> 223 0.3987467288970947
#> 224 0.3779408633708954
#> 225 0.35838788747787476
#> 226 0.3397265076637268
#> 227 0.3221140503883362
#> 228 0.30536866188049316
#> 229 0.2895379662513733
#> 230 0.27451151609420776
#> 231 0.2602919638156891
#> 232 0.24681799113750458
#> 233 0.23405984044075012
#> 234 0.22187164425849915
#> 235 0.2103630006313324
#> 236 0.19945508241653442
#> 237 0.18917179107666016
#> 238 0.1794165074825287
#> 239 0.1700771450996399
#> 240 0.1613144725561142
#> 241 0.152926966547966
#> 242 0.14506009221076965
#> 243 0.1375567466020584
#> 244 0.13043273985385895
#> 245 0.12370903044939041
#> 246 0.11734490096569061
#> 247 0.11129261553287506
#> 248 0.10555146634578705
#> 249 0.10010744631290436
#> 250 0.09495128691196442
#> 251 0.09006303548812866
#> 252 0.08542166650295258
#> 253 0.08105342835187912
#> 254 0.07687549293041229
#> 255 0.07293462008237839
#> 256 0.06918356567621231
#> 257 0.06564081460237503
#> 258 0.062239713966846466
#> 259 0.059055205434560776
#> 260 0.05602336302399635
#> 261 0.05314234644174576
#> 262 0.05042209476232529
#> 263 0.04785769432783127
#> 264 0.045423999428749084
#> 265 0.04309770092368126
#> 266 0.04090772941708565
#> 267 0.03880797326564789
#> 268 0.03683297708630562
#> 269 0.03495331108570099
#> 270 0.03315659612417221
#> 271 0.031475357711315155
#> 272 0.029864072799682617
#> 273 0.028345633298158646
#> 274 0.026901375502347946
#> 275 0.025526201352477074
#> 276 0.024225471541285515
#> 277 0.023021651431918144
#> 278 0.021845556795597076
#> 279 0.020738258957862854
#> 280 0.01967737451195717
#> 281 0.01868186891078949
#> 282 0.017737826332449913
#> 283 0.016843702644109726
#> 284 0.015994098037481308
#> 285 0.015187159180641174
#> 286 0.014432456344366074
#> 287 0.013691866770386696
#> 288 0.013026118278503418
#> 289 0.012365361675620079
#> 290 0.011741021648049355
#> 291 0.011153185740113258
#> 292 0.010602883994579315
#> 293 0.010070282965898514
#> 294 0.009570850059390068
#> 295 0.009099053218960762
#> 296 0.008648849092423916
#> 297 0.008217266760766506
#> 298 0.007814647629857063
#> 299 0.007436459884047508
#> 300 0.007072300184518099
#> 301 0.006720009259879589
#> 302 0.006387100555002689
#> 303 0.00608158390969038
#> 304 0.00578821636736393
#> 305 0.005504274740815163
#> 306 0.005235536955296993
#> 307 0.004986326675862074
#> 308 0.004750200547277927
#> 309 0.004520890768617392
#> 310 0.004305804148316383
#> 311 0.004104197025299072
#> 312 0.003908107057213783
#> 313 0.0037259890232235193
#> 314 0.0035482768435031176
#> 315 0.0033842488192021847
#> 316 0.0032260832376778126
#> 317 0.0030806262511759996
#> 318 0.002938204212114215
#> 319 0.002802144968882203
#> 320 0.002674166578799486
#> 321 0.0025522327050566673
#> 322 0.0024338625371456146
#> 323 0.002325983252376318
#> 324 0.0022217126097530127
#> 325 0.002122103003785014
#> 326 0.0020273567643016577
#> 327 0.0019368595676496625
#> 328 0.0018519405275583267
#> 329 0.0017723542405292392
#> 330 0.0016958083724603057
#> 331 0.00162519421428442
#> 332 0.001555908122099936
#> 333 0.0014901482500135899
#> 334 0.0014247691724449396
#> 335 0.0013653874630108476
#> 336 0.001307258615270257
#> 337 0.0012546550715342164
#> 338 0.0012025412870571017
#> 339 0.0011545777088031173
#> 340 0.001107968739233911
#> 341 0.0010642317356541753
#> 342 0.0010200864635407925
#> 343 0.0009793058270588517
#> 344 0.0009410151396878064
#> 345 0.0009048299980349839
#> 346 0.0008693647105246782
#> 347 0.000835308397654444
#> 348 0.0008031500619836152
#> 349 0.0007735351100564003
#> 350 0.000744393328204751
#> 351 0.00071698147803545
#> 352 0.00069050322053954
#> 353 0.0006645384710282087
#> 354 0.0006397517863661051
#> 355 0.0006177832838147879
#> 356 0.0005949471960775554
#> 357 0.0005744362715631723
#> 358 0.0005537742399610579
#> 359 0.0005348395789042115
#> 360 0.0005162699380889535
#> 361 0.000499469693750143
#> 362 0.00048172459355555475
#> 363 0.0004661969724111259
#> 364 0.0004515194450505078
#> 365 0.0004358708392828703
#> 366 0.0004218583053443581
#> 367 0.00040883725159801543
#> 368 0.0003956131695304066
#> 369 0.0003827497421298176
#> 370 0.000370656605809927
#> 371 0.00036004791036248207
#> 372 0.0003480703162495047
#> 373 0.0003388348559383303
#> 374 0.000327684567309916
#> 375 0.0003175089950673282
#> 376 0.0003082627372350544
#> 377 0.0002986858307849616
#> 378 0.00028960598865523934
#> 379 0.0002815576735883951
#> 380 0.0002736181777436286
#> 381 0.0002657140721566975
#> 382 0.00025785667821764946
#> 383 0.0002509196347091347
#> 384 0.00024437913089059293
#> 385 0.00023740741016808897
#> 386 0.0002299495681654662
#> 387 0.0002234804560430348
#> 388 0.0002169939107261598
#> 389 0.00021134663256816566
#> 390 0.0002056143421214074
#> 391 0.00020046206191182137
#> 392 0.00019536828040145338
#> 393 0.00019056514429394156
#> 394 0.00018598540918901563
#> 395 0.00018159380124416202
#> 396 0.00017640764417592436
#> 397 0.00017208821373060346
#> 398 0.000168110869708471
#> 399 0.00016350964142475277
#> 400 0.00015964081103447825
#> 401 0.00015596051525790244
#> 402 0.00015269994037225842
#> 403 0.00014866374840494245
#> 404 0.00014477886725217104
#> 405 0.00014148686022963375
#> 406 0.00013842849875800312
#> 407 0.00013507613039109856
#> 408 0.0001322997995885089
#> 409 0.00012896949192509055
#> 410 0.00012618394976016134
#> 411 0.00012356613297015429
#> 412 0.00012060831068083644
#> 413 0.00011798611376434565
#> 414 0.0001152795521193184
#> 415 0.00011272911069681868
#> 416 0.00011033188638975844
#> 417 0.00010773474059533328
#> 418 0.00010584026313154027
#> 419 0.00010329326323699206
#> 420 0.00010140397353097796
#> 421 9.970468090614304e-05
#> 422 9.72362540778704e-05
#> 423 9.54945498961024e-05
#> 424 9.346337174065411e-05
#> 425 9.128850797424093e-05
#> 426 8.97917925613001e-05
#> 427 8.779048221185803e-05
#> 428 8.59305146150291e-05
#> 429 8.416303899139166e-05
#> 430 8.247063669841737e-05
#> 431 8.109148620860651e-05
#> 432 7.982019451446831e-05
#> 433 7.818565791239962e-05
#> 434 7.673520303796977e-05
#> 435 7.54009815864265e-05
#> 436 7.374506094492972e-05
#> 437 7.267539331223816e-05
#> 438 7.122510578483343e-05
#> 439 6.98604853823781e-05
#> 440 6.852982915006578e-05
#> 441 6.75098126521334e-05
#> 442 6.636354373767972e-05
#> 443 6.522039620904252e-05
#> 444 6.410140485968441e-05
#> 445 6.307245348580182e-05
#> 446 6.221079092938453e-05
#> 447 6.089429371058941e-05
#> 448 5.975936437607743e-05
#> 449 5.893126945011318e-05
#> 450 5.780566425528377e-05
#> 451 5.694766514352523e-05
#> 452 5.5986300139920786e-05
#> 453 5.502309068106115e-05
#> 454 5.420695379143581e-05
#> 455 5.31858422618825e-05
#> 456 5.239694655756466e-05
#> 457 5.1775907195406035e-05
#> 458 5.109262929181568e-05
#> 459 5.0413200369803235e-05
#> 460 4.956878183293156e-05
#> 461 4.8856254579732195e-05
#> 462 4.8221645556623116e-05
#> 463 4.7429402911802754e-05
#> 464 4.700458885054104e-05
#> 465 4.615000216290355e-05
#> 466 4.5314704038901255e-05
#> 467 4.466490645427257e-05
#> 468 4.406480729812756e-05
#> 469 4.344138142187148e-05
#> 470 4.302451270632446e-05
#> 471 4.255307430867106e-05
#> 472 4.1863419028231874e-05
#> 473 4.148659354541451e-05
#> 474 4.099802754353732e-05
#> 475 4.034798257634975e-05
#> 476 3.994005237473175e-05
#> 477 3.94669477827847e-05
#> 478 3.9117549022194e-05
#> 479 3.8569156458834186e-05
#> 480 3.8105612475192174e-05
#> 481 3.753463170141913e-05
#> 482 3.679965084302239e-05
#> 483 3.646357436082326e-05
#> 484 3.597680915845558e-05
#> 485 3.555299190338701e-05
#> 486 3.504360938677564e-05
#> 487 3.449235737207346e-05
#> 488 3.391931386431679e-05
#> 489 3.374389780219644e-05
#> 490 3.328040838823654e-05
#> 491 3.31329574692063e-05
#> 492 3.259751247242093e-05
#> 493 3.2441555958939716e-05
#> 494 3.1837684218771756e-05
#> 495 3.1491359550273046e-05
#> 496 3.120429755654186e-05
#> 497 3.089967503910884e-05
#> 498 3.059657319681719e-05
#> 499 3.0050463465158828e-05
```
```
toc = time.process_time()
print(toc - tic, "seconds")
```
```
#> 30.475184615000003 seconds
```
12\.5 A neural network in `rTorch`
----------------------------------
This example shows the long, manual way of calculating the forward and backward passes, this time using `rTorch`. The objective is to get familiar with the `rTorch` tensor operations.
The following example was converted from **PyTorch** to **rTorch** to show the differences and similarities of both approaches. The original source can be found here: [Source](https://github.com/jcjohnson/pytorch-examples#pytorch-tensors).
### 12\.5\.1 Load the libraries
```
library(rTorch)
library(ggplot2)
device = torch$device('cpu')
# device = torch.device('cuda') # Uncomment this to run on GPU
invisible(torch$manual_seed(0))
```
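Before building the network, here is a quick warm\-up with the tensor methods we will call through the `$` operator; a minimal sketch on arbitrary toy tensors:
```
a <- torch$randn(2L, 3L)
b <- torch$randn(3L, 2L)
a$mm(b)            # matrix multiplication: a 2x2 tensor
a$clamp(min = 0)   # negative entries clipped to zero (ReLU)
a$sum()            # sum over all elements (a 0-d tensor)
a$t()              # transpose: a 3x2 tensor
a$clone()          # an independent copy of the tensor
```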
* `N` is batch size;
* `D_in` is input dimension;
* `H` is hidden dimension;
* `D_out` is output dimension.
### 12\.5\.2 Dataset
We will create a random dataset for a **two layer neural network**.
```
N <- 64L; D_in <- 1000L; H <- 100L; D_out <- 10L
# Create random Tensors to hold inputs and outputs
x <- torch$randn(N, D_in, device=device)
y <- torch$randn(N, D_out, device=device)
# dimensions of both tensors
dim(x)
dim(y)
```
```
#> [1] 64 1000
#> [1] 64 10
```
### 12\.5\.3 Initialize the weights
```
# Randomly initialize weights
w1 <- torch$randn(D_in, H, device=device) # layer 1
w2 <- torch$randn(H, D_out, device=device) # layer 2
dim(w1)
dim(w2)
```
```
#> [1] 1000 100
#> [1] 100 10
```
### 12\.5\.4 Iterate through the dataset
Now, we are going to train our neural network on the `training` dataset. The question is: *“how many times do we have to expose the training data to the algorithm?”* By looking at the graph of the loss we may get an idea of when we should stop.
#### 12\.5\.4\.1 Iterate 50 times
For the sake of time, let's run only 50 iterations of the training loop.
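The backward pass inside the loop is the chain rule written out by hand. With \\(h \= x w\_1\\), \\(h\_{relu} \= \\max(0, h)\\), \\(y\_{pred} \= h\_{relu} w\_2\\) and the loss \\(L \= \\sum (y\_{pred} \- y)^2\\), the gradients computed line by line are:
\\[
\\frac{\\partial L}{\\partial y\_{pred}} \= 2\\,(y\_{pred} \- y), \\qquad
\\frac{\\partial L}{\\partial w\_2} \= h\_{relu}^{T}\\,\\frac{\\partial L}{\\partial y\_{pred}}, \\qquad
\\frac{\\partial L}{\\partial w\_1} \= x^{T}\\!\\left(\\frac{\\partial L}{\\partial y\_{pred}}\\, w\_2^{T} \\odot \\mathbf{1}\_{[h>0]}\\right)
\\]
where \\(\\odot\\) is the element\-wise product and \\(\\mathbf{1}\_{[h>0]}\\) zeroes the entries where the ReLU was inactive.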
```
learning_rate = 1e-6
# loop
for (t in 1:50) {
  # Forward pass: compute predicted y, y_pred
  h <- x$mm(w1)               # matrix multiplication, x*w1
  h_relu <- h$clamp(min=0)    # make elements greater than zero
  y_pred <- h_relu$mm(w2)     # matrix multiplication, h_relu*w2

  # Compute and print loss; loss is a scalar, and is stored in a PyTorch Tensor
  # of shape (); we can get its value as a Python number with loss.item().
  loss <- (torch$sub(y_pred, y))$pow(2)$sum()   # sum((y_pred-y)^2)
  # cat(t, "\t")
  # cat(loss$item(), "\n")

  # Backprop to compute gradients of w1 and w2 with respect to loss
  grad_y_pred <- torch$mul(torch$scalar_tensor(2.0), torch$sub(y_pred, y))
  grad_w2 <- h_relu$t()$mm(grad_y_pred)   # compute gradient of w2
  grad_h_relu <- grad_y_pred$mm(w2$t())
  grad_h <- grad_h_relu$clone()
  mask <- h$lt(0)                  # mask where the pre-activation h is negative
  grad_h$masked_fill_(mask, 0.0)   # zero those gradients in place (ReLU backward)
  grad_w1 <- x$t()$mm(grad_h)      # compute gradient of w1

  # Update weights using gradient descent
  w1 <- torch$sub(w1, torch$mul(learning_rate, grad_w1))
  w2 <- torch$sub(w2, torch$mul(learning_rate, grad_w2))
}

# y vs predicted y
df_50 <- data.frame(y = y$flatten()$numpy(),
                    y_pred = y_pred$flatten()$numpy(), iter = 50)

ggplot(df_50, aes(x = y, y = y_pred)) +
  geom_point()
```
We see a lot of dispersion between the predicted values, \\(y\_{pred}\\), and the real values, \\(y\\). We are still far from our goal.
Let’s take a look at the dataframe:
```
library('DT')
datatable(df_50, options = list(pageLength = 10))
```
#### 12\.5\.4\.2 A training function
Now, we convert the script above to a function so we can reuse it several times. We want to study the effect of the number of iterations on the performance of the algorithm.
We create a function `train` that takes the number of iterations we want to run:
```
train <- function(iterations) {
  # Randomly initialize weights
  w1 <- torch$randn(D_in, H, device=device)    # layer 1
  w2 <- torch$randn(H, D_out, device=device)   # layer 2

  learning_rate = 1e-6
  # loop
  for (t in 1:iterations) {
    # Forward pass: compute predicted y
    h <- x$mm(w1)
    h_relu <- h$clamp(min=0)
    y_pred <- h_relu$mm(w2)

    # Compute and print loss; loss is a scalar stored in a PyTorch Tensor
    # of shape (); we can get its value as a Python number with loss.item().
    loss <- (torch$sub(y_pred, y))$pow(2)$sum()
    # cat(t, "\t"); cat(loss$item(), "\n")

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred <- torch$mul(torch$scalar_tensor(2.0), torch$sub(y_pred, y))
    grad_w2 <- h_relu$t()$mm(grad_y_pred)
    grad_h_relu <- grad_y_pred$mm(w2$t())
    grad_h <- grad_h_relu$clone()
    mask <- h$lt(0)                  # mask where the pre-activation h is negative
    grad_h$masked_fill_(mask, 0.0)   # zero those gradients in place (ReLU backward)
    grad_w1 <- x$t()$mm(grad_h)

    # Update weights using gradient descent
    w1 <- torch$sub(w1, torch$mul(learning_rate, grad_w1))
    w2 <- torch$sub(w2, torch$mul(learning_rate, grad_w2))
  }
  data.frame(y = y$flatten()$numpy(),
             y_pred = y_pred$flatten()$numpy(), iter = iterations)
}
```
#### 12\.5\.4\.3 Run it at 100 iterations
```
# retrieve the results and store them in a dataframe
df_100 <- train(iterations = 100)
datatable(df_100, options = list(pageLength = 10))
# plot
ggplot(df_100, aes(x = y_pred, y = y)) +
geom_point()
```
#### 12\.5\.4\.4 250 iterations
There are still differences between the values and the predictions. Let's try with more iterations, like **250**:
```
df_250 <- train(iterations = 250)
datatable(df_250, options = list(pageLength = 25))
# plot
ggplot(df_250, aes(x = y_pred, y = y)) +
geom_point()
```
We see the points forming a line between the values and the predictions, which means we are getting closer to finding the right parameters, in this particular case, the weights.
#### 12\.5\.4\.5 500 iterations
Let’s try one more time with 500 iterations:
```
df_500 <- train(iterations = 500)
datatable(df_500, options = list(pageLength = 25))
ggplot(df_500, aes(x = y_pred, y = y)) +
geom_point()
```
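To compare the four runs side by side, we can stack the dataframes we collected and facet the scatter plot by iteration count; a minimal sketch (assuming `df_50`, `df_100`, `df_250` and `df_500` are still in the session):
```
df_all <- rbind(df_50, df_100, df_250, df_500)
ggplot(df_all, aes(x = y_pred, y = y)) +
  geom_point(alpha = 0.5) +
  facet_wrap(~iter)
```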
12\.6 Full Neural Network in rTorch
-----------------------------------
```
library(rTorch)
library(ggplot2)
library(tictoc)
library(DT)

tic()
device = torch$device('cpu')
# device = torch.device('cuda')  # Uncomment this to run on GPU
invisible(torch$manual_seed(0))

# Properties of tensors and neural network
N <- 64L; D_in <- 1000L; H <- 100L; D_out <- 10L

# Create random Tensors to hold inputs and outputs
x <- torch$randn(N, D_in, device=device)
y <- torch$randn(N, D_out, device=device)

# initialize the weights
w1 <- torch$randn(D_in, H, device=device)    # layer 1
w2 <- torch$randn(H, D_out, device=device)   # layer 2

learning_rate = 1e-6
# loop
for (t in 1:500) {
  # Forward pass: compute predicted y, y_pred
  h <- x$mm(w1)               # matrix multiplication, x*w1
  h_relu <- h$clamp(min=0)    # make elements greater than zero
  y_pred <- h_relu$mm(w2)     # matrix multiplication, h_relu*w2

  # Compute and print loss; loss is a scalar, and is stored in a PyTorch Tensor
  # of shape (); we can get its value as a Python number with loss.item().
  loss <- (torch$sub(y_pred, y))$pow(2)$sum()   # sum((y_pred-y)^2)
  # cat(t, "\t")
  # cat(loss$item(), "\n")

  # Backprop to compute gradients of w1 and w2 with respect to loss
  grad_y_pred <- torch$mul(torch$scalar_tensor(2.0), torch$sub(y_pred, y))
  grad_w2 <- h_relu$t()$mm(grad_y_pred)   # compute gradient of w2
  grad_h_relu <- grad_y_pred$mm(w2$t())
  grad_h <- grad_h_relu$clone()
  mask <- h$lt(0)                  # mask where the pre-activation h is negative
  grad_h$masked_fill_(mask, 0.0)   # zero those gradients in place (ReLU backward)
  grad_w1 <- x$t()$mm(grad_h)      # compute gradient of w1

  # Update weights using gradient descent
  w1 <- torch$sub(w1, torch$mul(learning_rate, grad_w1))
  w2 <- torch$sub(w2, torch$mul(learning_rate, grad_w2))
}

# y vs predicted y
df <- data.frame(y = y$flatten()$numpy(),
                 y_pred = y_pred$flatten()$numpy(), iter = 500)
datatable(df, options = list(pageLength = 25))

ggplot(df, aes(x = y_pred, y = y)) +
  geom_point()
toc()
```
```
#> 22.945 sec elapsed
```
12\.7 Exercise
--------------
1. Rewrite the code in `rTorch`, this time collecting and plotting the loss at each iteration.
2. For the neural network written in `PyTorch`, instead of printing a long table, print the table by pages that we can navigate with vertical and horizontal scroll bars. Tip: read the Python data structure from R and plot it with `ggplot2`.
Chapter 13 A neural network step\-by\-step
==========================================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
13\.1 Introduction
------------------
Source: [https://github.com/jcjohnson/pytorch\-examples\#pytorch\-nn](https://github.com/jcjohnson/pytorch-examples#pytorch-nn)
In this example we use the torch `nn` package to implement our two\-layer network:
13\.2 Select device
-------------------
```
library(rTorch)
device = torch$device('cpu')
# device = torch.device('cuda') # Uncomment this to run on GPU
```
* `N` is batch size;
* `D_in` is input dimension;
* `H` is hidden dimension;
* `D_out` is output dimension.
13\.3 Create the dataset
------------------------
```
invisible(torch$manual_seed(0)) # do not show the generator output
N <- 64L; D_in <- 1000L; H <- 100L; D_out <- 10L
# Create random Tensors to hold inputs and outputs
x = torch$randn(N, D_in, device=device)
y = torch$randn(N, D_out, device=device)
```
13\.4 Define the model
----------------------
We use the `nn` package to define our model as a sequence of layers. `nn.Sequential` applies these layers in sequence to produce an output. Each *Linear Module* computes the output by using a linear function, and also holds internal tensors for its weights and biases. After constructing the model, we use the `.to()` method to move it to the desired device, which could be `CPU` or `GPU`. Remember that we selected `CPU` with `torch$device('cpu')`.
```
model <- torch$nn$Sequential(
torch$nn$Linear(D_in, H), # first layer
torch$nn$ReLU(),
torch$nn$Linear(H, D_out))$to(device) # output layer
print(model)
```
```
#> Sequential(
#> (0): Linear(in_features=1000, out_features=100, bias=True)
#> (1): ReLU()
#> (2): Linear(in_features=100, out_features=10, bias=True)
#> )
```
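Each `Linear` module holds its weight and bias as learnable parameters. A quick way to confirm their shapes, a minimal sketch using the same `iterate()` helper that appears later in the training loop:
```
for (param in iterate(model$parameters())) {
  print(param$size())
}
# expected: [100, 1000] and [100] for the first Linear layer,
# then [10, 100] and [10] for the output layer
```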
13\.5 The Loss function
-----------------------
The `nn` package also contains definitions of several loss functions; in this case we will use **Mean Squared Error** (\\(MSE\\)) as our loss function. Setting `reduction='sum'` means that we are computing the *sum* of squared errors rather than the **mean**; this is for consistency with the examples above where we manually compute the loss, but in practice it is more common to use the mean squared error as a loss by setting `reduction='elementwise_mean'`.
```
loss_fn = torch$nn$MSELoss(reduction = 'sum')
```
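As a quick sanity check, `reduction = 'sum'` reproduces the manual sum of squared errors from the previous chapters; a toy example with arbitrary values:
```
a <- torch$tensor(c(1.0, 2.0, 3.0))
b <- torch$tensor(c(1.5, 2.0, 2.0))
loss_fn(a, b)$item()                    # 0.25 + 0 + 1 = 1.25
(torch$sub(a, b))$pow(2)$sum()$item()   # 1.25 as well
```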
13\.6 Iterate through the dataset
---------------------------------
```
learning_rate = 1e-4

for (t in 1:500) {
  # Forward pass: compute predicted y by passing x to the model. Module objects
  # override the __call__ operator so you can call them like functions. When
  # doing so you pass a Tensor of input data to the Module and it produces
  # a Tensor of output data.
  y_pred = model(x)

  # Compute and print loss. We pass Tensors containing the predicted and true
  # values of y, and the loss function returns a Tensor containing the loss.
  loss = loss_fn(y_pred, y)
  cat(t, "\t")
  cat(loss$item(), "\n")

  # Zero the gradients before running the backward pass.
  model$zero_grad()

  # Backward pass: compute gradient of the loss with respect to all the learnable
  # parameters of the model. Internally, the parameters of each Module are stored
  # in Tensors with requires_grad=True, so this call will compute gradients for
  # all learnable parameters in the model.
  loss$backward()

  # Update the weights using gradient descent. Each parameter is a Tensor, so
  # we can access its data and gradients like we did before.
  with(torch$no_grad(), {
    for (param in iterate(model$parameters())) {
      # in Python this code is much simpler. In R we have to do some conversions
      # param$data <- torch$sub(param$data,
      #                         torch$mul(param$grad$float(),
      #                                   torch$scalar_tensor(learning_rate)))
      param$data <- param$data - param$grad * learning_rate
    }
  })
}
```
```
#> 1 628
#> 2 585
#> 3 547
#> 4 513
#> 5 482
#> 6 455
#> 7 430
#> 8 406
#> 9 385
#> 10 364
#> 11 345
#> 12 328
#> 13 311
#> 14 295
#> 15 280
#> 16 265
#> 17 252
#> 18 239
#> 19 226
#> 20 214
#> 21 203
#> 22 192
#> 23 181
#> 24 172
#> 25 162
#> 26 153
#> 27 145
#> 28 137
#> 29 129
#> 30 122
#> 31 115
#> 32 109
#> 33 103
#> 34 96.9
#> 35 91.5
#> 36 86.3
#> 37 81.5
#> 38 76.9
#> 39 72.6
#> 40 68.5
#> 41 64.6
#> 42 61
#> 43 57.6
#> 44 54.3
#> 45 51.3
#> 46 48.5
#> 47 45.8
#> 48 43.2
#> 49 40.9
#> 50 38.6
#> 51 36.5
#> 52 34.5
#> 53 32.7
#> 54 30.9
#> 55 29.3
#> 56 27.8
#> 57 26.3
#> 58 24.9
#> 59 23.7
#> 60 22.4
#> 61 21.3
#> 62 20.2
#> 63 19.2
#> 64 18.2
#> 65 17.3
#> 66 16.5
#> 67 15.7
#> 68 14.9
#> 69 14.2
#> 70 13.5
#> 71 12.9
#> 72 12.3
#> 73 11.7
#> 74 11.1
#> 75 10.6
#> 76 10.1
#> 77 9.67
#> 78 9.24
#> 79 8.82
#> 80 8.42
#> 81 8.05
#> 82 7.69
#> 83 7.35
#> 84 7.03
#> 85 6.72
#> 86 6.43
#> 87 6.16
#> 88 5.9
#> 89 5.65
#> 90 5.41
#> 91 5.18
#> 92 4.97
#> 93 4.76
#> 94 4.57
#> 95 4.38
#> 96 4.2
#> 97 4.03
#> 98 3.87
#> 99 3.72
#> 100 3.57
#> 101 3.43
#> 102 3.29
#> 103 3.17
#> 104 3.04
#> 105 2.92
#> 106 2.81
#> 107 2.7
#> 108 2.6
#> 109 2.5
#> 110 2.41
#> 111 2.31
#> 112 2.23
#> 113 2.14
#> 114 2.06
#> 115 1.99
#> 116 1.91
#> 117 1.84
#> 118 1.77
#> 119 1.71
#> 120 1.65
#> 121 1.59
#> 122 1.53
#> 123 1.47
#> 124 1.42
#> 125 1.37
#> 126 1.32
#> 127 1.27
#> 128 1.23
#> 129 1.18
#> 130 1.14
#> 131 1.1
#> 132 1.06
#> 133 1.02
#> 134 0.989
#> 135 0.954
#> 136 0.921
#> 137 0.889
#> 138 0.858
#> 139 0.828
#> 140 0.799
#> 141 0.772
#> 142 0.745
#> 143 0.719
#> 144 0.695
#> 145 0.671
#> 146 0.648
#> 147 0.626
#> 148 0.605
#> 149 0.584
#> 150 0.564
#> 151 0.545
#> 152 0.527
#> 153 0.509
#> 154 0.492
#> 155 0.476
#> 156 0.46
#> 157 0.444
#> 158 0.43
#> 159 0.415
#> 160 0.402
#> 161 0.388
#> 162 0.375
#> 163 0.363
#> 164 0.351
#> 165 0.339
#> 166 0.328
#> 167 0.318
#> 168 0.307
#> 169 0.297
#> 170 0.287
#> 171 0.278
#> 172 0.269
#> 173 0.26
#> 174 0.252
#> 175 0.244
#> 176 0.236
#> 177 0.228
#> 178 0.221
#> 179 0.214
#> 180 0.207
#> 181 0.2
#> 182 0.194
#> 183 0.187
#> 184 0.181
#> 185 0.176
#> 186 0.17
#> 187 0.165
#> 188 0.159
#> 189 0.154
#> 190 0.149
#> 191 0.145
#> 192 0.14
#> 193 0.136
#> 194 0.131
#> 195 0.127
#> 196 0.123
#> 197 0.119
#> 198 0.115
#> 199 0.112
#> 200 0.108
#> 201 0.105
#> 202 0.102
#> 203 0.0983
#> 204 0.0952
#> 205 0.0923
#> 206 0.0894
#> 207 0.0866
#> 208 0.0838
#> 209 0.0812
#> 210 0.0787
#> 211 0.0762
#> 212 0.0739
#> 213 0.0716
#> 214 0.0693
#> 215 0.0672
#> 216 0.0651
#> 217 0.0631
#> 218 0.0611
#> 219 0.0592
#> 220 0.0574
#> 221 0.0556
#> 222 0.0539
#> 223 0.0522
#> 224 0.0506
#> 225 0.0491
#> 226 0.0476
#> 227 0.0461
#> 228 0.0447
#> 229 0.0433
#> 230 0.042
#> 231 0.0407
#> 232 0.0394
#> 233 0.0382
#> 234 0.0371
#> 235 0.0359
#> 236 0.0348
#> 237 0.0338
#> 238 0.0327
#> 239 0.0317
#> 240 0.0308
#> 241 0.0298
#> 242 0.0289
#> 243 0.028
#> 244 0.0272
#> 245 0.0263
#> 246 0.0255
#> 247 0.0248
#> 248 0.024
#> 249 0.0233
#> 250 0.0226
#> 251 0.0219
#> 252 0.0212
#> 253 0.0206
#> 254 0.02
#> 255 0.0194
#> 256 0.0188
#> 257 0.0182
#> 258 0.0177
#> 259 0.0171
#> 260 0.0166
#> 261 0.0161
#> 262 0.0156
#> 263 0.0151
#> 264 0.0147
#> 265 0.0142
#> 266 0.0138
#> 267 0.0134
#> 268 0.013
#> 269 0.0126
#> 270 0.0122
#> 271 0.0119
#> 272 0.0115
#> 273 0.0112
#> 274 0.0108
#> 275 0.0105
#> 276 0.0102
#> 277 0.00988
#> 278 0.00959
#> 279 0.0093
#> 280 0.00902
#> 281 0.00875
#> 282 0.00849
#> 283 0.00824
#> 284 0.00799
#> 285 0.00775
#> 286 0.00752
#> 287 0.0073
#> 288 0.00708
#> 289 0.00687
#> 290 0.00666
#> 291 0.00647
#> 292 0.00627
#> 293 0.00609
#> 294 0.00591
#> 295 0.00573
#> 296 0.00556
#> 297 0.0054
#> 298 0.00524
#> 299 0.00508
#> 300 0.00493
#> 301 0.00478
#> 302 0.00464
#> 303 0.0045
#> 304 0.00437
#> 305 0.00424
#> 306 0.00412
#> 307 0.00399
#> 308 0.00388
#> 309 0.00376
#> 310 0.00365
#> 311 0.00354
#> 312 0.00344
#> 313 0.00334
#> 314 0.00324
#> 315 0.00314
#> 316 0.00305
#> 317 0.00296
#> 318 0.00287
#> 319 0.00279
#> 320 0.00271
#> 321 0.00263
#> 322 0.00255
#> 323 0.00248
#> 324 0.0024
#> 325 0.00233
#> 326 0.00226
#> 327 0.0022
#> 328 0.00213
#> 329 0.00207
#> 330 0.00201
#> 331 0.00195
#> 332 0.00189
#> 333 0.00184
#> 334 0.00178
#> 335 0.00173
#> 336 0.00168
#> 337 0.00163
#> 338 0.00158
#> 339 0.00154
#> 340 0.00149
#> 341 0.00145
#> 342 0.00141
#> 343 0.00137
#> 344 0.00133
#> 345 0.00129
#> 346 0.00125
#> 347 0.00121
#> 348 0.00118
#> 349 0.00114
#> 350 0.00111
#> 351 0.00108
#> 352 0.00105
#> 353 0.00102
#> 354 0.000987
#> 355 0.000958
#> 356 0.000931
#> 357 0.000904
#> 358 0.000877
#> 359 0.000852
#> 360 0.000827
#> 361 0.000803
#> 362 0.00078
#> 363 0.000757
#> 364 0.000735
#> 365 0.000714
#> 366 0.000693
#> 367 0.000673
#> 368 0.000654
#> 369 0.000635
#> 370 0.000617
#> 371 0.000599
#> 372 0.000581
#> 373 0.000565
#> 374 0.000548
#> 375 0.000532
#> 376 0.000517
#> 377 0.000502
#> 378 0.000488
#> 379 0.000474
#> 380 0.00046
#> 381 0.000447
#> 382 0.000434
#> 383 0.000421
#> 384 0.000409
#> 385 0.000397
#> 386 0.000386
#> 387 0.000375
#> 388 0.000364
#> 389 0.000354
#> 390 0.000343
#> 391 0.000334
#> 392 0.000324
#> 393 0.000315
#> 394 0.000306
#> 395 0.000297
#> 396 0.000288
#> 397 0.00028
#> 398 0.000272
#> 399 0.000264
#> 400 0.000257
#> 401 0.000249
#> 402 0.000242
#> 403 0.000235
#> 404 0.000228
#> 405 0.000222
#> 406 0.000216
#> 407 0.000209
#> 408 0.000203
#> 409 0.000198
#> 410 0.000192
#> 411 0.000186
#> 412 0.000181
#> 413 0.000176
#> 414 0.000171
#> 415 0.000166
#> 416 0.000161
#> 417 0.000157
#> 418 0.000152
#> 419 0.000148
#> 420 0.000144
#> 421 0.00014
#> 422 0.000136
#> 423 0.000132
#> 424 0.000128
#> 425 0.000124
#> 426 0.000121
#> 427 0.000117
#> 428 0.000114
#> 429 0.000111
#> 430 0.000108
#> 431 0.000105
#> 432 0.000102
#> 433 9.87e-05
#> 434 9.59e-05
#> 435 9.32e-05
#> 436 9.06e-05
#> 437 8.8e-05
#> 438 8.55e-05
#> 439 8.31e-05
#> 440 8.07e-05
#> 441 7.84e-05
#> 442 7.62e-05
#> 443 7.4e-05
#> 444 7.2e-05
#> 445 6.99e-05
#> 446 6.79e-05
#> 447 6.6e-05
#> 448 6.41e-05
#> 449 6.23e-05
#> 450 6.06e-05
#> 451 5.89e-05
#> 452 5.72e-05
#> 453 5.56e-05
#> 454 5.4e-05
#> 455 5.25e-05
#> 456 5.1e-05
#> 457 4.96e-05
#> 458 4.82e-05
#> 459 4.68e-05
#> 460 4.55e-05
#> 461 4.42e-05
#> 462 4.3e-05
#> 463 4.18e-05
#> 464 4.06e-05
#> 465 3.94e-05
#> 466 3.83e-05
#> 467 3.72e-05
#> 468 3.62e-05
#> 469 3.52e-05
#> 470 3.42e-05
#> 471 3.32e-05
#> 472 3.23e-05
#> 473 3.14e-05
#> 474 3.05e-05
#> 475 2.96e-05
#> 476 2.88e-05
#> 477 2.8e-05
#> 478 2.72e-05
#> 479 2.65e-05
#> 480 2.57e-05
#> 481 2.5e-05
#> 482 2.43e-05
#> 483 2.36e-05
#> 484 2.29e-05
#> 485 2.23e-05
#> 486 2.17e-05
#> 487 2.11e-05
#> 488 2.05e-05
#> 489 1.99e-05
#> 490 1.94e-05
#> 491 1.88e-05
#> 492 1.83e-05
#> 493 1.78e-05
#> 494 1.73e-05
#> 495 1.68e-05
#> 496 1.63e-05
#> 497 1.59e-05
#> 498 1.54e-05
#> 499 1.5e-05
#> 500 1.46e-05
```
13\.7 Using R generics
----------------------
### 13\.7\.1 Simplify tensor operations
The following two expressions are equivalent, the first being the long, natural way of doing it in **PyTorch**. The second uses the R generics for subtraction, multiplication and scalar conversion.
```
param$data <- torch$sub(param$data,
torch$mul(param$grad$float(),
torch$scalar_tensor(learning_rate)))
```
```
param$data <- param$data - param$grad * learning_rate
```
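A toy check that both forms produce the same tensor; the example values are arbitrary:
```
p  <- torch$ones(2L, 2L)
g  <- torch$ones(2L, 2L) * 0.5   # stands in for a gradient tensor
lr <- 0.1
long_way  <- torch$sub(p, torch$mul(g, torch$scalar_tensor(lr)))
short_way <- p - g * lr
long_way$equal(short_way)   # TRUE
```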
13\.8 An elegant neural network
-------------------------------
```
invisible(torch$manual_seed(0))   # do not show the generator output

# layer properties
N <- 64L; D_in <- 1000L; H <- 100L; D_out <- 10L

# Create random Tensors to hold inputs and outputs
x = torch$randn(N, D_in, device=device)
y = torch$randn(N, D_out, device=device)

# set up the neural network
model <- torch$nn$Sequential(
  torch$nn$Linear(D_in, H),              # first layer
  torch$nn$ReLU(),                       # activation
  torch$nn$Linear(H, D_out))$to(device)  # output layer

# specify how we will be computing the loss
loss_fn = torch$nn$MSELoss(reduction = 'sum')

learning_rate = 1e-4
loss_row <- list(vector())   # collect a list for the final dataframe

for (t in 1:500) {
  # Forward pass: compute predicted y by passing x to the model. Module objects
  # override the __call__ operator so you can call them like functions. When
  # doing so you pass a Tensor of input data to the Module and it produces
  # a Tensor of output data.
  y_pred = model(x)

  # Compute and print loss. We pass Tensors containing the predicted and true
  # values of y, and the loss function returns a Tensor containing the loss.
  loss = loss_fn(y_pred, y)   # (y_pred - y) is a tensor; loss_fn output is a scalar
  loss_row[[t]] <- c(t, loss$item())

  # Zero the gradients before running the backward pass.
  model$zero_grad()

  # Backward pass: compute gradient of the loss with respect to all the learnable
  # parameters of the model. Internally, the parameters of each module are stored
  # in tensors with `requires_grad=True`, so this call will compute gradients for
  # all learnable parameters in the model.
  loss$backward()

  # Update the weights using gradient descent. Each parameter is a tensor, so
  # we can access its data and gradients like we did before.
  with(torch$no_grad(), {
    for (param in iterate(model$parameters())) {
      # using R generics
      param$data <- param$data - param$grad * learning_rate
    }
  })
}
```
13\.9 A browseable dataframe
----------------------------
```
library(DT)
loss_df <- data.frame(Reduce(rbind, loss_row), row.names = NULL)
names(loss_df)[1] <- "iter"
names(loss_df)[2] <- "loss"
DT::datatable(loss_df)
```
13\.10 Plot the loss at each iteration
--------------------------------------
```
library(ggplot2)
# plot
ggplot(loss_df, aes(x = iter, y = loss)) +
geom_point()
```
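The loss falls over several orders of magnitude, so most points crowd near zero. A log scale on the y axis (an optional tweak, not in the original) makes the decay easier to read:
```
ggplot(loss_df, aes(x = iter, y = loss)) +
  geom_point() +
  scale_y_log10()
```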
Chapter 14 Working with a dataframe
====================================
*Last update: Thu Nov 19 14:24:08 2020 \-0600 (ca4f8b4a0\)*
14\.1 Load PyTorch libraries
----------------------------
```
library(rTorch)
torch <- import("torch")
torchvision <- import("torchvision")
nn <- import("torch.nn")
transforms <- import("torchvision.transforms")
dsets <- import("torchvision.datasets")
builtins <- import_builtins()
np <- import("numpy")
```
14\.2 Load dataset
------------------
```
# folders where the images are located
train_data_path = './mnist_png_full/training/'
test_data_path = './mnist_png_full/testing/'
```
```
# read the datasets without normalization
train_dataset = torchvision$datasets$ImageFolder(root = train_data_path,
transform = torchvision$transforms$ToTensor()
)
print(train_dataset)
```
```
#> Dataset ImageFolder
#> Number of datapoints: 60000
#> Root location: ./mnist_png_full/training/
#> StandardTransform
#> Transform: ToTensor()
```
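Each item of an `ImageFolder` dataset is a `(image tensor, class label)` pair. A minimal sketch of pulling out the first sample with `py_get_item()` (indexing is zero\-based on the Python side):
```
sample <- py_get_item(train_dataset, 0L)
img    <- sample[[0]]   # a 3x28x28 float tensor, as the table below confirms
label  <- sample[[1]]   # the integer class label (the digit's folder)
img$size()
```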
14\.3 Summary statistics for tensors
------------------------------------
### 14\.3\.1 Using `data.frame`
```
library(tictoc)
tic()
fun_list <- list(
size = c("size"),
numel = c("numel"),
sum = c("sum", "item"),
mean = c("mean", "item"),
std = c("std", "item"),
med = c("median", "item"),
max = c("max", "item"),
min = c("min", "item")
)
idx <- seq(0L, 599L) # 0-based indices of the first 600 samples
fun_get_tensor <- function(x) py_get_item(train_dataset, x)[[0]]
stat_fun <- function(x, str_fun) {
fun_var <- paste0("fun_get_tensor(x)", "$", str_fun, "()")
sapply(idx, function(x)
ifelse(is.numeric(eval(parse(text = fun_var))), # size() returns a character
eval(parse(text = fun_var)), # all else are numeric
as.character(eval(parse(text = fun_var)))))
}
df <- data.frame(ridx = idx+1, # index number for the sample
do.call(data.frame,
lapply(
sapply(fun_list, function(x) paste(x, collapse = "()$")),
function(y) stat_fun(1, y)
)
)
)
```
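If only a single statistic is needed, the same result can be obtained without the `eval`/`parse` machinery by calling the tensor methods directly. A minimal sketch (the variable name `means` is ours):
```
# Mean pixel value of each of the first 600 samples.
means <- sapply(idx, function(i) fun_get_tensor(i)$mean()$item())
head(means)
```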
Summary statistics:
```
head(df, 20)
```
```
#> ridx size numel sum mean std med max min
#> 1 1 torch.Size([3, 28, 28]) 2352 366 0.156 0.329 0 1.000 0
#> 2 2 torch.Size([3, 28, 28]) 2352 284 0.121 0.297 0 1.000 0
#> 3 3 torch.Size([3, 28, 28]) 2352 645 0.274 0.420 0 1.000 0
#> 4 4 torch.Size([3, 28, 28]) 2352 410 0.174 0.355 0 1.000 0
#> 5 5 torch.Size([3, 28, 28]) 2352 321 0.137 0.312 0 1.000 0
#> 6 6 torch.Size([3, 28, 28]) 2352 654 0.278 0.421 0 1.000 0
#> 7 7 torch.Size([3, 28, 28]) 2352 496 0.211 0.374 0 1.000 0
#> 8 8 torch.Size([3, 28, 28]) 2352 549 0.233 0.399 0 1.000 0
#> 9 9 torch.Size([3, 28, 28]) 2352 449 0.191 0.365 0 1.000 0
#> 10 10 torch.Size([3, 28, 28]) 2352 465 0.198 0.367 0 1.000 0
#> 11 11 torch.Size([3, 28, 28]) 2352 383 0.163 0.338 0 1.000 0
#> 12 12 torch.Size([3, 28, 28]) 2352 499 0.212 0.378 0 1.000 0
#> 13 13 torch.Size([3, 28, 28]) 2352 313 0.133 0.309 0 0.996 0
#> 14 14 torch.Size([3, 28, 28]) 2352 360 0.153 0.325 0 1.000 0
#> 15 15 torch.Size([3, 28, 28]) 2352 435 0.185 0.358 0 0.996 0
#> 16 16 torch.Size([3, 28, 28]) 2352 429 0.182 0.358 0 1.000 0
#> 17 17 torch.Size([3, 28, 28]) 2352 596 0.254 0.408 0 1.000 0
#> 18 18 torch.Size([3, 28, 28]) 2352 527 0.224 0.392 0 1.000 0
#> 19 19 torch.Size([3, 28, 28]) 2352 303 0.129 0.301 0 1.000 0
#> 20 20 torch.Size([3, 28, 28]) 2352 458 0.195 0.364 0 1.000 0
```
Elapsed time for different sample sizes:
```
toc()
# 60 1.663s
# 600 13.5s
# 6000 54.321 sec;
# 60000 553.489 sec elapsed
```
```
#> 17.327 sec elapsed
```
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/working-with-datatable.html |
Chapter 15 Working with data.table
==================================
*Last update: Thu Nov 19 14:24:08 2020 \-0600 (ca4f8b4a0\)*
15\.1 Load PyTorch libraries
----------------------------
```
library(rTorch)
torch <- import("torch")
torchvision <- import("torchvision")
nn <- import("torch.nn")
transforms <- import("torchvision.transforms")
dsets <- import("torchvision.datasets")
builtins <- import_builtins()
np <- import("numpy")
```
15\.2 Load dataset
------------------
```
## Dataset iteration batch settings
# folders where the images are located
train_data_path = './mnist_png_full/training/'
test_data_path = './mnist_png_full/testing/'
```
15\.3 Datasets without normalization
------------------------------------
```
train_dataset = torchvision$datasets$ImageFolder(root = train_data_path,
transform = torchvision$transforms$ToTensor()
)
print(train_dataset)
```
```
#> Dataset ImageFolder
#> Number of datapoints: 60000
#> Root location: ./mnist_png_full/training/
#> StandardTransform
#> Transform: ToTensor()
```
15\.4 Using `data.table`
------------------------
```
library(data.table)
library(tictoc)
tic()
fun_list <- list(
numel = c("numel"),
sum = c("sum", "item"),
mean = c("mean", "item"),
std = c("std", "item"),
med = c("median", "item"),
max = c("max", "item"),
min = c("min", "item")
)
idx <- seq(0L, 599L)
fun_get_tensor <- function(x) py_get_item(train_dataset, x)[[0]]
stat_fun <- function(x, str_fun) {
fun_var <- paste0("fun_get_tensor(x)", "$", str_fun, "()")
sapply(idx, function(x)
ifelse(is.numeric(eval(parse(text = fun_var))), # non-numeric results as character
eval(parse(text = fun_var)), # all else are numeric
as.character(eval(parse(text = fun_var)))))
}
dt <- data.table(ridx = idx+1,
do.call(data.table,
lapply(
sapply(fun_list, function(x) paste(x, collapse = "()$")),
function(y) stat_fun(1, y)
)
)
)
```
Summary statistics:
```
head(dt)
```
```
#> ridx numel sum mean std med max min
#> 1: 1 2352 366 0.156 0.329 0 1 0
#> 2: 2 2352 284 0.121 0.297 0 1 0
#> 3: 3 2352 645 0.274 0.420 0 1 0
#> 4: 4 2352 410 0.174 0.355 0 1 0
#> 5: 5 2352 321 0.137 0.312 0 1 0
#> 6: 6 2352 654 0.278 0.421 0 1 0
```
Elapsed time for different sample sizes:
```
toc()
# 60 1.266 sec elapsed
# 600 11.798 sec elapsed;
# 6000 119.256 sec elapsed;
# 60000 1117.619 sec elapsed
```
```
#> 14.8 sec elapsed
```
| Machine Learning |
f0nzie.github.io | https://f0nzie.github.io/rtorch-minimal-book/appendixB.html |
B Activation Functions
======================
*Last update: Thu Oct 22 16:46:28 2020 \-0500 (54a46ea04\)*
```
library(rTorch)
library(ggplot2)
```
B.1 Sigmoid
-----------
Using the PyTorch `sigmoid()` function:
```
x <- torch$range(-5., 5., 0.1)
y <- torch$sigmoid(x)
df <- data.frame(x = x$numpy(), sx = y$numpy())
df
ggplot(df, aes(x = x, y = sx)) +
geom_point() +
ggtitle("Sigmoid")
```
```
#> x sx
#> 1 -5.0 0.00669
#> 2 -4.9 0.00739
#> 3 -4.8 0.00816
#> 4 -4.7 0.00901
#> 5 -4.6 0.00995
#> 6 -4.5 0.01099
#> 7 -4.4 0.01213
#> 8 -4.3 0.01339
#> 9 -4.2 0.01477
#> 10 -4.1 0.01630
#> 11 -4.0 0.01799
#> 12 -3.9 0.01984
#> 13 -3.8 0.02188
#> 14 -3.7 0.02413
#> 15 -3.6 0.02660
#> 16 -3.5 0.02931
#> 17 -3.4 0.03230
#> 18 -3.3 0.03557
#> 19 -3.2 0.03917
#> 20 -3.1 0.04311
#> 21 -3.0 0.04743
#> 22 -2.9 0.05215
#> 23 -2.8 0.05732
#> 24 -2.7 0.06297
#> 25 -2.6 0.06914
#> 26 -2.5 0.07586
#> 27 -2.4 0.08317
#> 28 -2.3 0.09112
#> 29 -2.2 0.09975
#> 30 -2.1 0.10910
#> 31 -2.0 0.11920
#> 32 -1.9 0.13011
#> 33 -1.8 0.14185
#> 34 -1.7 0.15447
#> 35 -1.6 0.16798
#> 36 -1.5 0.18243
#> 37 -1.4 0.19782
#> 38 -1.3 0.21417
#> 39 -1.2 0.23148
#> 40 -1.1 0.24974
#> 41 -1.0 0.26894
#> 42 -0.9 0.28905
#> 43 -0.8 0.31003
#> 44 -0.7 0.33181
#> 45 -0.6 0.35434
#> 46 -0.5 0.37754
#> 47 -0.4 0.40131
#> 48 -0.3 0.42556
#> 49 -0.2 0.45017
#> 50 -0.1 0.47502
#> 51 0.0 0.50000
#> 52 0.1 0.52498
#> 53 0.2 0.54983
#> 54 0.3 0.57444
#> 55 0.4 0.59869
#> 56 0.5 0.62246
#> 57 0.6 0.64566
#> 58 0.7 0.66819
#> 59 0.8 0.68997
#> 60 0.9 0.71095
#> 61 1.0 0.73106
#> 62 1.1 0.75026
#> 63 1.2 0.76852
#> 64 1.3 0.78584
#> 65 1.4 0.80218
#> 66 1.5 0.81757
#> 67 1.6 0.83202
#> 68 1.7 0.84553
#> 69 1.8 0.85815
#> 70 1.9 0.86989
#> 71 2.0 0.88080
#> 72 2.1 0.89090
#> 73 2.2 0.90025
#> 74 2.3 0.90888
#> 75 2.4 0.91683
#> 76 2.5 0.92414
#> 77 2.6 0.93086
#> 78 2.7 0.93703
#> 79 2.8 0.94268
#> 80 2.9 0.94785
#> 81 3.0 0.95257
#> 82 3.1 0.95689
#> 83 3.2 0.96083
#> 84 3.3 0.96443
#> 85 3.4 0.96770
#> 86 3.5 0.97069
#> 87 3.6 0.97340
#> 88 3.7 0.97587
#> 89 3.8 0.97812
#> 90 3.9 0.98016
#> 91 4.0 0.98201
#> 92 4.1 0.98370
#> 93 4.2 0.98523
#> 94 4.3 0.98661
#> 95 4.4 0.98787
#> 96 4.5 0.98901
#> 97 4.6 0.99005
#> 98 4.7 0.99099
#> 99 4.8 0.99184
#> 100 4.9 0.99261
#> 101 5.0 0.99331
```
Plot the sigmoid function using an R custom\-made function:
```
sigmoid = function(x) {
1 / (1 + exp(-x))
}
x <- seq(-5, 5, 0.01)
plot(x, sigmoid(x), col = 'blue', cex = 0.5, main = "Sigmoid")
```
B.2 ReLU
--------
Using the PyTorch `relu()` function:
```
x <- torch$range(-5., 5., 0.1)
y <- torch$relu(x)
df <- data.frame(x = x$numpy(), sx = y$numpy())
df
ggplot(df, aes(x = x, y = sx)) +
geom_point() +
ggtitle("ReLU")
```
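For comparison, here is a plain\-R version of ReLU, mirroring the custom sigmoid shown above (a minimal sketch; `relu` is our own helper name):
```
relu = function(x) {
  pmax(x, 0)  # element-wise max(x, 0)
}
x <- seq(-5, 5, 0.01)
plot(x, relu(x), col = 'blue', cex = 0.5, main = "ReLU")
```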
B.3 tanh
--------
Using the PyTorch `tanh()` function:
```
x <- torch$range(-5., 5., 0.1)
y <- torch$tanh(x)
df <- data.frame(x = x$numpy(), sx = y$numpy())
df
ggplot(df, aes(x = x, y = sx)) +
geom_point() +
ggtitle("tanh")
```
B.4 Softmax
-----------
Using the PyTorch `softmax()` function:
```
x <- torch$range(-5.0, 5.0, 0.1)
y <- torch$softmax(x, dim=0L)
df <- data.frame(x = x$numpy(), sx = y$numpy())
ggplot(df, aes(x = x, y = sx)) +
geom_point() +
ggtitle("Softmax")
```
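As with the sigmoid, the softmax can also be written as a small custom R function. A sketch (subtracting the maximum is a standard numerical\-stability trick and does not change the result):
```
softmax = function(x) {
  e <- exp(x - max(x))  # shift by max(x) for numerical stability
  e / sum(e)
}
x <- seq(-5, 5, 0.1)
plot(x, softmax(x), col = 'blue', cex = 0.5, main = "Softmax")
```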
B.5 Activation functions in Python
----------------------------------
```
library(rTorch)
```
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
```
### Linear activation
```
def Linear(x, derivative=False):
"""
Computes the Linear activation function for array x
inputs:
x: array
derivative: if True, return the derivative else the forward pass
"""
if derivative: # Return derivative of the function at x
return np.ones_like(x)
else: # Return forward pass of the function at x
return x
```
### Sigmoid activation
```
def Sigmoid(x, derivative=False):
"""
Computes the Sigmoid activation function for array x
inputs:
x: array
derivative: if True, return the derivative else the forward pass
"""
f = 1/(1+np.exp(-x))
if derivative: # Return derivative of the function at x
return f*(1-f)
else: # Return forward pass of the function at x
return f
```
### Hyperbolic Tangent activation
```
def Tanh(x, derivative=False):
"""
Computes the Hyperbolic Tangent activation function for array x
inputs:
x: array
derivative: if True, return the derivative else the forward pass
"""
f = (np.exp(x)-np.exp(-x))/(np.exp(x)+np.exp(-x))
if derivative: # Return derivative of the function at x
return 1-f**2
else: # Return the forward pass of the function at x
return f
```
### Rectifier linear unit (ReLU)
```
def ReLU(x, derivative=False):
"""
Computes the Rectifier Linear Unit activation function for array x
inputs:
x: array
derivative: if True, return the derivative else the forward pass
"""
if derivative: # Return derivative of the function at x
return (x>0).astype(int)
else: # Return forward pass of the function at x
return np.maximum(x, 0)
```
### Visualization with `matplotlib`
Plotting using `matplotlib`:
```
x = np.linspace(-6, 6, 100)
units = {
"Linear": lambda x: Linear(x),
"Sigmoid": lambda x: Sigmoid(x),
"ReLU": lambda x: ReLU(x),
"tanh": lambda x: Tanh(x)
}
plt.figure(figsize=(5, 5))
[plt.plot(x, unit(x), label=unit_name, lw=2)
for unit_name, unit in units.items()]
```
```
plt.legend(loc=2, fontsize=16)
plt.title('Activation functions', fontsize=20)
plt.ylim([-2, 5])
```
```
plt.xlim([-6, 6])
```
```
plt.show()
```
B.6 Softmax code in Python
--------------------------
```
# Source: https://dataaspirant.com/2017/03/07/difference-between-softmax-function-and-sigmoid-function/
import numpy as np
import matplotlib.pyplot as plt
def softmax(inputs):
"""
Calculate the softmax for the given inputs (array)
:param inputs:
:return:
"""
return np.exp(inputs) / float(sum(np.exp(inputs)))
def line_graph(x, y, x_title, y_title):
"""
Draw line graph with x and y values
:param x:
:param y:
:param x_title:
:param y_title:
:return:
"""
plt.plot(x, y)
plt.xlabel(x_title)
plt.ylabel(y_title)
plt.show()
graph_x = np.linspace(-6, 6, 100)
graph_y = softmax(graph_x)
print("Graph X readings: {}".format(graph_x))
```
```
print("Graph Y readings: {}".format(graph_y))
```
```
line_graph(graph_x, graph_y, "Inputs", "Softmax Scores")
```
| Machine Learning |
enriquegit.github.io | https://enriquegit.github.io/behavior-free/intro.html |
Chapter 1 Introduction to Behavior and Machine Learning
=======================================================
In recent years, machine learning has emerged as one of the key technologies that enable and support many of the services and products we use in our everyday lives, and it is expanding quickly. Machine learning has also helped to accelerate research and development in almost every field, including the natural sciences, engineering, the social sciences, medicine, art, and culture. Even though all those fields (and their respective sub\-fields) are very diverse, most of them have something in common: they involve living organisms (cells, microbes, plants, humans, animals, etc.), and living organisms express behaviors. This book teaches you machine learning and data\-driven methods to analyze different types of behaviors. Some of those methods include supervised, unsupervised, and deep learning. You will also learn how to explore, encode, preprocess, and visualize behavioral data. While the examples in this book focus on behavior analysis, the methods and techniques can be applied in any other context.
This chapter starts by introducing the concepts of *behavior* and *machine learning*. Next, basic machine learning terminology is presented and you will build your first classification and regression models. Then, you will learn how to evaluate the performance of your models and important concepts such as *underfitting*, *overfitting*, *bias*, and *variance*.
1\.1 What Is Behavior?
----------------------
Living organisms are constantly sensing and analyzing their surrounding environment. This includes inanimate objects but also other living entities. All of this is done with the objective of making decisions and taking actions, either consciously or unconsciously. If we see someone running, we will react differently depending on whether we are at a stadium or in a bank. At the same time, we may also analyze other cues such as the runner’s facial expressions, clothes, items, and the reactions of the other people around us. Based on this aggregated information, we can decide how to react and behave. All of this is supported by the organisms’ sensing capabilities and decision\-making processes (the brain and/or chemical reactions). Understanding our environment and how others behave is crucial for conducting our everyday life activities and provides support for other tasks. But **what is behavior**? The Cambridge dictionary defines behavior as:
> *“the way that a person, an animal, a substance, etc. behaves in a particular situation or under particular conditions”.*
Another definition by dictionary.com is:
> *“observable activity in a human or animal”.*
The definitions are similar and both include humans and animals. Following those definitions, this book will focus on the automatic analysis of human and animal behaviors; however, the methods can also be applied to robots and to a wide variety of problems in different domains. There are three main reasons why one may want to analyze behaviors in an automatic manner:
1. **React.** A biological or an artificial agent (or a combination of both) can take actions based on what is happening in the surrounding environment. For example, if suspicious behavior is detected in an airport, preventive actions can be triggered by security systems and the corresponding authorities. Without the ability to automate such a detection system, it would be infeasible to implement in practice. Just imagine trying to analyze airport traffic by hand.
2. **Understand.** Analyzing the behavior of an organism can help us to understand other associated behaviors and processes and to answer research questions. For example, Williams et al. ([2020](#ref-williams2020)) found that *Andean condors*, the heaviest soaring bird (see Figure [1\.1](intro.html#fig:condor)), only flap their wings for about \\(1\\%\\) of their total flight time. In one of the cases, a condor flew \\(\\approx 172\\) km without flapping. Those findings were the result of analyzing the birds’ behavior from data recorded by bio\-logging devices. In this book, several examples that make use of inertial devices will be studied.
FIGURE 1\.1: Andean condor. (Hugo Pédel, France, Travail personnel. Cliché réalisé dans le Parc National Argentin Nahuel Huapi, San Carlos de Bariloche, Laguna Tonchek. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
3. **Document and Archive.** Finally, we may want to document certain behaviors for future use. This could be for evidence purposes, or perhaps it is not yet clear how the information will be used but it may come in handy later. Based on the archived information, one could gain new knowledge in the future and use it to react (take decisions/actions), as shown in Figure [1\.2](intro.html#fig:decisionsActions). For example, we could document our nutritional habits (what we eat, how often, etc.). If there is a health issue, a specialist could use this historical information to make a more precise diagnosis and propose actions.
FIGURE 1\.2: Taking decisions from archived behaviors.
Some behaviors can be used as a proxy to understand other behaviors, states, and/or processes. For example, detecting body movement behaviors during a job interview could serve as the basis to understand stress levels. Behaviors can also be modeled as a composition of lower\-level behaviors. Chapter [7](representations.html#representations) presents a method called *Bag of Words* that can be used to decompose complex behaviors into a set of simpler ones.
In order to analyze and monitor behaviors, we need a way to observe them. Living organisms use their available senses such as eyesight, hearing, smell, echolocation (bats, dolphins), thermal senses (snakes, mosquitoes), etc. In the case of machines, they need *sensors* to accomplish or approximate those tasks, for example color and thermal cameras, microphones, temperature sensors, and so on.
The reduction in the size of sensors has allowed the development of more powerful wearable devices. *Wearable devices* are electronic devices that are worn by a user, usually as accessories or embedded in clothes. Examples of wearable devices are smartphones, smartwatches, fitness bracelets, actigraphy watches, etc. These devices have embedded sensors that allow them to monitor different aspects of a user such as activity levels, blood pressure, temperature, and location, to name a few. Examples of sensors that can be found in those devices are accelerometers, magnetometers, gyroscopes, heart rate, microphones, Wi\-Fi, Bluetooth, Galvanic skin response (GSR), etc.
Several of those sensors were initially used for some specific purposes. For example, accelerometers in smartphones were intended to be used for gaming or detecting the device’s orientation. Later, some people started to propose and implement new use cases such as activity recognition ([Shoaib et al. 2015](#ref-shoaib2015)) and fall detection. The magnetometer, which measures the earth’s magnetic field, was mainly used with map applications to determine the orientation of the device, but later, it was found that it can also be used for indoor location purposes ([Brena et al. 2017](#ref-brena2017)).
In general, wearable devices have been successfully applied to track different types of behaviors such as physical activity, sports activities, location, and even mental health states ([Garcia\-Ceja, Riegler, Nordgreen, et al. 2018](#ref-garciaSurvey2018)). Those devices generate a lot of raw data, but it will be our task to process and analyze it. Doing it by hand becomes impossible given the large amounts of data and their variety. In this book, several *machine learning* methods will be introduced that will allow you to extract and analyze different types of behaviors from data. The next section will begin with an introduction to machine learning. The rest of this chapter will introduce the required machine learning concepts before we start analyzing behaviors in chapter [2](classification.html#classification).
1\.2 What Is Machine Learning?
------------------------------
You can think of *machine learning* as a set of computational algorithms that *automatically* find useful patterns and relationships from data. Here, the keyword is *automatic*. When trying to solve a problem, one can hard\-code a predefined set of rules, for example, chained if\-else conditions. For instance, if we want to detect if the object in a picture is an *orange* or a *pear*, we can do something like:
```
# A hard-coded rule (number_green_pixels is the fraction of green pixels).
classify_fruit <- function(number_green_pixels) {
  if (number_green_pixels > 0.90) # more than 90% green pixels
    "pear"
  else
    "orange"
}
```
This simple rule should do the job. Now imagine that your boss tells you that the system needs to recognize *green apples* as well. Our previous rule will no longer work, and we will need to include additional rules and thresholds. On the other hand, a machine learning algorithm will automatically learn such rules based on the updated data. So, you only need to update your data with examples of *green apples* and “click” the re\-train button!
The result of *learning* is *knowledge* that the system can use to solve new instances of a problem. In this case, when you show a new image to the system, it should be able to recognize the type of fruit. Figure [1\.3](intro.html#fig:mlPhases) shows this general idea.
FIGURE 1\.3: Overall Machine Learning phases. The ‘?’ represents the new unknown object for which we want to obtain a prediction using the learned model.
For more formal definitions of machine learning, I recommend you check ([Kononenko and Kukar 2007](#ref-kononenko2007)).
Machine learning methods rely on three main building blocks:
* **Data**
* **Algorithms**
* **Models**
Every machine learning method needs **data** to learn from. For the example of the fruits, we need examples of images for each type of fruit we want to recognize. Additionally, we need their corresponding output types (labels) so the algorithm can learn how to associate each image with its corresponding label.
Not every machine learning method needs the expected output or labels (more on this in the Taxonomy section [1\.3](intro.html#taxonomy)).
Typically, an **algorithm** will use the **data** to learn a **model**. This is called the learning or training phase. The learned **model** can then be used to generate predictions when presented with new data. The data used to train the models is called the **train set**. Since we need a way to test how the model will perform once it is deployed in a real setting (in production), we also need what is known as the **test set**. The test set is used to estimate the model’s performance on data it has never seen before (more on this will be presented in section [1\.6](intro.html#trainingeval)).
1\.3 Types of Machine Learning
------------------------------
Machine learning methods can be grouped into different types. Figure [1\.4](intro.html#fig:mlTaxonomy) depicts a categorization of machine learning ‘types’. This figure is based on ([Biecek et al. 2012](#ref-biecek2012)). The \\(x\\) axis represents the certainty of the labels and the \\(y\\) axis the percent of training data that is labeled. In the previous example, the labels are the names of the fruits associated with each image.
FIGURE 1\.4: Machine learning taxonomy. (Adapted from Biecek, Przemyslaw, et al. “The R package bgmm: mixture modeling with uncertain knowledge.” *Journal of Statistical Software* 47\.i03 (2012\). (CC BY 3\.0\) \[[https://creativecommons.org/licenses/by/3\.0/legalcode](https://creativecommons.org/licenses/by/3.0/legalcode)]).
From the figure, four main types of machine learning methods can be observed:
* **Supervised learning.** In this case, \\(100\\%\\) of the training data is labeled and the certainty of those labels is \\(100\\%\\). This is like the fruits example. For every image used to train the system, the respective label is also known and there is no uncertainty about the label. When the expected output is a category (the type of fruit), this is called **classification**. Examples of classification models (a.k.a classifiers) are decision trees, \\(k\\)\-Nearest Neighbors, Random Forest, neural networks, etc. When the output is a real number (e.g., temperature), it is called **regression**. Examples of regression models are linear regression, regression trees, neural networks, Random Forest, \\(k\\)\-Nearest Neighbors, etc. Note that some models can be used for both classification and regression. A supervised learning problem can be formalized as follows:
\\\[\\begin{equation}
f\\left(x\\right) \= y
\\tag{1\.1}
\\end{equation}\\]
where \\(f\\) is a function that maps some input data \\(x\\) (for example images) to an output \\(y\\) (types of fruits). Usually, an **algorithm** will try to *learn* the best **model** \\(f\\) given some **data** consisting of \\(n\\) pairs \\((x,y)\\) of examples. During learning, the algorithm has access to the expected output/label \\(y\\) for each input \\(x\\). At *inference time*, that is, when we want to make predictions for new examples, we can use the learned model \\(f\\) and feed it with a new input \\(x\\) to obtain the corresponding predicted value \\(y\\).
* **Semi\-supervised learning.** This is the case when the certainty of the labels is \\(100\\%\\) but not all training data are labeled. Usually, this scenario considers the case when only a very small proportion of the data is labeled. That is, the dataset contains pairs of examples of the form \\((x,y)\\) but also examples where \\(y\\) is missing \\((x,?)\\). In supervised learning, both \\(x\\) and \\(y\\) must be present. On the other hand, semi\-supervised algorithms can learn even if some examples are missing the expected output \\(y\\). This is a common situation in real life since labeling data can be expensive and time\-consuming. In the fruits example, someone needs to tag every training image manually before training a model. Semi\-supervised learning methods try to extract information also from the unlabeled data to improve the models. Examples of some semi\-supervised learning methods are self\-learning, co\-training, and tri\-training. ([Triguero, García, and Herrera 2013](#ref-trigueroselflabeled)).
* **Partially\-supervised learning.** This is a generalization that encompasses supervised and semi\-supervised learning. Here, the examples have uncertain (*soft*) labels. For example, the label of a fruit image instead of being an *‘orange’* or *‘pear’* could be a vector \\(\[0\.7, 0\.3]\\) where the first element is the probability that the image corresponds to an orange and the second element is the probability that it’s a pear. Maybe the image was not very clear, and these are the beliefs of the person tagging the images encoded as probabilities. Examples of models that can be used for partially\-supervised learning are mixture models with belief functions ([Côme et al. 2009](#ref-comelearning)) and neural networks.
* **Unsupervised learning.** This is the extreme case when none of the training examples have a label. That is, the dataset only has pairs \\((x,?)\\). Now, you may be wondering: If there are no labels, is it possible to extract information from these data? The answer is *yes*. Imagine you have fruit images with no labels. What you could try to do is to automatically group them into meaningful categories/groups. The categories could be the types of fruits themselves, i.e., trying to form groups in which images within the same category belong to the same type. In the fruits example, we could infer the true types by visually inspecting the images, but in many cases, visual inspection is difficult and the formed groups may not have an easy interpretation, but still, they can be very useful and can be used as a preprocessing step (like in vector quantization). These types of algorithms that find groups (hierarchical groups in some cases) are called **clustering methods**. Examples of clustering methods are \\(k\\)\-means, \\(k\\)\-medoids, and hierarchical clustering. Clustering algorithms are not the only unsupervised learning methods. Association rules, word embeddings, and autoencoders are examples of other unsupervised learning methods. *Note:* Some people may claim that word embeddings and autoencoders are not fully unsupervised methods but for practical purposes, this is not relevant.
Additionally, there is another type of machine learning called **Reinforcement Learning (RL)** which has substantial differences from the previous ones. This type of learning does not rely on example data as the previous ones but on stimuli from an agent’s environment. At any given point in time, an agent can perform an action which will lead it to a new state where a *reward* is collected. The aim is to learn the sequence of actions that maximize the reward. This type of learning is not covered in this book. A good introduction to the topic can be consulted here[2](#fn2).
This book will mainly cover supervised learning problems and more specifically, classification problems. For example, given a set of wearable sensor readings, we want to predict contextual information about a given user such as location, current activity, mood, and so on. Unsupervised learning methods (clustering and association rules) will be covered in chapter [6](unsupervised.html#unsupervised) and autoencoders are introduced in chapter [10](abnormalbehaviors.html#abnormalbehaviors).
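As a quick illustration of the unsupervised case, the following minimal sketch clusters the built\-in `iris` measurements with \\(k\\)\-means. The labels are never used for training; the comparison with the true species at the end is only for inspection:
```
# Group the four iris measurements into 3 clusters (no labels used).
set.seed(1234)
km <- kmeans(iris[, 1:4], centers = 3)
# Compare the discovered clusters with the true species.
table(km$cluster, iris$Species)
```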
1\.4 Terminology
----------------
This section introduces some basic terminology that will be helpful for the rest of the book.
### 1\.4\.1 Tables
Since data is the most important ingredient in machine learning, let’s start with some related terms. First, data needs to be stored/structured so it can be easily manipulated and processed. Most of the time, datasets will be stored as *tables* or in R terminology, *data frames*. Figure [1\.5](intro.html#fig:terminology1) shows the classic `mtcars` dataset[3](#fn3) stored in a data frame.
FIGURE 1\.5: Table/Data frame components. Source: Data from the 1974 Motor Trend US magazine.
The columns represent *variables* and the rows represent *examples*, also known as *instances* or *data points*. In this table, there are \\(5\\) variables: *mpg*, *cyl*, *disp*, *hp*, and the *model* (the first column). In this example, the first column does not have a name, but it is still a variable. Each row represents a specific car model with its values per variable. In machine learning terminology, rows are more commonly called *instances*, whereas in statistics they are often called *data points* or *observations*. Here, those terms will be used interchangeably.
Figure [1\.6](intro.html#fig:terminology2) shows a data frame for the `iris` dataset which consists of different kinds of plants ([Fisher 1936](#ref-Fisher1936)). Suppose that we are interested in predicting the *Species* based on the other variables. In machine learning terminology, the variable of interest (the one that depends on the others) is called the *class* or *label* for classification problems. For regression, it is often referred to as *y*. In statistics, it is more commonly known as the *response*, *dependent*, or *y* variable, for both classification and regression.
In machine learning terminology, the rest of the variables are called *features* or *attributes*. In statistics, they are called *predictors*, *independent variables*, or just *X*. From the context, most of the time it should be easy to identify dependent from independent variables regardless of the used terminology. The word **feature vector** is also very common in machine learning. A feature vector is just a structure containing the features of a given instance. For example, the features of the first instance in Figure [1\.6](intro.html#fig:terminology2) can be stored as a feature vector \\(\[5\.4,3\.9,1\.3,0\.4]\\) of size \\(4\\). In a programming language, this can be implemented with an array.
FIGURE 1\.6: Table/Data frame components (cont.). Source: Data from Fisher, Ronald A., “The Use of Multiple Measurements in Taxonomic Problems.” *Annals of Eugenics* 7, no. 2 (1936\): 179–88\.
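In R, such a feature vector is simply a numeric vector. A minimal sketch (the variable name `x1` is ours):
```
# Feature vector of the first instance in Figure 1.6.
x1 <- c(5.4, 3.9, 1.3, 0.4)
length(x1) # 4 features
```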
### 1\.4\.2 Variable Types
When working with machine learning algorithms, the following are the most commonly used variable types. Here, when I talk about variable types, I do not refer to programming\-language\-specific data types (int, boolean, string, etc.) but to more general types regardless of the underlying implementation for each specific programming language.
* **Categorical/Nominal:** These variables take values from a discrete set of possible values (categories). For example, the categorical variable *color* can take the values *‘red’*, *‘green’*, *‘black’*, and so on. Categorical variables do not have an ordering.
* **Numeric:** Real values such as height, weight, price, etc.
* **Integer:** Integer values such as number of rooms, age, number of floors, etc.
* **Ordinal:** Similar to categorical variables, these take their values from a set of discrete values, but they do have an ordering. For example, low \< medium \< high.
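In R, these variable types map naturally onto basic vector types. A small sketch with made\-up values:
```
color <- factor(c("red", "green", "black")) # categorical/nominal
height <- c(1.72, 1.80, 1.65) # numeric
rooms <- c(2L, 3L, 1L) # integer
level <- factor(c("low", "high", "medium"),
                levels = c("low", "medium", "high"),
                ordered = TRUE) # ordinal: low < medium < high
```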
### 1\.4\.3 Predictive Models
In machine learning terminology, a *predictive model* is a model that takes some input and produces an output. *Classifiers* and *Regressors* are predictive models. I will use the terms classifier/model and regressor/model interchangeably.
1\.5 Data Analysis Pipeline
---------------------------
Usually, the data analysis pipeline consists of several steps, which are depicted in Figure [1\.7](intro.html#fig:pipeline). This is not a complete list, but it includes the most common steps. It all starts with data collection, followed by data exploration, and so on, until the results are presented. These steps can be followed in sequence, but you can always jump from one step to another. In fact, most of the time you will end up using an iterative approach, moving forward or backward between steps as needed.
FIGURE 1\.7: Data analysis pipeline.
The big gray box at the bottom means that machine learning methods can be used in all those steps and not just during training or evaluation. For example, one may use dimensionality reduction methods in the *data exploration* phase to plot the data or classification/regression methods in the *cleaning* phase to impute missing values. Now, let’s give a brief description of each of those phases:
* **Data exploration.** This step aims to familiarize yourself and understand the data so you can make informed decisions during the following steps. Some of the tasks involved in this phase include summarizing your data, generating plots, validating assumptions, and so on. During this phase you can, for example, identify outliers, missing values, or noisy data points that can be cleaned in the next phase. Chapter [4](edavis.html#edavis) will introduce some data exploration techniques. Throughout the book, we will also use some other data exploratory methods but if you are interested in diving deeper into this topic, I recommend you check out the “Exploratory Data Analysis with R” book by Peng ([2016](#ref-peng2016)).
* **Data cleaning.** After the data exploration phase, we can remove the identified outliers, remove noisy data points, remove variables that are not needed for further computation, and so on.
* **Preprocessing.** Predictive models expect the data to be in some structured format and satisfying some constraints. For example, several models are sensitive to class imbalances, i.e., the presence of many instances with a given class but a small number of instances with other classes. In fraud detection scenarios, most of the instances will belong to the normal class but just a small proportion will be of type *‘illegal transaction’*. In this case, we may want to do some preprocessing to try to balance the dataset. Some models are also sensitive to feature\-scale differences. For example, a variable *weight* could be in kilograms but another variable *height* in centimeters. Before training a predictive model, the data needs to be prepared in such a way that the models can get the most out of it. Chapter [5](preprocessing.html#preprocessing) will present some common preprocessing steps.
* **Training and evaluation.** Once the data is preprocessed, we can proceed to train the models. We also need ways to evaluate their generalization performance on new, unseen instances. The purpose of this phase is to try and fine\-tune different models to find the one that performs best. Later in this chapter, some model evaluation techniques will be introduced.
* **Interpretation and presentation of results.** The purpose of this phase is to analyze and interpret the models’ results. We can use performance metrics derived from the evaluation phase to make informed decisions. We may also want to understand how the models work internally and how the predictions are derived.
1\.6 Evaluating Predictive Models
---------------------------------
Before showing you how to train a machine learning model, in this section, I would like to introduce the process of **evaluating** a predictive model, which is part of the data analysis pipeline. This applies to both classification and regression problems. I’m starting with this topic because it will be a recurring one every time you work with machine learning. You will also be training a lot of models, but you will need ways to validate them as well.
Once you have trained a model (with a training set), that is, finding the best function \\(f\\) that maps inputs to their corresponding outputs, you may want to estimate how good the model is at solving a particular problem when presented with examples it has never seen before (that were not part of the training set). This estimate of how good the model is at predicting the output of new examples is called the **generalization performance**.
To estimate the generalization performance of a model, a dataset is usually divided into a *train set* and a *test set*. As the name implies, the train set is used to train the model (learn its parameters) and the test set is used to evaluate/test its generalization performance. We need independent sets because when deploying models in the wild, they will be presented with new instances never seen before. By dividing the dataset into two subsets, we simulate this scenario: the test set instances were never seen by the model at training time, so the performance estimate will be more accurate than if we used the same set both to train and to evaluate. There are two main validation methods that differ in the way the dataset is divided into train and test sets: *hold\-out validation* and *k\-fold cross\-validation*.
**1\) Hold\-out validation.** This method randomly splits the dataset into train and test sets based on some predefined percentages. For example, randomly select \\(70\\%\\) of the instances and use them as the train set and use the remaining \\(30\\%\\) of the examples for the test set. This will vary depending on the application and the amount of data, but typical splits are \\(50/50\\) and \\(70/30\\) percent for the train and test sets, respectively. Figure [1\.8](intro.html#fig:holdout) shows an example of a dataset divided into \\(70/30\\).
FIGURE 1\.8: Hold\-out validation.
Then, the train set is used to train (fit) a model, and the test set to evaluate how well that model performs on new data. The performance can be measured using performance metrics such as the *accuracy* for classification problems. The accuracy is the percent of correctly classified instances.
It is good practice to estimate the performance on both the train and test sets. Usually, the performance on the train set will be better since the model was trained with that very same data. It is also common to measure the performance by computing the error instead of the accuracy, for example, the percent of misclassified instances. These are called the *train error* and the *test error* (the latter is also known as the *generalization error*). Estimating these two errors will allow you to ‘debug’ your models and understand whether they are underfitting or overfitting (more on this in the following sections).
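A minimal hold\-out split sketch in R, assuming a data frame named `dataset` with a `class` column (as in the felines example later in this chapter):
```
set.seed(1234) # for reproducibility
n <- nrow(dataset)
train.idx <- sample(n, size = floor(0.7 * n)) # 70% of the rows
trainset <- dataset[train.idx, ] # train set
testset <- dataset[-train.idx, ] # the remaining 30% as test set
# After training a model, the test accuracy would be computed as
# mean(predictions == testset$class).
```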
**2\) \\(k\\)\-fold cross\-validation.** Hold\-out validation is a good way to evaluate your models when you have a lot of data. However, in many cases, your data will be limited. In those cases, you want to make efficient use of the data. With hold\-out validation, each instance is included either in the train set or in the test set. \\(k\\)\-fold cross\-validation provides a way in which every instance takes part in both the train and test sets, thus making more efficient use of the data.
This method consists of randomly assigning each instance to one of \\(k\\) folds (subsets) of approximately the same size. Then, \\(k\\) iterations are performed. In each iteration, one of the folds is used to test the model while the remaining ones are used to train it. Each fold is used once as the test set and \\(k\-1\\) times as part of the train set. Typical values for \\(k\\) are \\(3\\), \\(5\\), and \\(10\\). In the extreme case where \\(k\\) is equal to the total number of instances in the dataset, it is called leave\-one\-out cross\-validation (LOOCV). Figure [1\.9](intro.html#fig:holdout) shows an example of cross\-validation with \\(k\=5\\).
FIGURE 1\.9: \\(k\\)\-fold cross validation with \\(k\=5\\) and \\(5\\) iterations.
The generalization performance is then computed by taking the average accuracy/error from each iteration.
Hold\-out validation is typically used when there is a lot of available data and models take significant time to be trained. On the other hand, \\(k\\)\-fold cross\-validation is used when data is limited. However, it is more computationally intensive since it requires training \\(k\\) models.
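A sketch of the fold assignment in R (again assuming a data frame `dataset`):
```
set.seed(1234)
k <- 5
# Randomly assign each instance to one of k folds of (almost) equal size.
folds <- sample(rep(1:k, length.out = nrow(dataset)))
# In iteration i, rows with folds == i form the test set
# and the remaining rows form the train set.
```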
**Validation set.**
Most predictive models require some hyperparameter tuning. For example, a \\(k\\)\-Nearest Neighbors model requires setting \\(k\\), the number of neighbors. For decision trees, one can specify the maximum allowed tree depth, among other hyperparameters. Neural networks require even more hyperparameter tuning to work properly. One may also try different preprocessing techniques and features. All those changes affect the final performance. If all those hyperparameter changes are evaluated using the test set, there is a risk of *overfitting* the model, that is, making the model very specific to this particular data. Instead of using the *test set* to fine\-tune hyperparameters, a *validation set* should be used. Thus, the dataset is randomly partitioned into three subsets: **train/validation/test** sets. The *train set* is used to train the model. The *validation set* is used to estimate the model’s performance while trying different hyperparameters and preprocessing methods. Once you are happy with your final model, you use the *test set* to assess the final generalization performance, and this is what you report. The **test set is used only once**. Remember that we want to assess performance on unseen instances. When using *k\-fold cross\-validation*, an independent test set first needs to be put aside. Hyperparameters are tuned using cross\-validation, and the test set is used at the very end, just once, to estimate the final performance.
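A sketch of a 60/20/20 train/validation/test partition (the percentages are illustrative):
```
set.seed(1234)
n <- nrow(dataset)
idx <- sample(n) # shuffle the row indices
train.idx <- idx[1:floor(0.6 * n)]
val.idx <- idx[(floor(0.6 * n) + 1):floor(0.8 * n)]
test.idx <- idx[(floor(0.8 * n) + 1):n]
```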
When working with multi\-user systems, we need to additionally take into account between\-user differences. In those situations, it is advised to perform extra validations. Those multi\-user validation techniques will be covered in chapter [9](multiuser.html#multiuser).
1\.7 Simple Classification Example
----------------------------------
simple\_model.R
So far, a lot of terminology and concepts have been introduced. In this section, we will work through a practical example that will demonstrate how most of these concepts fit together. Here you will build (from scratch) your first classification and regression models! Furthermore, you will learn how to evaluate their generalization performance.
Suppose you have a dataset that contains information about felines including their maximum speed in km/hr and their specific type. For the sake of the example, suppose that these two variables are the only ones that we can observe. As for the types, consider that there are two possibilities: *‘tiger’* and *‘leopard’*. Figure [1\.10](intro.html#fig:felinesTable) shows the first \\(10\\) instances (rows) of the dataset.
FIGURE 1\.10: First 10 instances of felines dataset.
This table has \\(2\\) variables: *speed* and *class*. The first one is a numeric variable. The second one is a categorical variable. In this case, it can take two possible values: *‘tiger’* or *‘leopard’*.
This dataset was synthetically created for illustration purposes, but I promise you that hereafter, we will mostly use real datasets!
The code to reproduce this example is available in the *‘Introduction to Behavior and Machine Learning’* folder in the script file `simple_model.R`. The script contains the code used to generate the dataset. The dataset is stored in a data frame named `dataset`. Let’s start by doing a simple exploratory analysis of the dataset. More detailed exploratory analysis methods will be presented in chapter [4](edavis.html#edavis). First, we can print the data frame dimensions with the `dim()` function.
```
# Print number of rows and columns.
dim(dataset)
#> [1] 100 2
```
The output tells us that the data frame has \\(100\\) rows and \\(2\\) columns. Now we may be interested to know how many of those correspond to *tigers*. We can use the `table()` function to get that information.
```
# Count instances in each class.
table(dataset$class)
#> leopard   tiger 
#>      50      50
```
Here we see that \\(50\\) instances are of type *‘leopard’* and also that \\(50\\) instances are of type *‘tiger’*. In fact, this is how the dataset was intentionally generated. The next thing we can do is to compute some summary statistics for each column. R already provides a very convenient function for that purpose. Yes, it is the `summary()` function.
```
# Compute some summary statistics.
summary(dataset)
#>      speed          class
#>  Min.   :42.96   leopard:50
#>  1st Qu.:48.41   tiger  :50
#>  Median :51.12
#>  Mean   :51.53
#>  3rd Qu.:53.99
#>  Max.   :61.65
```
Since *speed* is a numeric variable, `summary()` computes statistics like the mean, min, and max. The *class* variable is a factor, so it returns row counts instead. In R, categorical variables are usually encoded as factors. A factor is similar to a string, but R treats it in a special way. We already saw a hint of this in the previous code snippet, when the summary function returned class counts.
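We can make this explicit by inspecting the column’s type and its levels:

```
# The class column is a factor with two levels.
class(dataset$class)
#> [1] "factor"
levels(dataset$class)
#> [1] "leopard" "tiger"
```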
There are many other ways in which you can explore a dataset, but for now, let’s assume we already feel comfortable and that we have a good understanding of the data. Since this dataset is very simple, we won’t need to do any further data cleaning or preprocessing.
Now, imagine that you are asked to build a model that is able to predict the type of feline based on the observed attributes. In this case, the only thing we can observe is the *speed*. Our task is to build a function that maps speed measurements to classes. That is, we want to be able to predict the type of feline based on how fast it runs. According to the terminology presented in section [1\.4](intro.html#terminology), *speed* would be a **feature** variable and *class* would be the **class** variable.
Based on the types of machine learning methods presented in section [1\.3](intro.html#taxonomy), this one is a **supervised learning** problem because for each instance, the class is available. And, specifically, since we want to predict a category, this is a **classification** problem.
Before building our classification model, it would be worth plotting the data. Figure [1\.11](intro.html#fig:felineSpeeds) shows the speeds for both tigers and leopards.
FIGURE 1\.11: Feline speeds with vertical dashed lines at the means.
Here, I omitted the code for building the plot, but it is included in the script. I also added vertical dashed lines at the mean speeds for the two classes. From this plot, it seems that leopards are faster than tigers (with some exceptions). One thing we can note is that the data points are grouped around the mean values of their corresponding classes. That is, most of the tiger data points are closer to the mean speed for tigers and the same can be observed for leopards. Of course, there are some exceptions where an instance is closer to the mean of the opposite class. This could be because some tigers may be as fast as leopards. Some leopards may also be slower than the average, maybe because they are newborns or they are old. Unfortunately, we do not have more information, so the best we can do is use our single feature *speed*. We can use these insights to come up with a simple model that discriminates between the two classes based on this single feature variable.
One thing we can do for any new instance we want to classify is to compute its distance to the ‘center’ of each class and predict the class that is the closest one. In this case, the center is the mean value. We can formally define our model as the set of \\(n\\) centrality measures where \\(n\\) is the number of classes (\\(2\\) in our example).
\\\[\\begin{equation}
M \= \\{\\mu\_1,\\dots ,\\mu\_n\\}
\\tag{1\.2}
\\end{equation}\\]
Those centrality measures (the class means in this particular case) are called the **parameters** of the model. Training a model consists of finding the optimal parameters that will allow us to achieve the best performance on new instances that were not part of the training data. In most cases, we will need an **algorithm** to find those parameters. In our example, the algorithm consists of simply computing the mean speed for each class. That is, for each class, sum all the corresponding speeds and divide the result by the number of data points that belong to that class.
Once those parameters are found, we can start making predictions on new data points. This is called *inference* or *prediction*. In this case, when a new data point arrives, we can predict its class by computing its distance to each of the \\(n\\) centrality measures in \\(M\\) and return the class of the closest one.
The following function implements the training part of our model.
```
# Define a simple classifier that learns
# a centrality measure for each class.
simple.model.train <- function(data, centrality = mean){
  # Store unique classes.
  classes <- unique(data$class)
  # Define an array to store the learned parameters.
  params <- numeric(length(classes))
  # Make this a named array.
  names(params) <- classes
  # Iterate through each class and compute its centrality measure.
  for(c in classes){
    # Filter instances by class.
    tmp <- data[which(data$class == c),]
    # Compute the centrality measure.
    centrality.measure <- centrality(tmp$speed)
    # Store the centrality measure for this class.
    params[c] <- centrality.measure
  }
  return(params)
}
```
The first argument is the training data and the second argument is the centrality function we want to use (the mean, by default). This function iterates through each class, computes the centrality measure based on the speed, and stores the results in a named array called `params`, which is then returned at the end.
Most of the time, training a model involves feeding it with the training data and any additional **hyperparameters** specific to each model. In this case, the centrality measure is a hyperparameter and here, we set it to be the *mean*.
The difference between **parameters** and **hyperparameters** is that the former are learned during training. The **hyperparameters** are settings specific to each model that can be defined before the actual training starts.
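For instance, nothing stops us from trying a different centrality measure. The following (hypothetical) variant trains the model with the median instead of the mean:

```
# Train a variant of the model using the median
# as the centrality hyperparameter.
params.med <- simple.model.train(dataset, centrality = median)
```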
Now that we have a function that performs the training, we need another one that performs the actual inference or prediction on new data points. Let’s call this one `simple.classifier.predict()`. Its first argument is a data frame with the instances we want to get predictions for. The second argument is the named vector of parameters learned during training. This function will return an array with the predicted class for each instance in `newdata`.
```
# Define a function that predicts a class
# based on the learned parameters.
simple.classifier.predict <- function(newdata, params){
  # Variable to store the predictions of
  # each instance in newdata.
  predictions <- NULL
  # Iterate instances in newdata.
  for(i in 1:nrow(newdata)){
    instance <- newdata[i,]
    # Predict the name of the class whose
    # centrality measure is closest.
    pred <- names(which.min(abs(instance$speed - params)))
    predictions <- c(predictions, pred)
  }
  return(predictions)
}
```
This function iterates through each row, computes the distance to each centrality measure, and returns the name of the class that was the closest one. The distance computation is done with the following line of code:
```
pred <- names(which.min(abs(instance$speed - params)))
```
First, it computes the absolute difference between the speed and each centrality measure stored in `params`, and then it returns the name of the class with the smallest difference.
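To see this in action, here is a small standalone example with a hypothetical speed of \\(52\\) km/hr and made\-up parameter values:

```
# Hypothetical parameter values for illustration.
params.demo <- c(tiger = 48.88, leopard = 54.58)
# Absolute distance from a speed of 52 km/hr to each class mean.
abs(52 - params.demo)
#> tiger leopard 
#>  3.12    2.58
# The minimum distance corresponds to 'leopard'.
names(which.min(abs(52 - params.demo)))
#> [1] "leopard"
```

Now that we have defined the training and prediction procedures, we are ready to test our classifier!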
In section [1\.6](intro.html#trainingeval), two evaluation methods were presented. *Hold\-out* and *k\-fold cross\-validation*. These methods allow you to estimate how your model will perform on new data. Let’s start with *hold\-out validation*.
First, we need to split the data into two independent sets. We will use \\(70\\%\\) of the data to train our classifier and the remaining \\(30\\%\\) to test it. The following code splits `dataset` into a `trainset` and `testset`.
```
# Percent to be used as training data.
pctTrain <- 0.7
# Set seed for reproducibility.
set.seed(123)
idxs <- sample(nrow(dataset),
               size = nrow(dataset) * pctTrain,
               replace = FALSE)
trainset <- dataset[idxs,]
testset <- dataset[-idxs,]
```
The `sample()` function was used to select integer numbers at random from \\(1\\) to \\(n\\), where \\(n\\) is the total number of data points in `dataset`. These randomly selected data points are the ones that will go to the train set. The `size` argument tells the function to return \\(70\\) numbers which correspond to \\(70\\%\\) of the total since `dataset` has \\(100\\) instances.
The last argument `replace` is set to `FALSE` because we do not want repeated instances. The ‘\-’ symbol in `dataset[-idxs,]` is used to select everything that is not in the train set. This ensures that any instance only belongs to either the train or the test set. **We don’t want an instance to be copied into both sets.**
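As a quick optional sanity check (a small addition, not in the original script), we can verify that the two sets are disjoint and together cover the whole dataset:

```
# No row appears in both sets.
length(intersect(rownames(trainset), rownames(testset)))
#> [1] 0
# Together, the two sets cover all instances.
nrow(trainset) + nrow(testset) == nrow(dataset)
#> [1] TRUE
```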
Now it’s time to test our functions. We can train our model using the `trainset` by calling our previously defined function `simple.model.train()`.
```
# Train the model using the trainset.
params <- simple.model.train(trainset, mean)
# Print the learned parameters.
print(params)
#>    tiger  leopard 
#> 48.88246 54.58369
```
After training the model, we print the learned parameters. In this case, the mean for *tiger* is \\(48\.88\\) and for *leopard*, it is \\(54\.58\\). With these parameters, we can start making predictions on our test set! We pass the test set and the newly\-learned parameters to our function `simple.classifier.predict()`.
```
# Predict classes on the test set.
test.predictions <- simple.classifier.predict(testset, params)
# Display first predictions.
head(test.predictions)
#> [1] "tiger" "tiger" "leopard" "tiger" "tiger" "leopard"
```
Our predict function returns predictions for each instance in the test set. We can use the `head()` function to print the first predictions. The first two instances were classified as tigers, the third one as leopard, and so on.
But how good are those predictions? Since we know what the true classes are (also known as **ground truth**) in our test set, we can compute the performance. In this case, we will compute the accuracy, which is the percentage of correct classifications. Note that we did not use the class information when making predictions, we only used the *speed*. We pretended that we didn’t have the true class. We will use the true class only to evaluate the model’s performance.
```
# Compute test accuracy.
sum(test.predictions == as.character(testset$class)) /
  nrow(testset)
#> [1] 0.8333333
```
We can compute the accuracy by counting how many predictions were equal to the true classes and dividing that count by the total number of points in the test set. In this case, the test accuracy was \\(83\.3\\%\\). **Congratulations! You have trained and evaluated your first classifier.**
It is also a good idea to compute the performance on the same train set that was used to train the model.
```
# Compute train accuracy.
train.predictions <- simple.classifier.predict(trainset, params)
sum(train.predictions == as.character(trainset$class)) /
  nrow(trainset)
#> [1] 0.8571429
```
The *train accuracy* was \\(85\.7\\%\\). As expected, this was higher than the *test accuracy*. Typically, what you report is the performance on the *test set*, but we can use the performance on the *train set* to look for signs of over/under\-fitting which will be covered in the following sections.
### 1\.7\.1 \\(k\\)\-fold Cross\-validation Example
Now, let’s see how \\(k\\)\-fold cross\-validation can be implemented to test our classifier. I will choose \\(k\=5\\). This means that \\(5\\) independent sets are going to be generated and \\(5\\) iterations will be run.
```
# Number of folds.
k <- 5
set.seed(123)
# Generate random folds.
folds <- sample(k, size = nrow(dataset), replace = TRUE)
# Print how many instances ended up in each fold.
table(folds)
#> folds
#>  1  2  3  4  5
#> 21 20 23 17 19
```
Again, we can use the `sample()` function. This time we want to select random integers between \\(1\\) and \\(k\\). The total number of integers will be equal to the total number of instances \\(n\\) in the entire dataset. Note that this time we set `replace = TRUE`; since \\(k \< n\\), some numbers will necessarily be repeated. Each number represents the fold to which the corresponding instance belongs. As before, we need to make sure that each instance belongs to only one of the sets. Here, we guarantee that by assigning each instance a single fold number. We can use the `table()` function to print how many instances ended up in each fold. Here, we see that the folds will contain between \\(17\\) and \\(23\\) instances.
\\(k\\)\-fold cross\-validation consists of iterating \\(k\\) times. In each iteration, one of the folds is selected as the test set and the remaining folds are used to build the train set. Within each iteration, the model is trained with the train set and evaluated with the test set. At the end, the average accuracy across folds is reported.
```
# Variables to store accuracies on each fold.
test.accuracies <- NULL
train.accuracies <- NULL

for(i in 1:k){
  testset <- dataset[which(folds == i),]
  trainset <- dataset[which(folds != i),]
  params <- simple.model.train(trainset, mean)
  test.predictions <- simple.classifier.predict(testset, params)
  train.predictions <- simple.classifier.predict(trainset, params)
  # Accuracy on test set.
  acc <- sum(test.predictions == as.character(testset$class)) /
    nrow(testset)
  test.accuracies <- c(test.accuracies, acc)
  # Accuracy on train set.
  acc <- sum(train.predictions == as.character(trainset$class)) /
    nrow(trainset)
  train.accuracies <- c(train.accuracies, acc)
}

# Print mean accuracy across folds on the test set.
mean(test.accuracies)
#> [1] 0.829823
# Print mean accuracy across folds on the train set.
mean(train.accuracies)
#> [1] 0.8422414
```
The test mean accuracy across the \\(5\\) folds was \\(\\approx 83\\%\\) which is very similar to the accuracy estimated by hold\-out validation.
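As a small optional addition, the spread of the per\-fold accuracies gives a sense of how stable this estimate is:

```
# Standard deviation of the test accuracy across folds.
sd(test.accuracies)
```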
Note that in section [1\.6](intro.html#trainingeval) a **validation set** was also mentioned. This one is useful when you want to fine\-tune a model and/or try different preprocessing methods on your data. In case you are using hold\-out validation, you may want to split your data into three sets: train/validation/test sets. So, you train your model using the train set and estimate its performance using the validation set. Then you can fine\-tune your model. For example, here, instead of the mean as centrality measure, you can try to use the median and measure the performance again with the validation set. When you are pleased with your settings, you estimate the final performance of the model with the test set *only once*.
In the case of \\(k\\)\-fold cross\-validation, you can set aside a test set at the beginning. Then you use the remaining data to perform cross\-validation and fine\-tune your model. Within each iteration, you test the performance with the validation data. Once you are sure you are not going to do any parameter tuning, you can train a model with the train and validation sets and test the generalization performance using the test set.
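Here is a hedged sketch of that workflow using our simple model (taking the median as an alternative hyperparameter value purely for illustration):

```
# Set aside an independent test set.
set.seed(123)
idxs <- sample(nrow(dataset), size = 70)
rest <- dataset[idxs,]      # Used for cross-validation and tuning.
testset <- dataset[-idxs,]  # Used only once, at the very end.
folds <- sample(5, size = nrow(rest), replace = TRUE)
# Compare two candidate centrality measures with cross-validation.
for(centrality in list(mean = mean, median = median)){
  accs <- NULL
  for(i in 1:5){
    params <- simple.model.train(rest[folds != i,], centrality)
    preds <- simple.classifier.predict(rest[folds == i,], params)
    accs <- c(accs, mean(preds == as.character(rest$class[folds == i])))
  }
  print(mean(accs))
}
```

Whichever centrality measure performs better on the validation folds is kept; the model is then retrained on all of `rest` and evaluated on `testset` exactly once.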
One of the benefits of machine learning is that it allows us to find patterns from data, freeing us from having to program hard\-coded rules. This means more scalable and flexible code. If we now needed to add another class besides the original \\(2\\), for example a *‘jaguar’*, the only thing we would need to do is update our database and retrain our model. We don’t need to modify the internals of the algorithms. They will update themselves based on the data.
We can try this by adding a third class *‘jaguar’* to the dataset, as shown in the script `simple_model.R`. The script then trains the model as usual and performs predictions.
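A minimal sketch of how this could look (the jaguar speeds below are made up for illustration and are not the values used in the script):

```
# Hypothetical jaguar instances (values made up for illustration).
set.seed(1234)
jaguars <- data.frame(speed = rnorm(50, mean = 58, sd = 3),
                      class = "jaguar",
                      stringsAsFactors = FALSE)
# Append the new class to the dataset.
dataset3 <- rbind(transform(dataset, class = as.character(class)),
                  jaguars)
# Retrain as usual: params3 now holds three means, one per class,
# and simple.classifier.predict() works without modification.
params3 <- simple.model.train(dataset3, mean)
```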
1\.8 Simple Regression Example
------------------------------
simple\_model.R
As opposed to classification models where the aim is to predict a category, **regression models predict numeric values**. To exemplify this, we can use our felines dataset but instead try to predict *speed* based on the type of feline. The *class* column will be treated as a **feature** variable and *speed* will be the **response variable**. Since there is only one predictor, and it is categorical, the best thing we can do to implement our regression model is to predict the mean speed depending on the class.
Recall that for the classification scenario, our learned parameters consisted of the means for each class. Thus, we can reuse our training function `simple.model.train()`. All we need to do is to define a new predict function that returns the speed based on the class. This is the opposite of what we did in the classification case (return the class based on the speed).
```
# Define a function that predicts speed
# based on the type of feline.
simple.regression.predict <- function(newdata, params){
  # Variable to store the predictions of
  # each instance in newdata.
  predictions <- NULL
  # Iterate instances in newdata.
  for(i in 1:nrow(newdata)){
    instance <- newdata[i,]
    # Return the mean value of the corresponding class stored in params.
    pred <- params[which(names(params) == instance$class)]
    predictions <- c(predictions, pred)
  }
  return(predictions)
}
```
The `simple.regression.predict()` function iterates through each instance in `newdata` and returns the mean speed from `params` for the corresponding class.
Again, we can validate our model using *hold\-out validation*. The train set will contain \\(70\\%\\) of the instances and the remaining will be used as the test set.
```
pctTrain <- 0.7
set.seed(123)
idxs <- sample(nrow(dataset),
               size = nrow(dataset) * pctTrain,
               replace = FALSE)
trainset <- dataset[idxs,]
testset <- dataset[-idxs,]
# Reuse our train function.
params <- simple.model.train(trainset, mean)
print(params)
#>    tiger  leopard 
#> 48.88246 54.58369
```
Here, we reused our previous function `simple.model.train()` to learn the parameters and then print them. Then we can use those parameters to infer the speed. If a test instance belongs to the class *‘tiger’* then return \\(48\.88\\). If it is of class *‘leopard’* then return \\(54\.58\\).
```
# Predict speeds on the test set.
test.predictions <-
  simple.regression.predict(testset, params)
# Print first predictions.
head(test.predictions)
#> 48.88246 54.58369 54.58369 48.88246 48.88246 54.58369
```
Since these are numeric predictions, we cannot use accuracy as in the classification case to evaluate the performance. One way to evaluate the performance of regression models is by computing the **mean absolute error (MAE)**. This measure tells you, on average, how much each prediction deviates from its true value. It is computed by subtracting each prediction from its real value and taking the absolute value: \\(\|predicted \- realValue\|\\). This can be visualized in Figure [1\.12](intro.html#fig:maeExample). The distances between the true and predicted values are the errors and the MAE is the average of all those errors.
FIGURE 1\.12: Prediction errors.
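In symbols, if \\(y\_i\\) is the true value and \\(\\hat{y}\_i\\) the prediction for the \\(i\\)\-th of \\(n\\) instances:

\\\[
MAE \= \\frac{1}{n}\\sum \_{i\=1}^{n}\|\\hat{y}\_i \- y\_i\|
\\\]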
We can use the following code to compute the MAE:
```
# Compute mean absolute error (MAE) on the test set.
mean(abs(test.predictions - testset$speed))
#> [1] 2.562598
```
The MAE on the *test set* was \\(2\.56\\). That is, on average, our simple model had a deviation of \\(2\.56\\) km/hr with respect to the true values, which is not bad. We can also compute the MAE on the *train set*.
```
# Predict speeds on the train set.
train.predictions <-
  simple.regression.predict(trainset, params)
# Compute mean absolute error (MAE) on the train set.
mean(abs(train.predictions - trainset$speed))
#> [1] 2.16097
```
The MAE on the *train set* was \\(2\.16\\), which is better than the *test set* MAE (small MAE values are preferred). **Now, you have built, trained, and evaluated a regression model!**
This was a simple example, but it illustrates the basic idea of regression and how it differs from classification. It also shows how the performance of regression models is typically evaluated with the MAE as opposed to the accuracy used in classification. In chapter [8](deeplearning.html#deeplearning), more advanced methods such as neural networks will be introduced, which can be used to solve regression problems.
In this section, we have gone through several of the data analysis pipeline phases. We did a simple exploratory analysis of the data and then we built, trained, and validated the models to perform both classification and regression. Finally, we estimated the overall performance of the models and presented the results. Here, we coded our models from scratch, but in practice, you typically use models that have already been implemented and tested. All in all, I hope these examples have given you the feeling of how it is to work with machine learning.
1\.9 Underfitting and Overfitting
---------------------------------
From the felines classification example, we saw how we can separate two classes by computing the mean for each class. For the two\-class problem, this is equivalent to having a decision line between the two means (Figure [1\.13](intro.html#fig:boundary)). Everything to the right of this decision line will be closer to the mean that corresponds to *‘leopard’* and everything to the left to *‘tiger’*. In this case, the classification function is a vertical line. During learning, we search for the position of the line that minimizes the classification error. We implicitly estimated that position by finding the *mean values* for each of the classes.
FIGURE 1\.13: Decision line between the two classes.
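As a quick check (assuming `params` still holds the means learned during hold\-out validation), the implied decision line sits at the midpoint of the two class means, roughly \\(51\.7\\) km/hr:

```
# Midpoint between the two class means: the implied decision line.
(params["tiger"] + params["leopard"]) / 2
```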
Now, imagine that we have access not only to the *speed* but also to the felines’ *age*. This extra information could help us reduce the prediction error since age plays an important role in how fast a feline is. Figure [1\.14](intro.html#fig:underOverFitting) (left) shows what this looks like if we plot *age* on the x\-axis and *speed* on the y\-axis. Here, we can see that for both tigers and leopards, the *speed* seems to increase as *age* increases. Then, at some point, as *age* increases the *speed* begins to decrease.
Constructing a classifier with a single vertical line as we did before will not work in this \\(2\\)\-dimensional case where we have \\(2\\) predictors. Now we will need a more complex decision boundary (function) to separate the two classes. One approach would be to use a line as before, but this time we allow the line to have a slope (angle). Everything below the line is classified as *‘tiger’* and everything else as *‘leopard’*. Thus, the learning phase involves finding the *position* and *slope* of the line that achieve the smallest error.
Figure [1\.14](intro.html#fig:underOverFitting) (left) shows a possible decision line. Even though this function is more complex than a vertical line, it will still produce a lot of misclassifications (it does not clearly separate both classes). This is called **underfitting**, that is, the model is so simple that it is not able to capture the underlying data patterns.
FIGURE 1\.14: Underfitting and overfitting.
Let’s try a more complex function, for example, a curve. Figure [1\.14](intro.html#fig:underOverFitting) (middle) shows that a curve does a better job at separating the two classes with fewer misclassifications but still, \\(3\\) leopards are misclassified as tigers and \\(1\\) tiger is misclassified as leopard. Can we do better than that? Yes, just keep increasing the complexity of the decision function.
Figure [1\.14](intro.html#fig:underOverFitting) (right) shows a more complex function that was able to separate the two classes with \\(100\\%\\) accuracy or equivalently, with a \\(0\\%\\) error. However, there is a problem. This function learned how to accurately separate the *training data*, but it is likely that it will not do as well with a new *test set*. This function became so specialized with respect to this particular data that it failed to capture the overall pattern. This is called **overfitting**. In this case, the model ‘memorizes’ the train set instead of finding general patterns applicable to new unseen instances. If we were to choose a model, the best one would be the one in the middle. Even if it is not perfect on the train data, it will do better than the other models when evaluated on new test data.
Overfitting is a common problem in machine learning. One way to know if a model is overfitting is by checking whether the error on the train set is low while it is high on a new set (a test or validation set). Figure [1\.15](intro.html#fig:modelComplexity) illustrates this idea. Too\-simple models will produce a high error for both the train and validation sets (underfitting). As the complexity of the model increases, the errors on both sets are reduced. Then, at some point, the complexity of a model becomes so high that it gets too specific to the train set and fails to perform well on a new independent set (overfitting).
FIGURE 1\.15: Model complexity vs. train and validation error.
In this example, we saw how *underfitting* and *overfitting* can affect the generalization performance of a model in a classification setting but the same can occur in regression problems.
There are several methods that aim to reduce overfitting, but many of them are specific to the type of model. For example, with decision trees (covered in chapter [2](classification.html#classification)), one way to reduce overfitting is to limit their depth or build ensembles of trees (chapter [3](ensemble.html#ensemble)). Neural networks are also highly prone to overfitting since they can be very complex and have millions of parameters. In chapter [8](deeplearning.html#deeplearning), several techniques to reduce the effect of overfitting will be presented.
1\.10 Bias and Variance
-----------------------
So far, we have seen how to train predictive models and evaluate how well they do on new data (test/validation sets). The main goal is to have predictive models that have a low error rate when used with new data. Understanding the source of the error can help us make more informed decisions when building predictive models. The *test error*, also known as the *generalization error* of a predictive model can be decomposed into three components: bias, variance, and noise.
**Noise.** This component is inherent to the data itself and there is nothing we can do about it. For example, two instances may have exactly the same feature values but different labels.
**Bias.** How much the average prediction differs from the true value. Note the *average* keyword. This means that we make the assumption that an infinite (or very large) number of train sets can be generated and for each, a predictive model is trained. Then we average the predictions of all those models and see how much that average differs from the true value.
**Variance.** How much the predictions change for a given data point when training a model using a different train set each time.
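To make this concrete, here is a minimal sketch (assuming the felines `dataset` and `simple.model.train()` from the earlier example are still available) that trains the same model on different random train sets:

```
# Train the same model on three different random train sets
# and compare the learned parameters.
set.seed(123)
for(i in 1:3){
  idxs <- sample(nrow(dataset), size = 70)
  print(simple.model.train(dataset[idxs,], mean))
}
```

The learned class means (and thus the predictions) change a little from one train set to the next; the more they change, the higher the variance of the model.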
Bias and variance are closely related to underfitting and overfitting. High variance is a sign of overfitting. That is, a model is so complex that it will fit a particular train set very well. Every time it is trained with a different train set, the *train error* will be low, but it will likely generate very different predictions for the same test points and a much higher *test error*.
Figure [1\.16](intro.html#fig:overfittingVariance) illustrates the relation between overfitting and high variance with a regression problem.
FIGURE 1\.16: High variance and overfitting.
Given a feature \\(x\\), two models are trained to predict \\(y\\): i) a *complex model* (top row), and ii) a *simpler model* (bottom row). Both models are fitted with two training sets (\\(a\\) and \\(b\\)) sampled from the same distribution. The complex model fits the train data perfectly but makes very different predictions (big \\(\\Delta\\)) for the same test point when using a different train set. The simpler model does not fit the train data so well but has a smaller \\(\\Delta\\) and a lower error on the test point as well. Visually, the function (red curve) of the complex model also varies a lot across train sets whereas the shapes of the simpler model functions look very similar.
On the other hand, if a model is too simple, it will underfit causing *highly biased* results without being able to capture the input\-output relationships. This results in a high *train error* and in consequence, a high *test error* as well.
A formal definition of the error decomposition is explained in the book “The elements of statistical learning: data mining, inference, and prediction” ([Hastie, Tibshirani, and Friedman 2009](#ref-hastie2009elements)).
1\.11 Summary
-------------
In this chapter, several introductory machine learning concepts and terms were introduced and they are the basis for the methods that will be covered in the following chapters.
* **Behavior** can be defined as *“an observable activity in a human or animal”*.
* Three main reasons of why we may want to analyze behavior automatically were discussed: **react**, **understand**, and **document/archive**.
* One way to observe behavior automatically is through the use of sensors and/or data.
* **Machine Learning** consists of a set of computational algorithms that automatically find useful patterns and relationships from data.
* The three main building blocks of machine learning are: **data**, **algorithms**, and **models**.
* The main types of machine learning are **supervised learning**, **semi\-supervised learning**, **partially\-supervised learning**, and **unsupervised learning**.
* In R, data is usually stored in data frames. Data frames have variables (columns) and instances (rows). Depending on the task, variables can be **independent** or **dependent**.
* A **predictive model** is a model that takes some input and produces an output. *Classifiers* and *regressors* are predictive models.
* A data analysis pipeline consists of several tasks including data collection, cleaning, preprocessing, training/evaluation, and presentation of results.
* Model evaluation can be performed with **hold\-out validation** or **\\(k\\)\-fold cross\-validation**.
* **Overfitting** occurs when a model ‘memorizes’ the training data instead of finding useful underlying patterns.
* The test error can be decomposed into **noise**, **bias**, and **variance**.
1\.1 What Is Behavior?
----------------------
Living organisms are constantly sensing and analyzing their surrounding environment. This includes inanimate objects but also other living entities. All of this is with the objective of making decisions and taking actions, either consciously or unconsciously. If we see someone running, we will react differently depending on whether we are at a stadium or in a bank. At the same time, we may also analyze other cues such as the runner’s facial expressions, clothes, items, and the reactions of the other people around us. Based on this aggregated information, we can decide how to react and behave. All this is supported by the organisms’ sensing capabilities and decision\-making processes (the brain and/or chemical reactions). Understanding our environment and how others behave is crucial for conducting our everyday life activities and provides support for other tasks. But, **what is behavior**? The Cambridge dictionary defines behavior as:
> *“the way that a person, an animal, a substance, etc. behaves in a particular situation or under particular conditions”.*
Another definition by dictionary.com is:
> *“observable activity in a human or animal”.*
The definitions are similar and both include humans and animals. Following those definitions, this book will focus on the automatic analysis of human and animal behaviors; however, the methods can also be applied to robots and to a wide variety of problems in different domains. There are three main reasons why one may want to analyze behaviors in an automatic manner:
1. **React.** A biological or an artificial agent (or a combination of both) can take actions based on what is happening in the surrounding environment. For example, if suspicious behavior is detected in an airport, preventive actions can be triggered by security systems and the corresponding authorities. Without the possibility to automate such a detection system, it would be infeasible to implement it in practice. Just imagine trying to analyze airport traffic by hand.
2. **Understand.** Analyzing the behavior of an organism can help us to understand other associated behaviors and processes and to answer research questions. For example, Williams et al. ([2020](#ref-williams2020)) found that *Andean condors*, the heaviest soaring bird (see Figure [1\.1](intro.html#fig:condor)), only flap their wings for about \\(1\\%\\) of their total flight time. In one of the cases, a condor flew \\(\\approx 172\\) km without flapping. Those findings were the result of analyzing the birds’ behavior from data recorded by bio\-logging devices. In this book, several examples that make use of inertial devices will be studied.
FIGURE 1\.1: Andean condor. (Hugo Pédel, France, Travail personnel. Cliché réalisé dans le Parc National Argentin Nahuel Huapi, San Carlos de Bariloche, Laguna Tonchek. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
3\. **Document and Archive.** Finally, we may want to document certain behaviors for future use. It could be for evidence purposes, or perhaps it is not yet clear how the information can be used, but it may come in handy later. Based on the archived information, one could gain new knowledge in the future and use it to react (take decisions/actions), as shown in Figure [1\.2](intro.html#fig:decisionsActions). For example, we could document our nutritional habits (what we eat, how often, etc.). If there is a health issue, a specialist could use this historical information to make a more precise diagnosis and propose actions.
FIGURE 1\.2: Taking decisions from archived behaviors.
Some behaviors can be used as a proxy to understand other behaviors, states, and/or processes. For example, detecting body movement behaviors during a job interview could serve as the basis to understand stress levels. Behaviors can also be modeled as a composition of lower\-level behaviors. In chapter [7](representations.html#representations), a method called *Bag of Words* that can be used to decompose complex behaviors into a set of simpler ones will be presented.
In order to analyze and monitor behaviors, we need a way to observe them. Living organisms use their available senses such as eyesight, hearing, smell, echolocation (bats, dolphins), thermal senses (snakes, mosquitoes), etc. In the case of machines, they need *sensors* to accomplish or approximate those tasks, for example color and thermal cameras, microphones, temperature sensors, and so on.
The reduction in the size of sensors has allowed the development of more powerful wearable devices. *Wearable devices* are electronic devices that are worn by a user, usually as accessories or embedded in clothes. Examples of wearable devices are smartphones, smartwatches, fitness bracelets, actigraphy watches, etc. These devices have embedded sensors that allow them to monitor different aspects of a user such as activity levels, blood pressure, temperature, and location, to name a few. Examples of sensors that can be found in those devices are accelerometers, magnetometers, gyroscopes, heart rate, microphones, Wi\-Fi, Bluetooth, Galvanic skin response (GSR), etc.
Several of those sensors were initially used for some specific purposes. For example, accelerometers in smartphones were intended to be used for gaming or detecting the device’s orientation. Later, some people started to propose and implement new use cases such as activity recognition ([Shoaib et al. 2015](#ref-shoaib2015)) and fall detection. The magnetometer, which measures the earth’s magnetic field, was mainly used with map applications to determine the orientation of the device, but later, it was found that it can also be used for indoor location purposes ([Brena et al. 2017](#ref-brena2017)).
In general, wearable devices have been successfully applied to track different types of behaviors such as physical activity, sports activities, location, and even mental health states ([Garcia\-Ceja, Riegler, Nordgreen, et al. 2018](#ref-garciaSurvey2018)). Those devices generate a lot of raw data, but it will be our task to process and analyze it. Doing it by hand becomes impossible given the large amounts of data and their variety. In this book, several *machine learning* methods will be introduced that will allow you to extract and analyze different types of behaviors from data. The next section will begin with an introduction to machine learning. The rest of this chapter will introduce the required machine learning concepts before we start analyzing behaviors in chapter [2](classification.html#classification).
1\.2 What Is Machine Learning?
------------------------------
You can think of *machine learning* as a set of computational algorithms that *automatically* find useful patterns and relationships from data. Here, the keyword is *automatic*. When trying to solve a problem, one can hard\-code a predefined set of rules, for example, chained if\-else conditions. For instance, if we want to detect if the object in a picture is an *orange* or a *pear*, we can do something like:
```
if(number_green_pixels > 90%)
return "pear"
else
return "orange"
```
This simple rule should work well and will do the job. Imagine that now your boss tells you that the system needs to recognize *green apples* as well. Our previous rule will no longer work, and we will need to include additional rules and thresholds. On the other hand, a machine learning algorithm will automatically learn such rules based on the updated data. So, you only need to update your data with examples of *green apples* and “click” the re\-train button!
The result of *learning* is *knowledge* that the system can use to solve new instances of a problem. In this case, when you show a new image to the system, it should be able to recognize the type of fruit. Figure [1\.3](intro.html#fig:mlPhases) shows this general idea.
FIGURE 1\.3: Overall Machine Learning phases. The ‘?’ represents the new unknown object for which we want to obtain a prediction using the learned model.
For more formal definitions of machine learning, I recommend you check ([Kononenko and Kukar 2007](#ref-kononenko2007)).
Machine learning methods rely on three main building blocks:
* **Data**
* **Algorithms**
* **Models**
Every machine learning method needs **data** to learn from. For the example of the fruits, we need examples of images for each type of fruit we want to recognize. Additionally, we need their corresponding output types (labels) so the algorithm can learn how to associate each image with its corresponding label.
Not every machine learning method needs the expected output or labels (more on this in the Taxonomy section [1\.3](intro.html#taxonomy)).
Typically, an **algorithm** will use the **data** to learn a **model**. This is called the learning or training phase. The learned **model** can then be used to generate predictions when presented with new data. The data used to train the models is called the **train set**. Since we need a way to test how the model will perform once it is deployed in a real setting (in production), we also need what is known as the **test set**. The test set is used to estimate the model’s performance on data it has never seen before (more on this will be presented in section [1\.6](intro.html#trainingeval)).
1\.3 Types of Machine Learning
------------------------------
Machine learning methods can be grouped into different types. Figure [1\.4](intro.html#fig:mlTaxonomy) depicts a categorization of machine learning ‘types’. This figure is based on ([Biecek et al. 2012](#ref-biecek2012)). The \\(x\\) axis represents the certainty of the labels and the \\(y\\) axis the percent of training data that is labeled. In the previous example, the labels are the names of the fruits associated with each image.
FIGURE 1\.4: Machine learning taxonomy. (Adapted from Biecek, Przemyslaw, et al. “The R package bgmm: mixture modeling with uncertain knowledge.” *Journal of Statistical Software* 47\.i03 (2012\). (CC BY 3\.0\) \[[https://creativecommons.org/licenses/by/3\.0/legalcode](https://creativecommons.org/licenses/by/3.0/legalcode)]).
From the figure, four main types of machine learning methods can be observed:
* **Supervised learning.** In this case, \\(100\\%\\) of the training data is labeled and the certainty of those labels is \\(100\\%\\). This is like the fruits example. For every image used to train the system, the respective label is also known and there is no uncertainty about the label. When the expected output is a category (the type of fruit), this is called **classification**. Examples of classification models (a.k.a classifiers) are decision trees, \\(k\\)\-Nearest Neighbors, Random Forest, neural networks, etc. When the output is a real number (e.g., temperature), it is called **regression**. Examples of regression models are linear regression, regression trees, neural networks, Random Forest, \\(k\\)\-Nearest Neighbors, etc. Note that some models can be used for both classification and regression. A supervised learning problem can be formalized as follows:
\\\[\\begin{equation}
f\\left(x\\right) \= y
\\tag{1\.1}
\\end{equation}\\]
where \\(f\\) is a function that maps some input data \\(x\\) (for example images) to an output \\(y\\) (types of fruits). Usually, an **algorithm** will try to *learn* the best **model** \\(f\\) given some **data** consisting of \\(n\\) pairs \\((x,y)\\) of examples. During learning, the algorithm has access to the expected output/label \\(y\\) for each input \\(x\\). At *inference time*, that is, when we want to make predictions for new examples, we can use the learned model \\(f\\) and feed it with a new input \\(x\\) to obtain the corresponding predicted value \\(y\\).
* **Semi\-supervised learning.** This is the case when the certainty of the labels is \\(100\\%\\) but not all training data are labeled. Usually, this scenario considers the case when only a very small proportion of the data is labeled. That is, the dataset contains pairs of examples of the form \\((x,y)\\) but also examples where \\(y\\) is missing \\((x,?)\\). In supervised learning, both \\(x\\) and \\(y\\) must be present. On the other hand, semi\-supervised algorithms can learn even if some examples are missing the expected output \\(y\\). This is a common situation in real life since labeling data can be expensive and time\-consuming. In the fruits example, someone needs to tag every training image manually before training a model. Semi\-supervised learning methods try to extract information also from the unlabeled data to improve the models. Examples of some semi\-supervised learning methods are self\-learning, co\-training, and tri\-training. ([Triguero, García, and Herrera 2013](#ref-trigueroselflabeled)).
* **Partially\-supervised learning.** This is a generalization that encompasses supervised and semi\-supervised learning. Here, the examples have uncertain (*soft*) labels. For example, the label of a fruit image instead of being an *‘orange’* or *‘pear’* could be a vector \\(\[0\.7, 0\.3]\\) where the first element is the probability that the image corresponds to an orange and the second element is the probability that it’s a pear. Maybe the image was not very clear, and these are the beliefs of the person tagging the images encoded as probabilities. Examples of models that can be used for partially\-supervised learning are mixture models with belief functions ([Côme et al. 2009](#ref-comelearning)) and neural networks.
* **Unsupervised learning.** This is the extreme case when none of the training examples have a label. That is, the dataset only has pairs \\((x,?)\\). Now, you may be wondering: If there are no labels, is it possible to extract information from these data? The answer is *yes*. Imagine you have fruit images with no labels. What you could try to do is to automatically group them into meaningful categories/groups. The categories could be the types of fruits themselves, i.e., trying to form groups in which images within the same category belong to the same type. In the fruits example, we could infer the true types by visually inspecting the images, but in many cases, visual inspection is difficult and the formed groups may not have an easy interpretation, but still, they can be very useful and can be used as a preprocessing step (like in vector quantization). These types of algorithms that find groups (hierarchical groups in some cases) are called **clustering methods**. Examples of clustering methods are \\(k\\)\-means, \\(k\\)\-medoids, and hierarchical clustering. Clustering algorithms are not the only unsupervised learning methods. Association rules, word embeddings, and autoencoders are examples of other unsupervised learning methods. *Note:* Some people may claim that word embeddings and autoencoders are not fully unsupervised methods but for practical purposes, this is not relevant.
Additionally, there is another type of machine learning called **Reinforcement Learning (RL)** which has substantial differences from the previous ones. This type of learning does not rely on example data as the previous ones but on stimuli from an agent’s environment. At any given point in time, an agent can perform an action which will lead it to a new state where a *reward* is collected. The aim is to learn the sequence of actions that maximize the reward. This type of learning is not covered in this book. A good introduction to the topic can be consulted here[2](#fn2).
This book will mainly cover supervised learning problems and more specifically, classification problems. For example, given a set of wearable sensor readings, we want to predict contextual information about a given user such as location, current activity, mood, and so on. Unsupervised learning methods (clustering and association rules) will be covered in chapter [6](unsupervised.html#unsupervised) and autoencoders are introduced in chapter [10](abnormalbehaviors.html#abnormalbehaviors).
1\.4 Terminology
----------------
This section introduces some basic terminology that will be helpful for the rest of the book.
### 1\.4\.1 Tables
Since data is the most important ingredient in machine learning, let’s start with some related terms. First, data needs to be stored/structured so it can be easily manipulated and processed. Most of the time, datasets will be stored as *tables* or in R terminology, *data frames*. Figure [1\.5](intro.html#fig:terminology1) shows the classic `mtcars` dataset[3](#fn3) stored in a data frame.
FIGURE 1\.5: Table/Data frame components. Source: Data from the 1974 Motor Trend US magazine.
The columns represent *variables* and the rows represent *examples* also known as *instances* or *data points*. In this table, there are \\(5\\) variables *mpg*, *cyl*, *disp*, *hp* and the *model* (the first column). In this example, the first column does not have a name, but it is still a variable. Each row represents a specific car model with its values per variable. In machine learning terminology, rows are more commonly called *instances* whereas in statistics they are often called *data points* or *observations*. Here, those terms will be used interchangeably.
Figure [1\.6](intro.html#fig:terminology2) shows a data frame for the `iris` dataset which consists of different kinds of plants ([Fisher 1936](#ref-Fisher1936)). Suppose that we are interested in predicting the *Species* based on the other variables. In machine learning terminology, the variable of interest (the one that depends on the others) is called the *class* or *label* for classification problems. For regression, it is often referred to as *y*. In statistics, it is more commonly known as the *response*, *dependent*, or *y* variable, for both classification and regression.
In machine learning terminology, the rest of the variables are called *features* or *attributes*. In statistics, they are called *predictors*, *independent variables*, or just *X*. From the context, most of the time it should be easy to identify dependent from independent variables regardless of the used terminology. The word **feature vector** is also very common in machine learning. A feature vector is just a structure containing the features of a given instance. For example, the features of the first instance in Figure [1\.6](intro.html#fig:terminology2) can be stored as a feature vector \\(\[5\.4,3\.9,1\.3,0\.4]\\) of size \\(4\\). In a programming language, this can be implemented with an array.
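For example, a minimal sketch in R using those same four values:

```
# A feature vector stored as a numeric vector (array) in R.
fv <- c(5.4, 3.9, 1.3, 0.4)
length(fv)
#> [1] 4
```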
FIGURE 1\.6: Table/Data frame components (cont.). Source: Data from Fisher, Ronald A., “The Use of Multiple Measurements in Taxonomic Problems.” *Annals of Eugenics* 7, no. 2 (1936\): 179–88\.
### 1\.4\.2 Variable Types
When working with machine learning algorithms, the following are the most commonly used variable types. Here, when I talk about variable types, I do not refer to programming\-language\-specific data types (int, boolean, string, etc.) but to more general types regardless of the underlying implementation for each specific programming language.
* **Categorical/Nominal:** These variables take values from a discrete set of possible values (categories). For example, the categorical variable *color* can take the values *‘red’*, *‘green’*, *‘black’*, and so on. Categorical variables do not have an ordering.
* **Numeric:** Real values such as height, weight, price, etc.
* **Integer:** Integer values such as number of rooms, age, number of floors, etc.
* **Ordinal:** Similar to categorical variables, these take their values from a set of discrete values, but they do have an ordering. For example, low \< medium \< high (see the sketch after this list).
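In R, an ordinal variable can be encoded as an *ordered factor*. A minimal sketch:

```
# Encode an ordinal variable as an ordered factor.
sizes <- factor(c("low", "high", "medium"),
                levels = c("low", "medium", "high"),
                ordered = TRUE)
# Ordered factors support order comparisons.
sizes[1] < sizes[2]
#> [1] TRUE
```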
### 1\.4\.3 Predictive Models
In machine learning terminology, a *predictive model* is a model that takes some input and produces an output. *Classifiers* and *Regressors* are predictive models. I will use the terms classifier/model and regressor/model interchangeably.
1\.5 Data Analysis Pipeline
---------------------------
Usually, the data analysis pipeline consists of several steps which are depicted in Figure [1\.7](intro.html#fig:pipeline). This is not a complete list but includes the most common steps. It all starts with the data collection. Then the data exploration and so on, until the results are presented. These steps can be followed in sequence, but you can always jump from one step to another one. In fact, most of the time you will end up using an iterative approach by going from one step to the other (forward or backward) as needed.
FIGURE 1\.7: Data analysis pipeline.
The big gray box at the bottom means that machine learning methods can be used in all those steps and not just during training or evaluation. For example, one may use dimensionality reduction methods in the *data exploration* phase to plot the data or classification/regression methods in the *cleaning* phase to impute missing values. Now, let’s give a brief description of each of those phases:
* **Data exploration.** This step aims to familiarize you with the data and help you understand it so you can make informed decisions during the following steps. Some of the tasks involved in this phase include summarizing your data, generating plots, validating assumptions, and so on. During this phase you can, for example, identify outliers, missing values, or noisy data points that can be cleaned in the next phase. Chapter [4](edavis.html#edavis) will introduce some data exploration techniques. Throughout the book, we will also use other exploratory methods, but if you are interested in diving deeper into this topic, I recommend you check out the “Exploratory Data Analysis with R” book by Peng ([2016](#ref-peng2016)).
* **Data cleaning.** After the data exploration phase, we can remove the identified outliers, remove noisy data points, remove variables that are not needed for further computation, and so on.
* **Preprocessing.** Predictive models expect the data to be in some structured format and to satisfy some constraints. For example, several models are sensitive to class imbalances, i.e., the presence of many instances of a given class but only a small number of instances of the other classes. In fraud detection scenarios, most of the instances will belong to the normal class, but just a small proportion will be of type *‘illegal transaction’*. In this case, we may want to do some preprocessing to try to balance the dataset. Some models are also sensitive to feature\-scale differences. For example, a variable *weight* could be in kilograms but another variable *height* in centimeters. Before training a predictive model, the data needs to be prepared in such a way that the models can get the most out of it. Chapter [5](preprocessing.html#preprocessing) will present some common preprocessing steps.
* **Training and evaluation.** Once the data is preprocessed, we can proceed to train the models. Furthermore, we also need ways to evaluate their generalization performance on new, unseen instances. The purpose of this phase is to try and fine\-tune different models to find the one that performs best. Later in this chapter, some model evaluation techniques will be introduced.
* **Interpretation and presentation of results.** The purpose of this phase is to analyze and interpret the models’ results. We can use performance metrics derived from the evaluation phase to make informed decisions. We may also want to understand how the models work internally and how the predictions are derived.
1\.6 Evaluating Predictive Models
---------------------------------
Before showing you how to train a machine learning model, in this section, I would like to introduce the process of **evaluating** a predictive model, which is part of the data analysis pipeline. This applies to both classification and regression problems. I’m starting with this topic because it will be a recurring one every time you work with machine learning. You will also be training a lot of models, but you will need ways to validate them as well.
Once you have trained a model (with a training set), that is, once you have found the best function \\(f\\) that maps inputs to their corresponding outputs, you may want to estimate how good the model is at solving a particular problem when presented with examples it has never seen before (examples that were not part of the training set). This estimate of how well the model predicts the output of new examples is called the **generalization performance**.
To estimate the generalization performance of a model, a dataset is usually divided into a *train set* and a *test set*. As the names imply, the train set is used to train the model (learn its parameters) and the test set is used to evaluate/test its generalization performance. We need independent sets because when deploying models in the wild, they will be presented with new instances never seen before. By dividing the dataset into two subsets, we simulate this scenario: the test set instances were never seen by the model at training time, so the performance estimate will be more accurate than if we used the same set both to train and to evaluate. There are two main validation methods that differ in the way the dataset is divided into train and test sets: *hold\-out validation* and *k\-fold cross\-validation*.
**1\) Hold\-out validation.** This method randomly splits the dataset into train and test sets based on some predefined percentages. For example, randomly select \\(70\\%\\) of the instances and use them as the train set and use the remaining \\(30\\%\\) of the examples for the test set. This will vary depending on the application and the amount of data, but typical splits are \\(50/50\\) and \\(70/30\\) percent for the train and test sets, respectively. Figure [1\.8](intro.html#fig:holdout) shows an example of a dataset divided into \\(70/30\\).
FIGURE 1\.8: Hold\-out validation.
Then, the train set is used to train (fit) a model, and the test set to evaluate how well that model performs on new data. The performance can be measured using performance metrics such as the *accuracy* for classification problems. The accuracy is the percent of correctly classified instances.
It is good practice to estimate the performance on both the train and test sets. Usually, the performance on the train set will be better since the model was trained with that very same data. It is also common to measure the performance by computing the error instead of the accuracy, for example, the percentage of misclassified instances. These are called the *train error* and the *test error* (the latter is also known as the *generalization error*). Estimating these two errors will allow you to ‘debug’ your models and understand if they are underfitting or overfitting (more on this in the following sections).
**2\) \\(k\\)\-fold cross\-validation.** Hold\-out validation is a good way to evaluate your models when you have a lot of data. However, in many cases your data will be limited, and you will want to make efficient use of it. With hold\-out validation, each instance is included either in the train or the test set. \\(k\\)\-fold cross\-validation provides a way in which instances take part in both the train and test sets, thus making more efficient use of the data.
This method consists of randomly assigning each instance to one of \\(k\\) folds (subsets) of approximately the same size. Then, \\(k\\) iterations are performed. In each iteration, one of the folds is used to test the model while the remaining ones are used to train it. Each fold is used once as the test set and \\(k\-1\\) times as part of the train set. Typical values for \\(k\\) are \\(3\\), \\(5\\), and \\(10\\). In the extreme case where \\(k\\) is equal to the total number of instances in the dataset, it is called leave\-one\-out cross\-validation (LOOCV). Figure 1\.9 shows an example of cross\-validation with \\(k\=5\\).
FIGURE 1\.9: \\(k\\)\-fold cross validation with \\(k\=5\\) and \\(5\\) iterations.
The generalization performance is then computed by taking the average accuracy/error from each iteration.
Hold\-out validation is typically used when there is a lot of available data and models take significant time to be trained. On the other hand, \\(k\\)\-fold cross\-validation is used when data is limited; however, it is more computationally intensive since it requires training \\(k\\) models.
**Validation set.**
Most predictive models require some hyperparameter tuning. For example, a \\(k\\)\-Nearest Neighbors model requires setting \\(k\\), the number of neighbors. For decision trees, one can specify the maximum allowed tree depth, among other hyperparameters. Neural networks require even more hyperparameter tuning to work properly. Also, one may try different preprocessing techniques and features. All those changes affect the final performance. If all those hyperparameter changes are evaluated using the test set, there is a risk of *overfitting* the model, that is, making the model very specific to this particular data. Instead of using the *test set* to fine\-tune parameters, a *validation set* needs to be used. Thus, the dataset is randomly partitioned into three subsets: **train/validation/test** sets. The *train set* is used to train the model. The *validation set* is used to estimate the model’s performance while trying different hyperparameters and preprocessing methods. Once you are happy with your final model, you use the *test set* to assess the final generalization performance, and this is what you report. The **test set is used only once**. Remember that we want to assess performance on unseen instances. When using *k\-fold cross validation*, first, an independent test set needs to be put aside. Hyperparameters are tuned using cross\-validation, and the test set is used at the very end and just once to estimate the final performance.
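To make the idea concrete, here is a minimal sketch of a random \\(60/20/20\\) train/validation/test split in R (the data frame name `dataset` and the percentages are illustrative; any similar split would do):

```
# Hypothetical 60/20/20 train/validation/test split.
set.seed(123)
n <- nrow(dataset)
idxs <- sample(n) # Shuffle all row indices.
trainset <- dataset[idxs[1:round(0.6 * n)],]
valset <- dataset[idxs[(round(0.6 * n) + 1):round(0.8 * n)],]
testset <- dataset[idxs[(round(0.8 * n) + 1):n],]
```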
When working with multi\-user systems, we need to additionally take into account between\-user differences. In those situations, it is advised to perform extra validations. Those multi\-user validation techniques will be covered in chapter [9](multiuser.html#multiuser).
1\.7 Simple Classification Example
----------------------------------
simple\_model.R
So far, a lot of terminology and concepts have been introduced. In this section, we will work through a practical example that will demonstrate how most of these concepts fit together. Here you will build (from scratch) your first classification and regression models! Furthermore, you will learn how to evaluate their generalization performance.
Suppose you have a dataset that contains information about felines including their maximum speed in km/hr and their specific type. For the sake of the example, suppose that these two variables are the only ones that we can observe. As for the types, consider that there are two possibilities: *‘tiger’* and *‘leopard’*. Figure [1\.10](intro.html#fig:felinesTable) shows the first \\(10\\) instances (rows) of the dataset.
FIGURE 1\.10: First 10 instances of felines dataset.
This table has \\(2\\) variables: *speed* and *class*. The first one is a numeric variable. The second one is a categorical variable. In this case, it can take two possible values: *‘tiger’* or *‘leopard’*.
This dataset was synthetically created for illustration purposes, but I promise you that hereafter, we will mostly use real datasets!
The code to reproduce this example is available in the *‘Introduction to Behavior and Machine Learning’* folder in the script file `simple_model.R`. The script contains the code used to generate the dataset. The dataset is stored in a data frame named `dataset`. Let’s start by doing a simple exploratory analysis of the dataset. More detailed exploratory analysis methods will be presented in chapter [4](edavis.html#edavis). First, we can print the data frame dimensions with the `dim()` function.
```
# Print number of rows and columns.
dim(dataset)
#> [1] 100 2
```
The output tells us that the data frame has \\(100\\) rows and \\(2\\) columns. Now we may be interested to know how many of those correspond to *tigers*. We can use the `table()` function to get that information.
```
# Count instances in each class.
table(dataset$class)
#> leopard tiger
#> 50 50
```
Here we see that \\(50\\) instances are of type *‘leopard’* and also that \\(50\\) instances are of type *‘tiger’*. In fact, this is how the dataset was intentionally generated. The next thing we can do is to compute some summary statistics for each column. R already provides a very convenient function for that purpose. Yes, it is the `summary()` function.
```
# Compute some summary statistics.
summary(dataset)
#> speed class
#> Min. :42.96 leopard:50
#> 1st Qu.:48.41 tiger :50
#> Median :51.12
#> Mean :51.53
#> 3rd Qu.:53.99
#> Max. :61.65
```
Since *speed* is a numeric variable, `summary()` computes some statistics like the mean, min, max, etc. The *class* variable is a factor. Thus, it returns row counts instead. In R, categorical variables are usually encoded as factors. It is similar to a string, but R treats factors in a special way. We can already appreciate that with the previous code snippet when the summary function returned class counts.
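We can confirm this directly (a quick check on the same `dataset`):

```
# The class column is stored as a factor with two levels.
class(dataset$class)
#> [1] "factor"
levels(dataset$class)
#> [1] "leopard" "tiger"
```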
There are many other ways in which you can explore a dataset, but for now, let’s assume we already feel comfortable and that we have a good understanding of the data. Since this dataset is very simple, we won’t need to do any further data cleaning or preprocessing.
Now, imagine that you are asked to build a model that is able to predict the type of feline based on the observed attributes. In this case, the only thing we can observe is the *speed*. Our task is to build a function that maps speed measurements to classes. That is, we want to be able to predict the type of feline based on how fast it runs. According to the terminology presented in section [1\.4](intro.html#terminology), *speed* would be a **feature** variable and *class* would be the **class** variable.
Based on the types of machine learning methods presented in section [1\.3](intro.html#taxonomy), this one is a **supervised learning** problem because for each instance, the class is available. And, specifically, since we want to predict a category, this is a **classification** problem.
Before building our classification model, it would be worth plotting the data. Figure [1\.11](intro.html#fig:felineSpeeds) shows the speeds for both tigers and leopards.
FIGURE 1\.11: Feline speeds with vertical dashed lines at the means.
Here, I omitted the code for building the plot, but it is included in the script. I also added vertical dashed lines at the mean speeds for the two classes. From this plot, it seems that leopards are faster than tigers (with some exceptions). One thing we can note is that the data points are grouped around the mean values of their corresponding classes. That is, most of the tiger data points are closer to the mean speed for tigers and the same can be observed for leopards. Of course, there are some exceptions where an instance is closer to the mean of the opposite class. This could be because some tigers may be as fast as leopards. Some leopards may also be slower than the average, maybe because they are newborns or they are old. Unfortunately, we do not have more information, so the best we can do is use our single feature *speed*. We can use these insights to come up with a simple model that discriminates between the two classes based on this single feature variable.
One thing we can do for any new instance we want to classify is to compute its distance to the ‘center’ of each class and predict the class that is the closest one. In this case, the center is the mean value. We can formally define our model as the set of \\(n\\) centrality measures where \\(n\\) is the number of classes (\\(2\\) in our example).
\\\[\\begin{equation}
M \= \\{\\mu\_1,\\dots ,\\mu\_n\\}
\\tag{1\.2}
\\end{equation}\\]
Those centrality measures (the class means in this particular case) are called the **parameters** of the model. Training a model consists of finding those optimal parameters that will allow us to achieve the best performance on new instances that were not part of the training data. In most cases, we will need an **algorithm** to find those parameters. In our example, the algorithm consists of simply computing the mean speed for each class. That is, for each class, sum all the corresponding speeds and divide them by the number of data points that belong to that class.
Once those parameters are found, we can start making predictions on new data points. This is called *inference* or *prediction*. In this case, when a new data point arrives, we can predict its class by computing its distance to each of the \\(n\\) centrality measures in \\(M\\) and return the class of the closest one.
The following function implements the training part of our model.
```
# Define a simple classifier that learns
# a centrality measure for each class.
simple.model.train <- function(data, centrality=mean){
  # Store unique classes.
  classes <- unique(data$class)
  # Define an array to store the learned parameters.
  params <- numeric(length(classes))
  # Make this a named array.
  names(params) <- classes
  # Iterate through each class and compute its centrality measure.
  for(c in classes){
    # Filter instances by class.
    tmp <- data[which(data$class == c),]
    # Compute the centrality measure.
    centrality.measure <- centrality(tmp$speed)
    # Store the centrality measure for this class.
    params[c] <- centrality.measure
  }
  return(params)
}
```
The first argument is the training data and the second argument is the centrality function we want to use (the mean, by default). This function iterates through each class, computes the centrality measure based on the speed, and stores the results in a named array called `params`, which is then returned at the end.
Most of the time, training a model involves feeding it with the training data and any additional **hyperparameters** specific to each model. In this case, the centrality measure is a hyperparameter and here, we set it to be the *mean*.
The difference between **parameters** and **hyperparameters** is that the former are learned during training. The **hyperparameters** are settings specific to each model that can be defined before the actual training starts.
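For instance, nothing stops us from training with a different centrality measure. The following one\-liner is a sketch of that idea (the median is just an alternative choice; the rest of the example keeps using the mean):

```
# Train using the median instead of the mean.
params.median <- simple.model.train(dataset, median)
```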
Now that we have a function that performs the training, we need another one that performs the actual inference or prediction on new data points. Let’s call this one `simple.classifier.predict()`. Its first argument is a data frame with the instances we want to get predictions for. The second argument is the named vector of parameters learned during training. This function will return an array with the predicted class for each instance in `newdata`.
```
# Define a function that predicts a class
# based on the learned parameters.
simple.classifier.predict <- function(newdata, params){
  # Variable to store the predictions of
  # each instance in newdata.
  predictions <- NULL
  # Iterate instances in newdata.
  for(i in 1:nrow(newdata)){
    instance <- newdata[i,]
    # Predict the name of the class whose
    # centrality measure is closest.
    pred <- names(which.min(abs(instance$speed - params)))
    predictions <- c(predictions, pred)
  }
  return(predictions)
}
```
This function iterates through each row, computes the distance to each centrality measure, and returns the name of the class that is closest. The distance computation is done with the following line of code:
```
pred <- names(which.min(abs(instance$speed - params)))
```
First, it computes the absolute difference between the speed and each centrality measure stored in `params` and then returns the class name of the minimum one.
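To make this concrete, here is a toy run of that line (the parameter values and the query speed are made up for illustration):

```
# Toy example: hypothetical learned parameters and a query speed.
params.toy <- c(tiger = 48.88, leopard = 54.58)
speed <- 53.1
abs(speed - params.toy)
#> tiger leopard
#> 4.22 1.48
names(which.min(abs(speed - params.toy)))
#> [1] "leopard"
```

Now that we have defined the training and prediction procedures, we are ready to test our classifier!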
In section [1\.6](intro.html#trainingeval), two evaluation methods were presented. *Hold\-out* and *k\-fold cross\-validation*. These methods allow you to estimate how your model will perform on new data. Let’s start with *hold\-out validation*.
First, we need to split the data into two independent sets. We will use \\(70\\%\\) of the data to train our classifier and the remaining \\(30\\%\\) to test it. The following code splits `dataset` into a `trainset` and `testset`.
```
# Percent to be used as training data.
pctTrain <- 0.7
# Set seed for reproducibility.
set.seed(123)
idxs <- sample(nrow(dataset),
               size = nrow(dataset) * pctTrain,
               replace = FALSE)
trainset <- dataset[idxs,]
testset <- dataset[-idxs,]
```
The `sample()` function was used to select integer numbers at random from \\(1\\) to \\(n\\), where \\(n\\) is the total number of data points in `dataset`. These randomly selected data points are the ones that will go to the train set. The `size` argument tells the function to return \\(70\\) numbers which correspond to \\(70\\%\\) of the total since `dataset` has \\(100\\) instances.
The last argument `replace` is set to `FALSE` because we do not want repeated instances. The ‘\-’ symbol in `dataset[-idxs,]` is used to select everything that is not in the train set. This ensures that any instance only belongs to either the train or the test set. **We don’t want an instance to be copied into both sets.**
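A quick sanity check confirms the split sizes:

```
# The two subsets should add up to the 100 original instances.
nrow(trainset)
#> [1] 70
nrow(testset)
#> [1] 30
```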
Now it’s time to test our functions. We can train our model using the `trainset` by calling our previously defined function `simple.model.train()`.
```
# Train the model using the trainset.
params <- simple.model.train(trainset, mean)
# Print the learned parameters.
print(params)
#> tiger leopard
#> 48.88246 54.58369
```
After training the model, we print the learned parameters. In this case, the mean for *tiger* is \\(48\.88\\) and for *leopard*, it is \\(54\.58\\). With these parameters, we can start making predictions on our test set! We pass the test set and the newly\-learned parameters to our function `simple.classifier.predict()`.
```
# Predict classes on the test set.
test.predictions <- simple.classifier.predict(testset, params)
# Display first predictions.
head(test.predictions)
#> [1] "tiger" "tiger" "leopard" "tiger" "tiger" "leopard"
```
Our predict function returns predictions for each instance in the test set. We can use the `head()` function to print the first predictions. The first two instances were classified as tigers, the third one as leopard, and so on.
But how good are those predictions? Since we know the true classes (also known as the **ground truth**) in our test set, we can compute the performance. In this case, we will compute the accuracy, which is the percentage of correct classifications. Note that we did not use the class information when making predictions; we only used the *speed*. We pretended that we didn’t have the true class, and we will use it only to evaluate the model’s performance.
```
# Compute test accuracy.
sum(test.predictions == as.character(testset$class)) /
  nrow(testset)
#> [1] 0.8333333
```
We can compute the accuracy by counting how many predictions were equal to the true classes and dividing by the total number of points in the test set. In this case, the test accuracy was \\(83\.3\\%\\). **Congratulations! You have trained and evaluated your first classifier.**
It is also a good idea to compute the performance on the same train set that was used to train the model.
```
# Compute train accuracy.
train.predictions <- simple.classifier.predict(trainset, params)
sum(train.predictions == as.character(trainset$class)) /
  nrow(trainset)
#> [1] 0.8571429
```
The *train accuracy* was \\(85\.7\\%\\). As expected, this was higher than the *test accuracy*. Typically, what you report is the performance on the *test set*, but we can use the performance on the *train set* to look for signs of over/under\-fitting which will be covered in the following sections.
### 1\.7\.1 \\(k\\)\-fold Cross\-validation Example
Now, let’s see how \\(k\\)\-fold cross\-validation can be implemented to test our classifier. I will choose \\(k\=5\\). This means that \\(5\\) independent sets are going to be generated and \\(5\\) iterations will be run.
```
# Number of folds.
k <- 5
set.seed(123)
# Generate random folds.
folds <- sample(k, size = nrow(dataset), replace = TRUE)
# Print how many instances ended up in each fold.
table(folds)
#> folds
#> 1 2 3 4 5
#> 21 20 23 17 19
```
Again, we can use the `sample()` function. This time we want to select random integers between \\(1\\) and \\(k\\). The total number of integers will be equal to the total number of instances \\(n\\) in the entire dataset. Note that this time we set `replace = TRUE`; since \\(k \< n\\), repeated numbers are unavoidable. Each number will represent the fold to which each instance belongs. As before, we need to make sure that each instance belongs to only one of the sets. Here, we guarantee that by assigning each instance a single fold number. We can use the `table()` function to print how many instances ended up in each fold. Here, we see that the folds will contain between \\(17\\) and \\(23\\) instances.
\\(k\\)\-fold cross\-validation consists of iterating \\(k\\) times. In each iteration, one of the folds is selected as the test set and the remaining folds are used to build the train set. Within each iteration, the model is trained with the train set and evaluated with the test set. At the end, the average accuracy across folds is reported.
```
# Variables to store accuracies on each fold.
test.accuracies <- NULL
train.accuracies <- NULL
for(i in 1:k){
  testset <- dataset[which(folds == i),]
  trainset <- dataset[which(folds != i),]
  params <- simple.model.train(trainset, mean)
  test.predictions <- simple.classifier.predict(testset, params)
  train.predictions <- simple.classifier.predict(trainset, params)
  # Accuracy on test set.
  acc <- sum(test.predictions ==
               as.character(testset$class)) /
    nrow(testset)
  test.accuracies <- c(test.accuracies, acc)
  # Accuracy on train set.
  acc <- sum(train.predictions ==
               as.character(trainset$class)) /
    nrow(trainset)
  train.accuracies <- c(train.accuracies, acc)
}
# Print mean accuracy across folds on the test set.
mean(test.accuracies)
#> [1] 0.829823
# Print mean accuracy across folds on the train set.
mean(train.accuracies)
#> [1] 0.8422414
```
The test mean accuracy across the \\(5\\) folds was \\(\\approx 83\\%\\) which is very similar to the accuracy estimated by hold\-out validation.
Note that in section [1\.6](intro.html#trainingeval) a **validation set** was also mentioned. This one is useful when you want to fine\-tune a model and/or try different preprocessing methods on your data. In case you are using hold\-out validation, you may want to split your data into three sets: train/validation/test sets. So, you train your model using the train set and estimate its performance using the validation set. Then you can fine\-tune your model. For example, here, instead of the mean as centrality measure, you can try to use the median and measure the performance again with the validation set. When you are pleased with your settings, you estimate the final performance of the model with the test set *only once*.
In the case of \\(k\\)\-fold cross\-validation, you can set aside a test set at the beginning. Then you use the remaining data to perform cross\-validation and fine\-tune your model. Within each iteration, you test the performance with the validation data. Once you are sure you are not going to do any parameter tuning, you can train a model with the train and validation sets and test the generalization performance using the test set.
One of the benefits of machine learning is that it allows us to find patterns based on data, freeing us from having to program hard\-coded rules. This means more scalable and flexible code. If, for some reason, we now needed to add another class, for example, a *‘jaguar’*, the only thing we would need to do is update our database and retrain our model. We don’t need to modify the internals of the algorithms. They will update themselves based on the data.
We can try this by adding a third class *‘jaguar’* to the dataset, as shown in the script `simple_model.R`. The script then trains the model as usual and performs predictions.
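As a rough sketch of that idea (the actual generation code lives in `simple_model.R`; the jaguar speeds below are invented for illustration only):

```
# Hypothetical: append synthetic 'jaguar' instances and retrain.
set.seed(1234)
jaguars <- data.frame(speed = rnorm(50, mean = 58, sd = 3),
                      class = "jaguar",
                      stringsAsFactors = FALSE)
dataset3 <- rbind(transform(dataset, class = as.character(class)),
                  jaguars)
# The same training function now learns three parameters, one per class.
params3 <- simple.model.train(dataset3, mean)
```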
1\.8 Simple Regression Example
------------------------------
simple\_model.R
As opposed to classification models where the aim is to predict a category, **regression models predict numeric values**. To exemplify this, we can use our felines dataset but instead try to predict *speed* based on the type of feline. The *class* column will be treated as a **feature** variable and *speed* will be the **response variable**. Since there is only one predictor, and it is categorical, the best thing we can do to implement our regression model is to predict the mean speed depending on the class.
Recall that for the classification scenario, our learned parameters consisted of the means for each class. Thus, we can reuse our training function `simple.model.train()`. All we need to do is to define a new predict function that returns the speed based on the class. This is the opposite of what we did in the classification case (return the class based on the speed).
```
# Define a function that predicts speed
# based on the type of feline.
simple.regression.predict <- function(newdata, params){
  # Variable to store the predictions of
  # each instance in newdata.
  predictions <- NULL
  # Iterate instances in newdata.
  for(i in 1:nrow(newdata)){
    instance <- newdata[i,]
    # Return the mean value of the corresponding class stored in params.
    pred <- params[which(names(params) == instance$class)]
    predictions <- c(predictions, pred)
  }
  return(predictions)
}
```
The `simple.regression.predict()` function iterates through each instance in `newdata` and returns the mean speed from `params` for the corresponding class.
Again, we can validate our model using *hold\-out validation*. The train set will contain \\(70\\%\\) of the instances and the remaining will be used as the test set.
```
pctTrain <- 0.7
set.seed(123)
idxs <- sample(nrow(dataset),
               size = nrow(dataset) * pctTrain,
               replace = FALSE)
trainset <- dataset[idxs,]
testset <- dataset[-idxs,]
# Reuse our train function.
params <- simple.model.train(trainset, mean)
print(params)
#> tiger leopard
#> 48.88246 54.58369
```
Here, we reused our previous function `simple.model.train()` to learn the parameters and then print them. Then we can use those parameters to infer the speed. If a test instance belongs to the class *‘tiger’* then return \\(48\.88\\). If it is of class *‘leopard’* then return \\(54\.58\\).
```
# Predict speeds on the test set.
test.predictions <-
  simple.regression.predict(testset, params)
# Print first predictions.
head(test.predictions)
#> 48.88246 54.58369 54.58369 48.88246 48.88246 54.58369
```
Since these are numeric predictions, we cannot use accuracy as in the classification case to evaluate the performance. One way to evaluate the performance of regression models is by computing the **mean absolute error (MAE)**. This measure tells you, on average, how much each prediction deviates from its true value. It is computed by subtracting each prediction from its real value and taking the absolute value: \\(\|predicted \- realValue\|\\). This can be visualized in Figure [1\.12](intro.html#fig:maeExample). The distances between the true and predicted values are the errors and the MAE is the average of all those errors.
FIGURE 1\.12: Prediction errors.
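Formally, for \\(n\\) test instances, the MAE can be written as follows (a standard definition, where \\(y\_i\\) is the true value of instance \\(i\\) and \\(\\hat{y}\_i\\) its prediction):

\\\[
MAE \= \\frac{1}{n}\\sum\_{i\=1}^{n}\|y\_i \- \\hat{y}\_i\|
\\]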
We can use the following code to compute the MAE:
```
# Compute mean absolute error (MAE) on the test set.
mean(abs(test.predictions - testset$speed))
#> [1] 2.562598
```
The MAE on the *test set* was \\(2\.56\\). That is, on average, our simple model had a deviation of \\(2\.56\\) km/hr with respect to the true values, which is not bad. We can also compute the MAE on the *train set*.
```
# Predict speeds on the train set.
train.predictions <-
  simple.regression.predict(trainset, params)
# Compute mean absolute error (MAE) on the train set.
mean(abs(train.predictions - trainset$speed))
#> [1] 2.16097
```
The MAE on the *train set* was \\(2\.16\\), which is better than the *test set* MAE (small MAE values are preferred). **Now, you have built, trained, and evaluated a regression model!**
This was a simple example, but it illustrates the basic idea of regression and how it differs from classification. It also shows how the performance of regression models is typically evaluated with the MAE as opposed to the accuracy used in classification. In chapter [8](deeplearning.html#deeplearning), more advanced methods such as neural networks will be introduced, which can be used to solve regression problems.
In this section, we have gone through several of the data analysis pipeline phases. We did a simple exploratory analysis of the data and then we built, trained, and validated the models to perform both classification and regression. Finally, we estimated the overall performance of the models and presented the results. Here, we coded our models from scratch, but in practice, you typically use models that have already been implemented and tested. All in all, I hope these examples have given you the feeling of how it is to work with machine learning.
1\.9 Underfitting and Overfitting
---------------------------------
From the felines classification example, we saw how we can separate two classes by computing the mean for each class. For the two\-class problem, this is equivalent to having a decision line between the two means (Figure [1\.13](intro.html#fig:boundary)). Everything to the right of this decision line will be closer to the mean that corresponds to *‘leopard’* and everything to the left to *‘tiger’*. In this case, the classification function is a vertical line. During learning, we search for the position of the line that reduces the classification error. We implicitly estimated the position of the line by finding the *mean values* for each of the classes.
FIGURE 1\.13: Decision line between the two classes.
Now, imagine that we not only have access to the *speed* but also to the felines’ *age*. This extra information could help us reduce the prediction error since age plays an important role in how fast a feline is. Figure [1\.14](intro.html#fig:underOverFitting) (left) shows what it looks like if we plot *age* on the x\-axis and *speed* on the y\-axis. Here, we can see that for both tigers and leopards, the *speed* seems to increase as *age* increases. Then, at some point, as *age* increases the *speed* begins to decrease.
Constructing a classifier with a single vertical line as we did before will not work in this \\(2\\)\-dimensional case where we have \\(2\\) predictors. Now we will need a more complex decision boundary (function) to separate the two classes. One approach would be to use a line as before but this time we allow the line to have a slope (angle). Everything below the line is classified as *‘tiger’* and everything else as *‘leopard’*. Thus, the learning phase involves finding the line’s *position* and its *slope* that achieves the smallest error.
Figure [1\.14](intro.html#fig:underOverFitting) (left) shows a possible decision line. Even though this function is more complex than a vertical line, it will still produce a lot of misclassifications (it does not clearly separate both classes). This is called **underfitting**, that is, the model is so simple that it is not able to capture the underlying data patterns.
FIGURE 1\.14: Underfitting and overfitting.
Let’s try a more complex function, for example, a curve. Figure [1\.14](intro.html#fig:underOverFitting) (middle) shows that a curve does a better job at separating the two classes with fewer misclassifications but still, \\(3\\) leopards are misclassified as tigers and \\(1\\) tiger is misclassified as leopard. Can we do better than that? Yes, just keep increasing the complexity of the decision function.
Figure [1\.14](intro.html#fig:underOverFitting) (right) shows a more complex function that was able to separate the two classes with \\(100\\%\\) accuracy or equivalently, with a \\(0\\%\\) error. However, there is a problem. This function learned how to accurately separate the *training data*, but it is likely that it will not do as well with a new *test set*. This function became so specialized with respect to this particular data that it failed to capture the overall pattern. This is called **overfitting**. In this case, the model ‘memorizes’ the train set instead of finding general patterns applicable to new unseen instances. If we were to choose a model, the best one would be the one in the middle. Even if it is not perfect on the train data, it will do better than the other models when evaluated on new test data.
Overfitting is a common problem in machine learning. One way to know if a model is overfitting is by checking if the error on the train set is low while it is high on a new set (this can be a test or validation set). Figure [1\.15](intro.html#fig:modelComplexity) illustrates this idea. Too\-simple models will produce a high error on both the train and validation sets (underfitting). As the complexity of the model increases, the errors on both sets are reduced. Then, at some point, the complexity of the model becomes so high that it gets too specific to the train set and fails to perform well on a new independent set (overfitting).
FIGURE 1\.15: Model complexity vs. train and validation error.
In this example, we saw how *underfitting* and *overfitting* can affect the generalization performance of a model in a classification setting but the same can occur in regression problems.
There are several methods that aim to reduce overfitting, but many of them are specific to the type of model. For example, with decision trees (covered in chapter [2](classification.html#classification)), one way to reduce overfitting is to limit their depth or build ensembles of trees (chapter [3](ensemble.html#ensemble)). Neural networks are also highly prone to overfitting since they can be very complex and have millions of parameters. In chapter [8](deeplearning.html#deeplearning), several techniques to reduce the effect of overfitting will be presented.
1\.10 Bias and Variance
-----------------------
So far, we have seen how to train predictive models and evaluate how well they do on new data (test/validation sets). The main goal is to have predictive models that have a low error rate when used with new data. Understanding the source of the error can help us make more informed decisions when building predictive models. The *test error*, also known as the *generalization error* of a predictive model can be decomposed into three components: bias, variance, and noise.
**Noise.** This component is inherent to the data itself and there is nothing we can do about it. For example, two instances may have the same values in their features but different labels.
**Bias.** How much the average prediction differs from the true value. Note the *average* keyword. This means that we make the assumption that an infinite (or very large) number of train sets can be generated and for each, a predictive model is trained. Then we average the predictions of all those models and see how much that average differs from the true value.
**Variance.** How much the predictions change for a given data point when training a model using a different train set each time.
Bias and variance are closely related to underfitting and overfitting. High variance is a sign of overfitting. That is, a model is so complex that it will fit a particular train set very well. Every time it is trained with a different train set, the *train error* will be low, but it will likely generate very different predictions for the same test points and a much higher *test error*.
Figure [1\.16](intro.html#fig:overfittingVariance) illustrates the relation between overfitting and high variance with a regression problem.
FIGURE 1\.16: High variance and overfitting.
Given a feature \\(x\\), two models are trained to predict \\(y\\): i) a *complex model* (top row), and ii) a *simpler model* (bottom row). Both models are fitted with two training sets (\\(a\\) and \\(b\\)) sampled from the same distribution. The complex model fits the train data perfectly but makes very different predictions (big \\(\\Delta\\)) for the same test point when using a different train set. The simpler model does not fit the train data so well but has a smaller \\(\\Delta\\) and a lower error on the test point as well. Visually, the function (red curve) of the complex model also varies a lot across train sets whereas the shapes of the simpler model functions look very similar.
On the other hand, if a model is too simple, it will underfit causing *highly biased* results without being able to capture the input\-output relationships. This results in a high *train error* and in consequence, a high *test error* as well.
A formal definition of the error decomposition is explained in the book “The elements of statistical learning: data mining, inference, and prediction” ([Hastie, Tibshirani, and Friedman 2009](#ref-hastie2009elements)).
1\.11 Summary
-------------
In this chapter, several introductory machine learning concepts and terms were introduced and they are the basis for the methods that will be covered in the following chapters.
* **Behavior** can be defined as *“an observable activity in a human or animal”*.
* Three main reasons why we may want to analyze behavior automatically were discussed: **react**, **understand**, and **document/archive**.
* One way to observe behavior automatically is through the use of sensors and/or data.
* **Machine Learning** consists of a set of computational algorithms that automatically find useful patterns and relationships from data.
* The three main building blocks of machine learning are: **data**, **algorithms**, and **models**.
* The main types of machine learning are **supervised learning**, **semi\-supervised learning**, **partially\-supervised learning**, and **unsupervised learning**.
* In R, data is usually stored in data frames. Data frames have variables (columns) and instances (rows). Depending on the task, variables can be **independent** or **dependent**.
* A **predictive model** is a model that takes some input and produces an output. *Classifiers* and *regressors* are predictive models.
* A data analysis pipeline consists of several tasks including data collection, cleaning, preprocessing, training/evaluation, and presentation of results.
* Model evaluation can be performed with **hold\-out validation** or **\\(k\\)\-fold cross\-validation**.
* **Overfitting** occurs when a model ‘memorizes’ the training data instead of finding useful underlying patterns.
* The test error can be decomposed into **noise**, **bias**, and **variance**.
Chapter 2 Predicting Behavior with Classification Models
========================================================
In the previous chapter, the concept of **classification** was introduced along with a simple example (feline\-type classification). This chapter will cover more in depth concepts on classification methods and their application to behavior analysis tasks. Moreover, additional **performance metrics** will be introduced. This chapter begins with an introduction to **\\(k\\)\-Nearest Neighbors (\\(k\\)\-NN)** which is one of the simplest classification algorithms. Then, an example of \\(k\\)\-NN applied to indoor location using Wi\-Fi signals is presented. This chapter also covers **Decision Trees** and **Naive Bayes** classifiers and how they can be used for activity recognition based on smartphone accelerometer data. After that, **Dynamic Time Warping (DTW)** (a method for aligning time series) is introduced, together with an example of how it can be used for hand gesture recognition.
2\.1 *k*\-Nearest Neighbors
---------------------------
\\(k\\)\-Nearest Neighbors (\\(k\\)\-NN) is one of the simplest classification algorithms. The predicted class for a given *query instance* is the most common class of its *k* nearest neighbors. A *query instance* is just the instance we want to make predictions on. In its most basic form, the algorithm consists of two steps:
1. Compute the distance between the *query instance* and all *training instances*.
2. Return the most common class label among the *k* nearest training instances (neighbors).
This is a type of *lazy\-learning* algorithm because all the computations take place at prediction time. There are no parameters to learn at training time! The training phase consists only of storing the training instances so they can be compared to the query instance at prediction time. The hyperparameter *k* is usually specified by the user and depends on each application. We also need to specify a *distance function* that returns small distances for similar instances and big distances for very dissimilar instances. For numeric features, the **Euclidean distance** is one of the most commonly used distance functions. The Euclidean distance between two points can be computed as follows:
\\\[\\begin{equation}
d\\left(p,q\\right) \= \\sqrt{\\sum\_{i\=1}^n{\\left(p\_i\-q\_i\\right)^2}}
\\tag{2\.1}
\\end{equation}\\]
where \\(p\\) and \\(q\\) are \\(n\\)\-dimensional feature vectors and \\(i\\) is the index to the vectors’ elements. Figure [2\.1](classification.html#fig:simpleKnn) shows the idea graphically (adapted from the \\(k\\)\-NN article[4](#fn4) in Wikipedia). The query instance is depicted with the ‘?’ symbol. If we choose \\(k\=3\\) (represented by the inner dashed circle) the predicted class is *‘square’* because there are two squares but only one circle. If \\(k\=5\\) (outer dotted circle), the predicted class is *‘circle’*.
FIGURE 2\.1: \\(k\\)\-NN example for \\(k\=3\\) (inner dashed circle) and \\(k\=5\\) (dotted outer circle). (Adapted from Antti Ajanki AnAj. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
Typical values for \\(k\\) are small odd numbers like \\(1,3,5\\) (odd values help avoid ties in two\-class problems). The \\(k\\)\-NN algorithm can also be used for regression with a small modification: instead of returning the majority class of the nearest neighbors, return the mean value of their response variable. Despite its simplicity, \\(k\\)\-NN has proved to perform really well in many tasks, including time series classification ([Xi et al. 2006](#ref-xi2006)).
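As a quick illustration, the Euclidean distance formula above translates almost directly into R (a minimal sketch; for computing whole distance matrices, base R’s `dist()` function can be used instead):

```
# Euclidean distance between two numeric feature vectors.
euclidean.distance <- function(p, q){
  sqrt(sum((p - q)^2))
}

# Example with two 4-dimensional feature vectors.
euclidean.distance(c(5.4, 3.9, 1.3, 0.4), c(5.0, 3.6, 1.4, 0.2))
#> [1] 0.5477226
```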
### 2\.1\.1 Indoor Location with Wi\-Fi Signals
`indoor_classification.R` `indoor_auxiliary.R`
You might already have experienced some trouble with geolocation services when you are inside a building. Part of this is because GPS technologies do not provide good indoor accuracy due to several sources of interference. For some applications, it would be beneficial to have accurate location estimates inside buildings, even at room level, for example, in domotics and localization services in big public places like airports or shopping malls. Good indoor location estimates can also be used in behavior analysis, such as extracting trajectory patterns.
In this section, we will implement \\(k\\)\-NN to perform indoor location in a building based on Wi\-Fi signals. For instance, we can use a smartphone to scan the nearby Wi\-Fi access points and based on this information, determine our location at room\-level. This can be formulated as a classification problem: Given a set of Wi\-Fi signals as input, predict the location where the device is located.
For this classification problem, we will use the *INDOOR LOCATION* dataset (see Appendix [B](appendixDatasets.html#appendixDatasets)) which was collected with an Android smartphone. The smartphone application scans the nearby access points and stores their information and label. The label is provided by the user and represents the room where the device is located. Several instances for every location were recorded. To generate each instance, the device scans and records the MAC address and signal strength of the nearby access points. A delay of \\(500\\) ms is set between scans. For each location, approximately \\(3\\) minutes of data were collected while the user walked in the specific room. Figure [2\.2](classification.html#fig:layoutHouse) depicts the layout of the building where the data was collected. The data has four different locations: *‘bedroomA’*, *‘bedroomB’*, *‘tvroom’*, and the *‘lobby’*. The lobby (not shown in the layout) is at the same level as bedroom A but on the first floor.
FIGURE 2\.2: Layout of the apartments building. (Adapted by permission from Springer: Lecture Notes in Computer Science, Contextualized Hand Gesture Recognition with Smartphones, Garcia\-Ceja E., Brena R., Galván\-Tejada C.E., 2014, [https://doi.org/10\.1007/978\-3\-319\-07491\-7\_13](https://doi.org/10.1007/978-3-319-07491-7_13)).
Table [2\.1](classification.html#tab:headWifi) shows the first rows of the dataset. The first column is the class. The `scanid` column is a unique identifier for the given Wi\-Fi scan (instance). To preserve privacy, MAC addresses were converted into integer values. Every instance is composed of several rows. For example, the first instance with `scanid=1` has two rows (one row per mac address). Intuitively, the same location should have similar MAC addresses across scans. From the table, we can see that at *bedroomA* access points with MAC address \\(1\\) and \\(2\\) are usually found by the device.
TABLE 2\.1: First rows of Wi\-Fi scans.
| locationid | scanid | mac | signalstrength |
| --- | --- | --- | --- |
| bedroomA | 1 | 1 | \-88\.50 |
| bedroomA | 1 | 2 | \-91\.00 |
| bedroomA | 2 | 1 | \-88\.00 |
| bedroomA | 2 | 2 | \-90\.00 |
| bedroomA | 3 | 1 | \-87\.62 |
| bedroomA | 3 | 2 | \-90\.00 |
| bedroomA | 4 | 2 | \-90\.25 |
| bedroomA | 4 | 1 | \-90\.00 |
| bedroomA | 4 | 3 | \-91\.00 |
Since each instance is composed of several rows, we will convert our data frame into a list of lists where each inner list represents a single instance with the class (`locationId`), a unique id, and a data frame with the corresponding access points. The example code can be found in the script `indoor_classification.R`.
```
# Read Wi-Fi data
df <- read.csv(datapath, stringsAsFactors = F)
# Convert data frame into a list of lists.
# Each inner list represents one instance.
dataset <- wifiScansToList(df)
# Print number of instances in the dataset.
length(dataset)
#> [1] 365
# Print the first instance.
dataset[[1]]
#> $locationId
#> [1] "bedroomA"
#>
#> $scanId
#> [1] 1
#>
#> $accessPoints
#> mac signalstrength
#> 1 1 -88.5
#> 2 2 -91.0
```
First, we read the dataset from the CSV file and store it in the data frame `df`. To make things easier, the data frame is converted into a list of lists using the auxiliary function `wifiScansToList()`, which is defined in the script `indoor_auxiliary.R`. Next, we print the number of instances in the dataset, that is, the number of lists. The dataset contains \\(365\\) instances. The number \\(365\\) is just a coincidence; the data was not collected once a day for a year but all on the same day. Next, we extract the first instance with `dataset[[1]]`. Here, we see that each instance has three pieces of information: the class (locationId), a unique id (scanId), and a set of access points stored in a data frame. The first instance has two access points with MAC addresses \\(1\\) and \\(2\\). There is also information about the signal strength, although it will not be used here.
Since we would expect that similar locations have similar MAC addresses and locations that are far away from each other have different MAC addresses, we need a distance measure that captures this notion of similarity. In this case, we cannot use the Euclidean distance on MAC addresses. Even though they were encoded as integer values, they do not represent magnitudes but unique identifiers. Each instance is composed of a set of \\(n\\) MAC addresses stored in the `accessPoints` data frame. To compute the distance between two instances (two sets) we can use the *Jaccard distance*. This distance is based on element sets:
\\\[\\begin{equation}
j\\left(A,B\\right)\=\\frac{\\left\|A\\cup B\\right\|\-\\left\|A\\cap B\\right\|}{\\left\|A\\cup B\\right\|}
\\tag{2\.2}
\\end{equation}\\]
where \\(A\\) and \\(B\\) are sets of MAC addresses. A **set** is an unordered collection of elements with no repetitions. As an example, let’s say we have two sets, \\(S\_1\\) and \\(S\_2\\):
\\\[\\begin{align\*}
S\_1\&\=\\{a,b,c,d,e\\}\\\\
S\_2\&\=\\{e,f,g,a\\}
\\end{align\*}\\]
The set \\(S\_1\\) has \\(5\\) elements (letters) and \\(S\_2\\) has \\(4\\) elements. \\(A \\cup B\\) means the **union** of the two sets and its result is the set of all elements that are either in \\(A\\) or \\(B\\). For instance, the union of \\(S\_1\\) and \\(S\_2\\) is \\(S\_1 \\cup S\_2 \= \\{a,b,c,d,e,f,g\\}\\). The \\(A \\cap B\\) denotes the **intersection** between \\(A\\) and \\(B\\), which is the set of elements that are in both \\(A\\) and \\(B\\). In our example, \\(S\_1 \\cap S\_2 \= \\{a,e\\}\\). Finally, the vertical bars \\(\|\|\\) denote the **cardinality** of a set, that is, its number of elements. The cardinality of \\(S\_1\\) is \\(\|S\_1\|\=5\\) because it has \\(5\\) elements. The cardinality of the union of the two sets is \\(\|S\_1 \\cup S\_2\|\=7\\) because this set has \\(7\\) elements.
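These set operations have direct counterparts in base R, so we can verify the example numbers:

```
S1 <- c("a", "b", "c", "d", "e")
S2 <- c("e", "f", "g", "a")
union(S1, S2)
#> [1] "a" "b" "c" "d" "e" "f" "g"
intersect(S1, S2)
#> [1] "a" "e"
length(union(S1, S2))
#> [1] 7
```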
In R, we can implement the Jaccard distance as follows:
```
# Jaccard distance between two sets.
jaccardDistance <- function(set1, set2){
  lengthUnion <- length(union(set1, set2))
  lengthIntersection <- length(intersect(set1, set2))
  d <- (lengthUnion - lengthIntersection) / lengthUnion
  return(d)
}
```
The implementation is in the script `indoor_auxiliary.R`. Now, we can try our function! Let’s compute the distance between two instances of the same class (*‘bedroomA’*).
```
# Compute jaccard distance between instances with same class:
# (bedroomA)
jaccardDistance(dataset[[1]]$accessPoints$mac,
dataset[[4]]$accessPoints$mac)
#> [1] 0.3333333
```
Now let’s try to compute the distance between instances with different classes.
```
# Jaccard distance of instances with different class:
# (bedroomA and bedroomB)
jaccardDistance(dataset[[1]]$accessPoints$mac,
dataset[[210]]$accessPoints$mac)
#> [1] 0.6666667
```
The distance between instances of the same class was \\(0\.33\\), whereas the distance between instances of different classes was \\(0\.67\\). So, our function works as expected: scans from the same location are closer to each other than scans from different locations.
In the extreme case when the sets \\(A\\) and \\(B\\) are identical, the distance will be \\(0\\). When there are no common elements in the sets, the distance will be \\(1\\). Armed with this distance metric, we can now implement the \\(k\\)\-NN classifier in R. The `knn_classifier()` implementation is in the script `indoor_auxiliary.R`. Its first argument is the dataset (the list of instances). The second argument, *k*, is the number of nearest neighbors to use, and the last two arguments are the indices of the train and test instances, respectively. These indices are pointers to the elements in the `dataset` variable.
```
knn_classifier <- function(dataset, k, trainSetIndices, testSetIndices){
groundTruth <- NULL
predictions <- NULL
for(queryInstance in testSetIndices){
distancesToQuery <- NULL
for(trainInstance in trainSetIndices){
jd <- jaccardDistance(dataset[[queryInstance]]$accessPoints$mac,
dataset[[trainInstance]]$accessPoints$mac)
distancesToQuery <- c(distancesToQuery, jd)
}
indices <- sort(distancesToQuery, index.return = TRUE)$ix
indices <- indices[1:k]
# Indices of the k nearest neighbors
nnIndices <- trainSetIndices[indices]
# Get the actual instances
nnInstances <- dataset[nnIndices]
# Get their respective classes
nnClasses <- sapply(nnInstances, function(e){e[[1]]})
prediction <- Mode(nnClasses)
predictions <- c(predictions, prediction)
groundTruth <- c(groundTruth,
dataset[[queryInstance]]$locationId)
}
return(list(predictions = predictions,
groundTruth = groundTruth))
}
```
For each instance `queryInstance` in the test set, `knn_classifier()` computes its Jaccard distance to every instance in the train set and stores those distances in `distancesToQuery`. Then, those distances are sorted in ascending order and the most common class among the first \\(k\\) elements is returned as the predicted class. The function `Mode()` returns the most common element. Finally, `knn_classifier()` returns a list with the predictions for every instance in the test set and their respective ground truth classes for evaluation.
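The actual `Mode()` implementation is in `indoor_auxiliary.R`; a minimal sketch of such a function (the real one may differ) could look like this:
```
# Return the most common element in x (a minimal sketch).
Mode <- function(x){
  uniqueValues <- unique(x)
  counts <- tabulate(match(x, uniqueValues))
  uniqueValues[which.max(counts)]
}
Mode(c("a", "b", "b"))
#> [1] "b"
```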
Now, we can try our classifier. We will use \\(70\\%\\) of the dataset as the train set and the remaining \\(30\\%\\) as the test set.
```
# Total number of instances
numberInstances <- length(dataset)
# Set seed for reproducibility
set.seed(12345)
# Split into train and test sets.
trainSetIndices <- sample(1:numberInstances,
size = round(numberInstances * 0.7),
replace = F)
testSetIndices <- (1:numberInstances)[-trainSetIndices]
```
The function `knn_classifier()` predicts the class for each test set instance and returns a list with the predictions and the ground truth classes. With this information, we can compute the *accuracy* on the test set, which is the percentage of correctly classified instances. In this example, we set \\(k\=3\\).
```
# Obtain predictions on the test set.
result <- knn_classifier(dataset,
k = 3,
trainSetIndices,
testSetIndices)
# Calculate and print accuracy.
sum(result$predictions == result$groundTruth) /
length(result$predictions)
#> [1] 0.9454545
```
Not bad! Our simple \\(k\\)\-NN algorithm achieved an accuracy of \\(94\.5\\%\\). Usually, it is a good idea to visualize the predictions to get a better understanding of the classifier’s behavior. **Confusion matrices** allow us to do exactly that. We can use the `confusionMatrix()` function from the `caret` package to generate a confusion matrix. Its first argument is a factor with the predictions and the second one is a factor with the corresponding true values. This function returns an object with several performance metrics (see next section) along with the confusion matrix. The confusion matrix itself is stored in the `table` element.
```
library(caret)
cm <- confusionMatrix(factor(result$predictions),
factor(result$groundTruth))
cm$table # Access the confusion matrix.
#> Reference
#> Prediction bedroomA bedroomB lobby tvroom
#> bedroomA 26 0 3 1
#> bedroomB 0 17 0 1
#> lobby 0 1 28 0
#> tvroom 0 0 0 33
```
The columns of the confusion matrix represent the true classes and the rows the predictions. For example, of the \\(31\\) instances of type *‘lobby’*, \\(28\\) were correctly classified as *‘lobby’* while \\(3\\) were misclassified as *‘bedroomA’*. Something I find useful is to plot the confusion matrix as proportions instead of counts (Figure [2\.3](classification.html#fig:wifiCM)). From this confusion matrix we see that for the class *‘bedroomB’*, \\(94\\%\\) of the instances were correctly classified while \\(6\\%\\) were mislabeled as *‘lobby’*. On the other hand, instances of type *‘bedroomA’* were always classified correctly.
FIGURE 2\.3: Confusion matrix for location predictions.
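One way to obtain those proportions is to normalize each column of the counts matrix, for example with the base R function `prop.table()` (the exact code that produced Figure [2\.3](classification.html#fig:wifiCM) may differ):
```
# Convert counts into column-wise proportions.
# Columns are the true classes, so the diagonal shows
# the fraction of each class correctly classified.
round(prop.table(cm$table, margin = 2), 2)
```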
A confusion matrix is a good way to analyze the classification results per class and it helps to spot weaknesses which can be used to improve the model, for example, by extracting additional features.
2\.2 Performance Metrics
------------------------
Performance metrics allow us to assess the generalization performance of a model from different angles. The most common performance metric for classification is the accuracy:
\\\[\\begin{equation}
accuracy \= \\frac{\\\# \\textrm{ correctly classified instances}}{\\textrm{total } \\\# \\textrm{ instances}}
\\tag{2\.3}
\\end{equation}\\]
In order to better understand the generalization performance of a model, it is good practice to compute several performance metrics in addition to the accuracy. Accuracy also has limitations, especially on highly imbalanced datasets. The following metrics provide different views of a model’s performance for the binary case (when there are only two classes). These metrics can be extended to the multi\-class setting using a *one vs. all* approach, that is, comparing each class against all the remaining ones.
Before introducing the other metrics, it is convenient to define some terms:
* True positives (TP): Positive examples classified as positives.
* True negatives (TN): Negative examples classified as negatives.
* False positives (FP): Negative examples misclassified as positives.
* False negatives (FN): Positive examples misclassified as negatives.
For the binary classification case, it is up to you to decide which class is the positive one. For example, if your problem is about detecting falls and you have two classes, *‘fall’* and *‘nofall’*, then considering *‘fall’* as the positive class makes sense since this is the one you are most interested in detecting. The following is a list of commonly used metrics in classification:
**Recall:** The proportion of positives that are classified as such, where \\(\\textrm{P} \= \\textrm{TP} \+ \\textrm{FN}\\) is the total number of positive examples. Alternative names for recall are: **true positive rate**, **sensitivity**, and **hit rate**. In fact, the diagonal of the confusion matrix with proportions of the indoor location example shows the recall for each class (Figure [2\.3](classification.html#fig:wifiCM)).
\\\[\\begin{equation}
recall \= \\frac{\\textrm{TP}}{\\textrm{P}}
\\tag{2\.4}
\\end{equation}\\]
**Specificity:** The proportion of negatives classified as such, where \\(\\textrm{N} \= \\textrm{TN} \+ \\textrm{FP}\\) is the total number of negative examples. It is also called the **true negative rate**.
\\\[\\begin{equation}
specificity \= \\frac{\\textrm{TN}}{\\textrm{N}}
\\tag{2\.5}
\\end{equation}\\]
**Precision:** The fraction of true positives among those classified as positives. Also known as the **positive predictive value**.
\\\[\\begin{equation}
precision \= \\frac{\\textrm{TP}}{\\textrm{TP \+ FP}}
\\tag{2\.6}
\\end{equation}\\]
**F1\-score:** This is the harmonic mean of precision and recall.
\\\[\\begin{equation}
\\textit{F1\-score} \= 2 \\cdot \\frac{\\textrm{precision} \\cdot \\textrm{recall}}{\\textrm{precision \+ recall}}
\\tag{2\.7}
\\end{equation}\\]
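To make the formulas concrete, here is a small sketch that computes the four metrics directly from raw counts; the counts are made up for illustration:
```
# Hypothetical counts for a binary problem.
TP <- 80; FP <- 10; FN <- 20; TN <- 90
recall <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
f1 <- 2 * (precision * recall) / (precision + recall)
c(recall, specificity, precision, f1)
#> [1] 0.8000000 0.9000000 0.8888889 0.8421053
```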
The `confusionMatrix()` function from the `caret` package computes several of those metrics. From our previous confusion matrix object, we can inspect those metrics by class.
```
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: bedroomA 1.0000000 0.9523810 0.8666667 0.9285714
#> Class: bedroomB 0.9444444 0.9891304 0.9444444 0.9444444
#> Class: lobby 0.9032258 0.9873418 0.9655172 0.9333333
#> Class: tvroom 0.9428571 1.0000000 1.0000000 0.9705882
```
The mean of the metrics across all classes can be computed by taking the mean for each column of the returned object:
```
colMeans(cm$byClass[,c("Recall", "Specificity", "Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.9476318 0.9822133 0.9441571 0.9442344
```
### 2\.2\.1 Confusion Matrix
As briefly introduced in the previous section, a *confusion matrix* provides a nice way to understand the model’s predictions and spot where it made mistakes. Figure [2\.4](classification.html#fig:binaryCM) shows a confusion matrix for the binary case. The columns represent the true classes and the rows the predicted classes. The **P** stands for the positive cases and the **N** for the negative ones. Each entry in the matrix corresponds to the TP, TN, FP, and FN. The TP and TN are the correct classifications whereas the FN and FP are the misclassifications.
FIGURE 2\.4: Confusion matrix for the binary case. P: positives, N: negatives.
Figure [2\.5](classification.html#fig:binaryCM2) shows a concrete example of a confusion matrix derived from a list of \\(15\\) instances with their predictions and their corresponding true values (ground truth). For example, the first element in the list is a **P** and it was correctly classified as a **P**. The eighth element is a **P** but it was misclassified as **N**. The associated confusion matrix for these ground truth and predicted classes is shown at the bottom.
There are \\(7\\) true positives and \\(3\\) true negatives. In total, \\(10\\) instances were correctly classified (TP and TN) and \\(5\\) were misclassified (FP and FN). From this matrix we can calculate the total number of positive instances as the sum of the first column, \\(\\textrm{TP}\+\\textrm{FN}\=10\\) in this case, and the total number of negative instances as the sum of the second column, \\(\\textrm{FP}\+\\textrm{TN}\=5\\). Having this information, we can compute any of the previous performance metrics: accuracy, recall, specificity, precision, and F1\-score.
FIGURE 2\.5: A concrete example of a confusion matrix for the binary case. P:positives, N:negatives.
Be aware that there is no standard that defines whether the true classes or the predicted classes go in the rows or columns, thus, you need to check for this every time you encounter a new confusion matrix.
`shiny_metrics.R` This shiny app demonstrates how different performance metrics behave when the confusion matrix values change.
2\.3 Decision Trees
-------------------
Decision trees are powerful predictive models (especially when combining several of them, see chapter [3](ensemble.html#ensemble)) used for classification and regression tasks. Here, the focus will be on classification. Each node in a tree represents a partial or final decision based on a single feature. If a node is a leaf, it represents a final decision. A leaf is simply a terminal node, i.e., it has no child nodes. Given a feature vector representing an instance, the predicted class is obtained by testing the feature values and following the tree path until a leaf is reached. Figure [2\.6](classification.html#fig:treeExample) exemplifies a query instance with an unknown class (left) and a decision tree (right). To predict the class of an unknown instance, its features are evaluated starting at the root of the tree. In this case *number\_wheels* is \\(4\\) in the query instance so we take the left path from the root. Now, we need to evaluate *weight*. This time the test is false since the weight is \\(2300\\), so we take the right path. Since this is a leaf node, the final predicted class is *‘truck’*. Usually, small trees (with small depth) are preferable because they are easier to visualize and interpret and are less prone to overfitting. The example tree has a depth of \\(2\\). Had the number of wheels been \\(2\\) instead of \\(4\\), testing the *weight* feature would not have been necessary.
FIGURE 2\.6: Example decision tree. The query instance is classified as truck by this tree.
As shown in the example, decision trees are easy to interpret and the final result can be explained by just following the path. Now let’s see how these decision trees are learned from data. Consider the following artificial *concert* dataset (Figure [2\.7](classification.html#fig:concertTable)).
FIGURE 2\.7: Concert dataset.
The first four variables are features and the last column is the class. The class is the decision whether or not we should go to a music concert based on the other variables. In this case, all variables are binary except *Price* which has three possible values: *low*, *medium*, and *high*.
* *Tired:* Indicates whether the person is tired or not.
* *Rain:* Whether it is raining or not.
* *Metal:* Indicates whether this is a heavy metal concert or not.
* *Price:* Ticket price.
* *Go:* The decision of whether to go to the music concert or not.
The main question when building a tree is which feature should be at the root (top). Once you answer this question, you may need to grow the tree by adding another feature (node) as one of the root’s children. To decide which new feature to add you need to answer the same first question: “What feature should be at the root of this subtree?”. This is a recursive definition! The tree keeps growing until you reach a leaf node, there are no more features to select from, or you have reached a predefined maximum depth.
For the *concert* dataset we need to find which is the best variable to be placed at the root. Let’s suppose we need to choose between *Price* and *Metal*. Figure [2\.8](classification.html#fig:treeAlgo1) shows these two possibilities.
FIGURE 2\.8: Two example trees with one variable split by Price (left) and Metal (right).
If we select *Price*, there are three possible subnodes, one for each value: *low*, *medium*, and *high*. If *Price* is *low* then four instances fall into this subtree (the first four from the table). For all of them, the value of *Go* is \\(1\\). If *Price* is *high*, two instances fall into this category and their *Go* value is \\(0\\); thus, if the price is high you should not go to the concert, according to this data. There are six instances for which the *Price* value is *medium*. From those, two have *Go\=1* and the remaining four have *Go\=0*. For the cases when the price is *low* or *high* we can arrive at a solution: if the price is *low* then go to the concert, and if the price is *high* then do not go. However, if the price is *medium* it is still not clear what to do since this subnode is not *pure*. That is, the labels of the instances are mixed: two with an output of \\(1\\) and four with an output of \\(0\\). In this case we can try to use another feature to decide and grow the tree, but first, let’s look at what happens if we use *Metal* as the feature at the root instead. In this case, we end up with two subsets of six instances each, and for both subnodes it is still not clear what decision to take because the outputs are mixed (Go: 3, NotGo: 3\). At this point we would need to continue growing the tree below each subnode.
Intuitively, it seems like *Price* is a better feature since its subnodes are more *pure*. Then we can use another feature to split the instances whose *Price* is *medium*, for example, the *Metal* variable. Figure [2\.9](classification.html#fig:treeAlgo2) shows how this would look. Since one of the subnodes of *Metal* is still not pure we can further split it using the *Rain* variable, for example. At that point, we cannot split any further. Note that the *Tired* variable was never used.
FIGURE 2\.9: Tree splitting example. Left: tree splits. Right: highlighted instances when splitting by Price and Metal.
So far, we have chosen the root variable based on which one looks more pure but to automate the process, we need a way to measure this *purity* in a quantitative manner. One way to do that is by using the *entropy*. *Entropy* is a measure of uncertainty from information theory. It is \\(0\\) when there is no uncertainty and, for a binary variable (using base\-\\(2\\) logarithms, as we do throughout this section), \\(1\\) when there is complete uncertainty. The entropy of a discrete variable \\(X\\) with values \\(x\_1\\dots x\_n\\) and probability mass function \\(P(X)\\) is:
\\\[\\begin{equation}
H(X) \= \-\\sum\_{i\=1}^n{P(x\_i)log P(x\_i)}
\\tag{2\.8}
\\end{equation}\\]
Take for example a fair coin with probability of heads and tails of \\(0\.5\\) each. The entropy for that coin is:
\\\[\\begin{equation\*}
H(X) \= \-\\left( (0\.5\)log(0\.5\) \+ (0\.5\)log(0\.5\) \\right) \= 1
\\end{equation\*}\\]
Since we do not know what the result will be when we flip the coin, the entropy is maximum. Now consider the extreme case when the coin is biased such that the probability of heads is \\(1\\) and the probability of tails is \\(0\\). Using the convention that \\(0 \\cdot log(0\) \= 0\\), the entropy in this case is zero:
\\\[\\begin{equation\*}
H(X) \= \-\\left( (1\)log(1\) \+ (0\)log(0\) \\right) \= 0
\\end{equation\*}\\]
If we know that the result is always going to be heads, then there is no uncertainty when the coin is flipped. The entropy of \\(p\\) positive examples and \\(n\\) negative examples is:
\\\[\\begin{equation}
H(p, n) \= \-(\\frac{p}{p\+n})log(\\frac{p}{p\+n}) \- (\\frac{n}{p\+n})log(\\frac{n}{p\+n})
\\tag{2\.9}
\\end{equation}\\]
Thus, we can use this to compute the entropy for the three possible values of *Price* with respect to the class. The positives are the instances where *Go\=1* and the negatives are the instances where *Go\=0*:
\\\[\\begin{equation\*}
H\_{price\=low}(4, 0\) \= \-(\\frac{4}{4\+0})log(\\frac{4}{4\+0}) \- (\\frac{0}{4\+0})log(\\frac{0}{4\+0}) \= 0
\\end{equation\*}\\]
\\\[\\begin{equation\*}
H\_{price\=medium}(2, 4\) \= \-(\\frac{2}{2\+4})log(\\frac{2}{2\+4}) \- (\\frac{4}{2\+4})log(\\frac{4}{2\+4}) \= 0\.918
\\end{equation\*}\\]
\\\[\\begin{equation\*}
H\_{price\=high}(0, 2\) \= \-(\\frac{0}{0\+2})log(\\frac{0}{0\+2}) \- (\\frac{2}{0\+2})log(\\frac{2}{0\+2}) \= 0
\\end{equation\*}\\]
The average of those three can be calculated by taking into account the number of corresponding instances for each value and the total number of instances (\\(12\\)):
\\\[\\begin{equation\*}
meanH(price) \= (4/12\)(0\) \+ (6/12\)(0\.918\) \+ (2/12\)(0\) \= 0\.459
\\end{equation\*}\\]
Before deciding to split on *Price*, the entropy of the entire dataset is \\(1\\) since there are six positive and six negative examples:
\\\[\\begin{equation\*}
H(6,6\) \= 1
\\end{equation\*}\\]
Now we can compute the *information gain* for *Price*: the entropy before the split minus the weighted mean entropy after the split. Intuitively, the information gain tells you how powerful this variable is at dividing the instances based on their class, that is, how much you are learning:
\\\[\\begin{equation\*}
infoGain(Price) \= 1 \- meanH(Price) \= 1 \- 0\.459 \= 0\.541
\\end{equation\*}\\]
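These computations are easy to reproduce in R. Below is a small sketch; the `entropy()` helper is ours (not from a package), it uses base\-\\(2\\) logarithms and treats \\(0 \\cdot log(0\)\\) as \\(0\\):
```
# Entropy of p positive and n negative examples (log base 2).
entropy <- function(p, n){
  probs <- c(p, n) / (p + n)
  probs <- probs[probs > 0] # Convention: 0*log(0) = 0.
  -sum(probs * log2(probs))
}
# Weighted mean entropy after splitting by Price.
meanH <- (4/12) * entropy(4, 0) +
  (6/12) * entropy(2, 4) +
  (2/12) * entropy(0, 2)
meanH
#> [1] 0.4591479
# Information gain: entropy before the split minus meanH.
entropy(6, 6) - meanH
#> [1] 0.5408521
```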
Since you want to learn fast, you want your root node to be the one with the highest information gain. For the rest of the variables the information gain is:
\\(infoGain(Tired) \= 0\\)
\\(infoGain(Rain) \= 0\.020\\)
\\(infoGain(Metal) \= 0\\)
The highest information gain is produced by *Price*, thus, it is selected as the root node. Then, the process continues recursively for each branch but excluding *Price*. Since the branches with values *low* and *high* are already done, we only need to further split *medium*. Sometimes it is not possible to reach completely pure nodes as we did with *low* and *high*. This can happen, for example, when there are no more attributes left or when two or more instances have the same feature values but different labels. In those situations the final prediction is the most common label (majority vote).
There exist many implementations of decision trees. Some implementations choose the split variables using the entropy (as shown here) while others use the Gini index, for example. Each implementation also treats numeric variables in different ways. Pruning the tree with different techniques is also common in order to reduce its size.
Some of the most common implementations are C4\.5 trees ([Quinlan 2014](#ref-quinlan2014)) and CART ([Steinberg and Colla 2009](#ref-steinberg2009)). The latter is implemented in the `rpart` R package ([Therneau and Atkinson 2019](#ref-rpart)), which will be used in the following section to build a model that predicts physical activities from smartphone sensor data.
### 2\.3\.1 Activity Recognition with Smartphones
`smartphone_activities.R`
As mentioned in the introduction, an example of behavior is an observable physical activity. We can infer what **physical activity** someone is doing by looking at her/his body movements. Observing physical activities can provide useful behavioral and contextual information about someone. This can also be used as a proxy to, for example, infer someone’s health condition by detecting deviations in activity patterns.
Nowadays, most smartphones come with a tri\-axial accelerometer sensor. This sensor measures gravitational forces from the \\(x\\), \\(y\\), and \\(z\\) axes. This information can be used to capture movement patterns from the user and automate the process of monitoring the type of physical activity being performed.
In this section, we will use decision trees to automatically classify physical activities from acceleration data. We will use the *WISDM* dataset[5](#fn5) and from now on, I will refer to it as the *SMARTPHONE ACTIVITIES* dataset. It contains acceleration recordings that were collected with a smartphone and was made available by Kwapisz, Weiss, and Moore ([2010](#ref-kwapisz2010)). The dataset has \\(6\\) different activities: *‘walking’*, *‘jogging’*, *‘walking upstairs’*, *‘walking downstairs’*, *‘sitting’* and *‘standing’*. The data were collected by \\(36\\) volunteers with an Android phone located in their pants’ pocket and with a sampling rate of \\(20\\) Hz (\\(1\\) sample every \\(50\\) milliseconds).
The dataset contains two types of files. One with the raw accelerometer data and the other one after feature extraction. Figure [2\.10](classification.html#fig:wisdmFirstLines) shows the first \\(10\\) lines of the raw accelerometer values of the first file. The first column is the id of the user that collected the data and the second column is the class. The third column is the timestamp and the remaining columns are the \\(x\\), \\(y\\), and \\(z\\) accelerometer values, respectively.
FIGURE 2\.10: First 10 lines of raw accelerometer data.
Usually, classification models are not trained with the raw data but with *feature vectors* extracted from the raw data. Feature vectors have the advantage of being more compact, thus making the learning phase more efficient. For activity recognition, the feature extraction process consists of defining a moving window of size \\(w\\) that starts at position \\(i\\). At the beginning, \\(i\\) points to the first accelerometer reading. Then, \\(n\\) statistical features are computed on the elements covered by the window, such as the mean, standard deviation, number of \\(0\\)\-crossings, etc. This produces an \\(n\\)\-dimensional feature vector and the process is repeated by moving the window \\(s\\) steps forward. Typical values of \\(s\\) are such that the overlap between the previous window position and the next one is about \\(30\\%\\) to \\(50\\%\\). An overlap of \\(0\\) is also typical, that is, \\(s \= w\\). Figure [2\.11](classification.html#fig:featureExtraction) depicts the process.
FIGURE 2\.11: Moving window for feature extraction.
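A minimal sketch of this process on a synthetic \\(1\\)\-dimensional signal is shown below. The function name, window size, step, and the two chosen features (mean and standard deviation) are illustrative; they are not the ones used by the dataset’s authors:
```
# Extract the mean and standard deviation from windows
# of size w, moving s steps forward at a time.
extractFeatures <- function(x, w, s){
  starts <- seq(1, length(x) - w + 1, by = s)
  t(sapply(starts, function(i){
    window <- x[i:(i + w - 1)]
    c(mean = mean(window), sd = sd(window))
  }))
}
set.seed(1234)
signal <- rnorm(100) # Synthetic accelerometer axis.
# Windows of 20 samples with 50% overlap (s = 10).
feats <- extractFeatures(signal, w = 20, s = 10)
dim(feats) # 9 windows, 2 features each.
#> [1] 9 2
```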
Once we have the set of feature vectors and their associated class labels, we can use them to train a classifier and make predictions on new data (Figure [2\.12](classification.html#fig:extractedFeatureVectors)).
FIGURE 2\.12: The extracted feature vectors are used to train a classifier.
For this example, we will use the file with features already extracted. The authors used windows of \\(10\\) seconds which is equivalent to \\(200\\) observations given the \\(20\\) Hz sampling rate and they used \\(0\\%\\) overlap. From each window, they extracted \\(43\\) features such as the mean, standard deviation, absolute deviations, etc.
Let’s read and print the first rows of the dataset. The script for this section is `smartphone_activities.R`. The data frame has several columns, but we only print the first five features and the class, which is stored in the last column.
```
# Read data.
df <- read.csv(datapath, stringsAsFactors = F)
# Some code to clean the dataset.
# (cleaning code not shown here).
# Print the first rows of the dataset.
head(df[, c(1:5, 40)], n = 9)
#> X0 X1 X2 X3 X4 class
#> 1 0.04 0.09 0.14 0.12 0.11 Jogging
#> 2 0.12 0.12 0.06 0.07 0.11 Jogging
#> 3 0.14 0.09 0.11 0.09 0.09 Jogging
#> 4 0.06 0.10 0.09 0.09 0.11 Walking
#> 5 0.12 0.11 0.10 0.08 0.10 Walking
#> 6 0.09 0.09 0.10 0.12 0.08 Walking
#> 7 0.12 0.12 0.12 0.13 0.15 Upstairs
#> 8 0.10 0.10 0.10 0.10 0.11 Upstairs
#> 9 0.08 0.07 0.08 0.08 0.05 Upstairs
```
Our aim is to predict the class based on all the numeric features. We will use the `rpart` package ([Therneau and Atkinson 2019](#ref-rpart)) which implements classification and regression trees. We will assess the performance of the decision tree with \\(10\\)\-fold cross\-validation. We can use the `sample()` function to generate the folds. This function will sample, with replacement, \\(n\\) integers from \\(1\\) to \\(k\\), where \\(n\\) is the number of rows in the data frame.
```
# Package with implementations of decision trees.
library(rpart)
# Set seed for reproducibility.
set.seed(1234)
# Define the number of folds.
k <- 10
# Generate folds.
folds <- sample(k, size = nrow(df), replace = TRUE)
# Print the first few values.
head(folds)
#> [1] 10 6 5 9 5 6
```
The `folds` variable stores the fold each instance belongs to. For example, the first instance belongs to fold \\(10\\), the second instance belongs to fold \\(6\\), and so on. We can now generate our test and train sets. We will iterate \\(k\=10\\) times. For each iteration \\(i\\), the test set is built using the instances that belong to fold \\(i\\) and the train set will be composed of the remaining instances (those that do not belong to fold \\(i\\)). Next, the `rpart()` function is used to train the decision tree with the train set. By default, `rpart()` performs \\(10\\)\-fold cross\-validation internally. To avoid this, we set the parameter `xval = 0`. Then, we can use the trained model to obtain the predictions on the test set with the generic `predict()` function. The ground truth classes and the predictions are stored so the performance metrics can be computed.
```
# Variable to store ground truth classes.
groundTruth <- NULL
# Variable to store the classifier's predictions.
predictions <- NULL
for(i in 1:k){
trainSet <- df[which(folds != i), ]
testSet <- df[which(folds == i), ]
# Train the decision tree
treeClassifier <- rpart(class ~ .,
trainSet, xval=0)
# Get predictions on the test set.
foldPredictions <- predict(treeClassifier,
testSet, type = "class")
predictions <- c(predictions,
as.character(foldPredictions))
groundTruth <- c(groundTruth,
as.character(testSet$class))
}
```
The first argument of the `rpart()` function is `class ~ .` which is a formula that instructs the method to use the *class* column as the class. The `~ .` means “use all the remaining columns as features”. Now, we can use the `confusionMatrix()` function to compute the performance metrics and the confusion matrix.
```
cm <- confusionMatrix(as.factor(predictions),
as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.7895903
# Print performance metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: Downstairs 0.2821970 0.9617587 0.4434524 0.3449074
#> Class: Jogging 0.9612308 0.9601898 0.9118506 0.9358898
#> Class: Sitting 0.8366013 0.9984351 0.9696970 0.8982456
#> Class: Standing 0.8983740 0.9932328 0.8632812 0.8804781
#> Class: Upstairs 0.2246835 0.9669870 0.4733333 0.3047210
#> Class: Walking 0.9360884 0.8198981 0.7642213 0.8414687
# Print overall metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity",
"Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.6898625 0.9500836 0.7376393 0.7009518
```
FIGURE 2\.13: Confusion matrix for activities’ predictions.
The overall accuracy was \\(78\.9\\%\\) and by looking at the individual performance metrics, some classes had low scores like *‘walking downstairs’* and *‘walking upstairs’*. From the confusion matrix (Figure [2\.13](classification.html#fig:activitiesTreeCM)), it can be seen that those two activities were often confused with each other but also with the *‘walking’* activity. The package `rpart.plot` ([Milborrow 2019](#ref-rpartplot)) can be used to plot the resulting tree (Figure [2\.14](classification.html#fig:activitiesTree)).
```
library(rpart.plot)
# Plot the tree from the last fold.
rpart.plot(treeClassifier, fallen.leaves = F,
shadow.col = "gray", legend.y = 1)
```
FIGURE 2\.14: Resulting decision tree.
The `fallen.leaves = F` argument prevents the leaves from being plotted at the bottom. This is useful if the tree has many nodes. Each node shows the predicted class, the predicted probability of each class, and the percentage of observations in the node. The plot also shows the feature used for each split. We can see that the *YABSOLDEV* variable is at the root, thus it produced the best split on the initial set of instances. At the root of the tree, before looking at any of the features, the predicted class is *‘Walking’*. This is because its prior probability is the highest one (\\(\\approx 0\.39\\)), that is, it’s the most common activity present in the dataset. So, if we didn’t have any other information, our best bet would be to predict the most frequent activity.
```
# Prior probabilities.
table(trainSet$class) / nrow(trainSet)
#> Downstairs Jogging Sitting Standing Upstairs Walking
#> 0.09882885 0.29607561 0.05506472 0.04705157 0.11793713 0.38504212
```
These results look promising, but they can still be improved. In the next chapter, I will show you how to improve them with *Ensemble Learning*, a method that aggregates several models.
2\.4 Naive Bayes
----------------
Naive Bayes is yet another type of classifier. This one is based on Bayes’ rule. The *naive* part of the name comes from the assumption that the features are independent of each other. In the previous section we learned that decision trees are built recursively: a feature is selected for the root, the root is split into subnodes, and so on, where how those subnodes are chosen depends on their parent node. With Naive Bayes, features don’t need information about other features, thus, the parameters for each feature can be learned in parallel.
To demonstrate how Naive Bayes works I will use the *SMARTPHONE ACTIVITIES* dataset as in the previous section. For any given *query instance*, the aim is to **predict its most likely class** based on the accelerometer features that we have observed. Let’s say we want to know the probability that the query instance belongs to the class *‘Walking’*. This can be formulated as follows:
\\\[\\begin{equation\*}
P(C\=\\textit{Walking} \| f\_1,\\dots ,f\_n).
\\end{equation\*}\\]
This reads as the conditional probability that the class is *‘Walking’* **given** the observed evidence. For each instance, the evidence that we can observe are its features \\(f\_1, \\dots ,f\_n\\). In this dataset, each instance has \\(39\\) features. If we want to estimate the most likely class, all we need to do is to compute the conditional probability for each class and return the highest one:
\\\[\\begin{equation}
y \= \\operatorname\*{arg max}\_{k \\in \\{1, \\dots ,K\\}} P(C\_k \| f\_1,\\dots ,f\_n)
\\tag{2\.10}
\\end{equation}\\]
where \\(K\\) is the total number of possible classes. The \\(\\text{arg max}\\) notation means: evaluate the right\-hand expression for every class \\(k\\) and return the \\(k\\) that resulted in the maximum probability. If instead of *arg max* we had *max* (without the *arg*), that would mean returning the actual maximum probability instead of the class \\(k\\).
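In R, this distinction maps directly to `which.max()` versus `max()`; the probabilities below are made up for illustration:
```
# Hypothetical (already computed) per-class probabilities.
probs <- c(Walking = 0.7, Jogging = 0.2, Sitting = 0.1)
max(probs)              # max: the highest probability.
#> [1] 0.7
names(which.max(probs)) # arg max: the class that attains it.
#> [1] "Walking"
```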
Now let’s see how we can compute \\(P(C\_k \| f\_1,\\dots ,f\_n)\\). To compute a conditional probability we can use Bayes’ rule:
\\\[\\begin{equation}
P(H\|E) \= \\frac{P(H)P(E\|H)}{P(E)}
\\tag{2\.11}
\\end{equation}\\]
Let’s dissect that formula:
1. \\(P(H\|E)\\) is called the **posterior** and it is the probability of the hypothesis \\(H\\) given the observed evidence \\(E\\). In our example, the hypothesis can be that \\(C\=Walking\\) and the evidence consists of the measured features. This is the probability that ultimately we want to estimate for each class and pick the class with the highest probability.
2. \\(P(H)\\) is called the **prior**. This is the probability of a hypothesis happening without having any evidence. In our example, this translates into the probability that an instance belongs to a particular class without looking at its features. In practice, this is estimated from the class counts in the training set. Suppose the training set consists of \\(100\\) instances and from those, \\(80\\) are of type *‘Walking’* and \\(20\\) are of type *‘Jogging’*. Then, the prior probability for *‘Walking’* is \\(P(C\=Walking)\=80/100\=0\.8\\) and the prior for *‘Jogging’* is \\(P(C\=Jogging)\=20/100\=0\.2\\).
3. \\(P(E)\\) is the probability of the evidence. Since this one doesn’t depend on the class we don’t need to compute it. This can be thought of as a normalization factor. When choosing the final class we only need to select the one with the highest score, so there is no need to normalize them into proper probabilities between \\(0\\) and \\(1\\).
4\. \\(P(E\|H)\\) is called the **likelihood**. For numerical variables we can estimate this using a *Gaussian probability density function*. This sounds intimidating, but all we need to do is compute the *mean* and *standard deviation* for each feature\-class pair and plug them into the probability density function (pdf). The formula for a Gaussian (also called normal) pdf is:
\\\[\\begin{equation}
f(x) \= \\frac{1}{{\\sigma \\sqrt {2\\pi } }}e^{ \- \\left( {x \- \\mu } \\right)^2 / 2 \\sigma ^2 }
\\tag{2\.12}
\\end{equation}\\]
where \\(\\mu\\) is the mean and \\(\\sigma\\) is the standard deviation.
Suppose that for some feature \\(f1\\), when the class is *‘Walking’*, its mean is \\(5\\) and its standard deviation is \\(3\\). That is, we filter the train set, select only those instances with class *‘Walking’*, and compute the mean and standard deviation of feature \\(f1\\). Figure [2\.15](classification.html#fig:pdf1) shows what its pdf looks like.
FIGURE 2\.15: Gaussian probability density function with mean 5 and standard deviation 3\.
If we have a query instance with a feature \\(f\_1 \= 1\.7\\), we can compute its likelihood given the *‘Walking’* class \\(P(f\_1\=1\.7\|C\=Walking)\\) with equation [(2\.12\)](classification.html#eq:gaussianpdf) by plugging \\(x\=1\.7\\), \\(\\mu\=5\\), and \\(\\sigma\=3\\). In R, the function `dnorm()` implements the normal pdf.
```
dnorm(x=1.7, mean = 5, sd = 3)
#> [1] 0.07261739
```
In Figure [2\.16](classification.html#fig:pdf2) the solid circle shows the likelihood when \\(x\=1\.7\\).
FIGURE 2\.16: Likelihood (0\.072\) when x\=1\.7\.
If we have more than one feature we need to compute the likelihood for each and take their **product**: \\(P(f\_1\|C\=Walking)\*P(f\_2\|C\=Walking)\*\\dots\*P(f\_n\|C\=Walking)\\). Each feature and class pair has its own \\(\\mu\\) and \\(\\sigma\\) parameters. Thus, Naive Bayes requires learning \\(K\*F\*2\\) parameters for the \\(P(E\|H)\\) part plus \\(K\\) parameters for the priors \\(P(H)\\). \\(K\\) is the number of classes, \\(F\\) is the number of features, and the \\(2\\) stands for the mean and standard deviation.
We have seen how we can compute \\(P(C\_k\|f\_1, \\dots ,f\_n)\\) using Bayes’ rule by calculating the prior \\(P(H)\\) and \\(P(E\|H)\\), which is the product of the likelihoods for each feature. If we substitute Bayes’ rule (omitting the denominator) in equation [(2\.10\)](classification.html#eq:bayesclassifier) we get our Naive Bayes classifier:
\\\[\\begin{equation}
y \= \\operatorname\*{arg max}\_{k \\in \\{1, \\dots ,K\\}} P(C\_k) \\prod\_{i\=1}^{F} P(f\_i \| C\_k)
\\tag{2\.13}
\\end{equation}\\]
In the following section we will implement our own Naive Bayes algorithm in R and test it on the *SMARTPHONE ACTIVITIES* dataset. Then, we will compare our implementation with that of the well known `e1071` package ([Meyer et al. 2019](#ref-e1071)).
Naive Bayes works well with missing values since the features are independent. At prediction time, if an instance has one or more missing values, those features are simply ignored and the posterior probability is computed based only on the available variables. Another advantage of the feature independence assumption is that feature selection algorithms run very fast with Naive Bayes. When building a predictive model, not all features may provide useful information and some features may even degrade the performance. Feature selection algorithms aim to find the best set of features and some of them need to try a huge number of feature combinations. With Naive Bayes, the parameters only need to be learned once and then different combinations of features can be evaluated by omitting the ones that are not used. With decision trees, for example, we would need to build entire new trees every time we want to try different input features.
Here, we have shown how we can use a Gaussian pdf to compute the likelihood \\(P(E\|H)\\) when the features are numeric. This assumes that the features have a normal distribution. However, this is not always the case. In practice, Naive Bayes can work really well even if that assumption is not met. Furthermore, nothing prevents us from using another distribution to estimate the likelihood or even defining a specific distribution for each feature. For categorical variables, \\(P(E\|H)\\) is estimated using the frequencies of the feature values.
### 2\.4\.1 Activity Recognition with Naive Bayes
`naive_bayes.R`
It’s time to implement Naive Bayes. To keep it simple, first we will go through a step by step example using a single feature. Then, we will implement a function to train a Naive Bayes classifier for the case of multiple features.
Let’s assume we have already split the data into train and test sets. The complete code is in the script `naive_bayes.R`. We will only use the feature *RESULTANT* which corresponds to the acceleration magnitude of the three axes of the accelerometer sensor. The following code snippet prints the first rows of the train set. The *RESULTANT* feature is in column \\(39\\) and the class is the last column (\\(40\\)).
```
head(trainset[,c(39:40)])
#> RESULTANT class
#> 1004 11.14 Walking
#> 623 1.24 Upstairs
#> 2693 9.90 Standing
#> 934 10.44 Upstairs
#> 4496 10.43 Walking
#> 2948 15.28 Jogging
```
First, we compute the prior probabilities for each class in the train set and store them in the variable `priors`. This corresponds to the \\(P(C\_k)\\) part in equation [(2\.13\)](classification.html#eq:bayesclassifier2).
```
# Compute prior probabilities.
priors <- table(trainset$class) / nrow(trainset)
# Print the table of priors.
priors
#> Downstairs Jogging Sitting Standing Upstairs
#> 0.09622990 0.30266280 0.05721065 0.04640127 0.11521223
#> Walking
#> 0.38228315
```
We can access each prior by name like this:
```
# Get the prior for "Jogging".
priors["Jogging"]
#> Jogging
#> 0.3026628
```
This means that \\(30\\%\\) of the instances in the train set are of type *‘Jogging’*. Now we need to compute the \\(P(f\_i\|C\_k)\\) part from equation [(2\.13\)](classification.html#eq:bayesclassifier2). In R, we can define a function to compute the probability density function from equation [(2\.12\)](classification.html#eq:gaussianpdf) as:
```
# Probability density function of normal distribution.
f <- function(x, m, s){
(1 / (sqrt(2*pi)*s)) * exp(-((x-m)^2) / (2 * s^2))
}
```
Its first argument `x` is the input value. The second argument `m` is the mean, and the last argument `s` is the standard deviation. For illustration purposes we are defining this function manually, but remember that this pdf is already implemented in the base `dnorm()` function.
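We can sanity\-check our manual implementation against `dnorm()`; both should return the same density:
```
f(1.7, m = 5, s = 3)
#> [1] 0.07261739
dnorm(x = 1.7, mean = 5, sd = 3)
#> [1] 0.07261739
```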
According to equation [(2\.13\)](classification.html#eq:bayesclassifier2) we need to compute \\(P(f\_i\|C\_k)\\) for each feature \\(i\\) and class \\(k\\). Let’s assume there are only two classes, *‘Standing’* and *‘Jogging’*. Thus, we need to compute the mean and standard deviation for each of them for the feature *RESULTANT* (column \\(39\\)).
```
# Compute the mean and sd of
# the feature RESULTANT (column 39)
# when the class = "Standing".
mean.standing <- mean(trainset[which(trainset$class == "Standing"), 39])
sd.standing <- sd(trainset[which(trainset$class == "Standing"), 39])
# Compute mean and sd when
# the class = "Jogging".
mean.jogging <- mean(trainset[which(trainset$class == "Jogging"), 39])
sd.jogging <- sd(trainset[which(trainset$class == "Jogging"), 39])
```
Print the means:
```
mean.standing
#> [1] 9.405795
mean.jogging
#> [1] 13.70145
```
Note that the mean value for *‘Jogging’* is higher for this feature. This was expected since this feature captures the overall movement across all axes. Now we have everything we need to start making predictions on new instances. We have the priors and we have the means and standard deviations for each feature\-class pair.
Let’s select the first instance from the test set and try to predict its class.
```
# Select a query instance from the test set.
query <- testset[1,] # Select the first one.
```
Now we compute the posterior probability for each class using the learned means and standard deviations:
```
# Compute P(Standing)P(RESULTANT|Standing)
priors["Standing"] * f(query$RESULTANT, mean.standing, sd.standing)
#> 0.003169748
# Compute P(Jogging)P(RESULTANT|Jogging)
priors["Jogging"] * f(query$RESULTANT, mean.jogging, sd.jogging)
#> 0.03884481
```
The (unnormalized) posterior for *‘Jogging’* was higher (\\(0\.038\\)), so we classify the query instance as *‘Jogging’*. If we check the true class we see that it was correctly classified!
```
# Inspect the true class of the query instance.
query$class
#> [1] "Jogging"
```
In this example we assumed that there was only one feature and we computed each step manually. However, this can easily be extended to deal with more features. So let’s just do that. We can write two functions, one for training the classifier and the other for making predictions.
The following function will be used to train the classifier. It takes as input a data frame with \\(n\\) features. This function assumes that the class is the last column. The function returns a list with the learned priors, means, and standard deviations.
```
# Function to learn the parameters of
# a Naive Bayes classifier.
# Assumes that the last column of data is the class.
naive.bayes.train <- function(data){
# Unique classes.
classes <- unique(data$class)
# Number of features.
nfeatures <- ncol(data) - 1
# List to store the learned means and sds.
list.means.sds <- list()
for(c in classes){
# Matrix to store the mean and sd for each feature.
# First column stores the mean and second column
# stores the sd.
M <- matrix(0, nrow = nfeatures, ncol = 2)
# Populate matrix.
for(i in 1:nfeatures){
feature.values <- data[which(data$class == c),i]
M[i,1] <- mean(feature.values)
M[i,2] <- sd(feature.values)
}
list.means.sds[c] <- list(M)
}
# Compute prior probabilities.
priors <- table(data$class) / nrow(data)
return(list(list.means.sds=list.means.sds,
priors=priors))
}
```
The function iterates through each class and for each, it creates a matrix `M` with \\(F\\) rows and \\(2\\) columns, where \\(F\\) is the number of features. The first column stores the means and the second the standard deviations. Those matrices are saved in a list indexed by the class name so at prediction time we can retrieve each matrix individually. At the end, the prior probabilities are computed. Finally, a list is returned whose first element is the list of matrices and whose second element contains the priors.
The next function will make predictions based on the learned parameters. Its first argument is the learned parameters and the second is a data frame with the instances we want to make predictions for.
```
# Function to make predictions using
# the learned parameters.
naive.bayes.predict <- function(params, data){
# Variable to store the prediction for each instance.
predictions <- NULL
n <- nrow(data)
# Get class names.
classes <- names(params$priors)
# Get number of features.
nfeatures <- nrow(params$list.means.sds[[1]])
# Iterate instances.
for(i in 1:n){
query <- data[i,]
max.probability <- -Inf
predicted.class <- ""
# Find the class with highest probability.
for(c in classes){
# Get the prior probability for class c.
acum.prob <- params$priors[c]
# Iterate features.
for(j in 1:nfeatures){
# Compute P(feature|class)
tmp <- f(query[,j],
params$list.means.sds[[c]][j,1],
params$list.means.sds[[c]][j,2])
# Accumulate result.
acum.prob <- acum.prob * tmp
}
if(acum.prob > max.probability){
max.probability <- acum.prob
predicted.class <- c
}
}
predictions <- c(predictions, predicted.class)
}
return(predictions)
}
```
This function iterates through each instance and computes the posterior for each class and stores the one that achieved the highest value as the prediction. Finally, it returns the list with all predictions.
Now we are ready to train our Naive Bayes classifier. All we need to do is call the function `naive.bayes.train()` and pass the train set.
```
# Learn Naive Bayes parameters.
nb.model <- naive.bayes.train(trainset)
```
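As a quick sanity check, the stored mean of the *RESULTANT* feature (row \\(39\\) of the parameter matrix, since *RESULTANT* is the \\(39\\)th column) for the class *‘Jogging’* should match the value we computed manually before, assuming the same train set:
```
# Stored mean of RESULTANT for class "Jogging".
nb.model$list.means.sds[["Jogging"]][39, 1]
#> [1] 13.70145
```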
The learned parameters are stored in `nb.model` and we can make predictions with the `naive.bayes.predict()` function by passing the `nb.model` and a test set.
```
# Make predictions.
predictions <- naive.bayes.predict(nb.model, testset)
```
Then, we can assess the performance of the model by computing the confusion matrix.
```
# Compute confusion matrix and other performance metrics.
groundTruth <- testset$class
cm <- confusionMatrix(as.factor(predictions),
as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.7501538
# Print overall metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity",
"Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.6621381 0.9423729 0.6468372 0.6433231
```
The accuracy was \\(75\\%\\). In the previous section we obtained an accuracy of \\(78\.9\\%\\) with decision trees. However, this does not necessarily mean that decision trees are better. Moreover, in the previous section we used cross\-validation whereas here we used hold\-out validation, so the two numbers are not directly comparable.
Computing the posterior may cause a loss of numeric precision, especially when there are many features. This is because we are multiplying the likelihoods of each feature (see equation [(2\.13\)](classification.html#eq:bayesclassifier2)) and those likelihoods are small numbers. One way to fix that is to use logarithms. In `naive.bayes.predict()` we can replace `acum.prob <- params$priors[c]` with `acum.prob <- log(params$priors[c])` and `acum.prob <- acum.prob * tmp` with `acum.prob <- acum.prob + log(tmp)`. If you try those changes you should get the same result as before.
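To see why this matters, consider multiplying \\(40\\) small likelihoods: the product underflows to \\(0\\) in double precision, while the sum of logarithms stays representable:
```
# Multiplying many small likelihoods underflows to zero...
prod(rep(1e-10, 40))
#> [1] 0
# ...but summing their logarithms does not.
sum(log(rep(1e-10, 40)))
#> [1] -921.034
```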
There is already a popular R package (`e1071`) for training Naive Bayes classifiers. The following code trains a classifier using this package.
```
#### Use Naive Bayes implementation from package e1071 ####
library(e1071)
# We need to convert the class into a factor.
trainset$class <- as.factor(trainset$class)
nb.model2 <- naiveBayes(class ~., trainset)
predictions2 <- predict(nb.model2, testset)
cm2 <- confusionMatrix(as.factor(predictions2),
as.factor(groundTruth))
# Print accuracy
cm2$overall["Accuracy"]
#> Accuracy
#> 0.7501538
```
As you can see, the result was the same as the one obtained with our implementation! We implemented our own for illustrative purposes but it is advisable to use already tested and proven packages. Furthermore, this one also supports categorical variables.
2\.5 Dynamic Time Warping
-------------------------
`dtw_example.R`
In the previous activity recognition example, we trained the classifiers with extracted feature vectors instead of the raw data. In some situations this discards information about temporal relationships. In the previous example, we could classify the activities with reasonable accuracy since the extracted features retained enough information from the raw data. However, in some cases, having temporal information is crucial. For example, in hand signature recognition, a query signature is checked for a match with one of the signatures in a database. The signatures need to have an almost exact match to authenticate a user. If we represent each signature as a feature vector, it can turn out that two signatures have very similar feature vectors even though they look completely different. For example, Figure [2\.17](classification.html#fig:correlations) shows four datasets. They look very different but they all have the same correlation of \\(0\.816\\)[6](#fn6).
FIGURE 2\.17: Four datasets with the same correlation of 0\.816\. (Anscombe, Francis J., 1973, Graphs in statistical analysis. American Statistician, 27, 17–21\. Source: Wikipedia, User:Schutz (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
To avoid this potential issue, we can also include time\-dependent information into our models by keeping the order of the data points. Another issue is that two time series that belong to the same class will still have some differences. Every time the same person signs a document the signature will vary a bit. In the same way, when we pronounce a word, sometimes we emphasize some letters or speak at different speeds. Figure [2\.18](classification.html#fig:verygood) shows two versions of the sentence “very good”. In the second one (bottom) the speaker emphasizes the “e” and as a result, the two sentences are not aligned in time anymore even though they have the same meaning.
FIGURE 2\.18: Time shift example between two sentences.
To compare two sequences we could use the well\-known Euclidean distance. However, since the two sequences may not be aligned in time, the result could be misleading. Furthermore, the two sequences may differ in length. To account for this “time\-shift” effect in timeseries data, *Dynamic Time Warping* (DTW) ([Sakoe et al. 1990](#ref-sakoe1990dynamic)) can be used instead. DTW is a method that:
* Finds an optimal match between two time\-dependent sequences.
* Computes their dissimilarity.
* Finds the optimal deformation (mapping) of one of the sequences onto the other.
Another advantage of DTW is that the timeseries do not need to be of the same length. Suppose we have two timeseries, a *query* and a *reference* we want to compare it with:
\\\[\\begin{align\*}
query\&\=(2,2,2,4,4,3\)\\\\
ref\&\=(2,2,3,3,2\)
\\end{align\*}\\]
The first thing to note is that the sequences differ in length. Figure [2\.19](classification.html#fig:queryref) shows their plot. The *query* is the solid line and seems to be shifted one position to the right with respect to the *reference*. The plot also shows the resulting alignment after applying the DTW algorithm (dashed lines between the sequences). The resulting distance (after aligning) between the sequences is \\(3\\). In the following, we will see how the problem can be formalized and computed. Don’t worry if you find the math notation a bit difficult to grasp at this point. A step by step example will follow which should help to explain how the method works.
FIGURE 2\.19: DTW alignment between the query and reference sequences (solid line is the query).
The problem of aligning two sequences can be formalized as follows ([Rabiner and Juang 1993](#ref-Rabiner1993)). Let \\(X\\) and \\(Y\\) be two sequences:
\\\[\\begin{align\*}
X\&\=(x\_1,x\_2,\\dots,x\_{T\_x}) \\\\
Y\&\=(y\_1,y\_2,\\dots,y\_{T\_y})
\\end{align\*}\\]
where \\(x\_i\\) and \\(y\_i\\) are vectors. In the previous example, the vectors only have one element since the sequences are \\(1\\)\-dimensional, but DTW also works with multidimensional sequences. \\(T\_x\\) and \\(T\_y\\) are the sequences’ lengths. Let \\(d(i\_x,i\_y)\\) be the *dissimilarity* (distance) between vectors \\(x\_{i\_x}\\) and \\(y\_{i\_y}\\) (e.g., the Euclidean distance). Then, \\(\\phi\_x\\) and \\(\\phi\_y\\) are the warping functions that relate \\(i\_x\\) and \\(i\_y\\) to a common axis \\(k\\):
\\\[\\begin{align\*}
i\_x\&\=\\phi\_x (k), k\=1,2,\\dots,T \\\\
i\_y\&\=\\phi\_y (k), k\=1,2,\\dots,T.
\\end{align\*}\\]
The total dissimilarity between the two sequences is:
\\\[\\begin{equation}
d\_\\phi (X,Y) \= \\sum\_{k\=1}^T{d\\left(\\phi\_x (k), \\phi\_y (k)\\right)}
\\tag{2\.14}
\\end{equation}\\]
The aim is to find the warping function \\(\\phi\\) that minimizes the total dissimilarity:
\\\[\\begin{equation}
\\operatorname\*{min}\_{\\phi} d\_\\phi (X,Y)
\\tag{2\.15}
\\end{equation}\\]
The solution can be efficiently computed using dynamic programming. Usually, when solving this minimization problem, some constraints are applied:
* **Endpoint constraints.** This constraint makes sure that the first and last elements of each sequence are connected (mapped to each other).
\\\[\\begin{align\*}
\\phi\_x (1\)\&\=1, \\phi\_y (1\)\=1 \\\\
\\phi\_x (T)\&\=T\_x, \\phi\_y (T)\=T\_y
\\end{align\*}\\]
* **Monotonicity.** This constraint allows ‘time to flow’ only from left to right. That is, we cannot go back in time.
\\\[\\begin{align\*}
\\phi\_x (k\+1\) \\geq \\phi\_x(k) \\\\
\\phi\_y (k\+1\) \\geq \\phi\_y(k)
\\end{align\*}\\]
* **Local constraints.** For example, allow jumps of at most \\(1\\) step.
\\\[\\begin{align\*}
\\phi\_x (k\+1\) \- \\phi\_x(k) \\leq 1 \\\\
\\phi\_y (k\+1\) \- \\phi\_y(k) \\leq 1
\\end{align\*}\\]
It is also possible to apply global constraints, other local constraints, and different weights to the slopes, but the three described above are the most common ones. For a comprehensive list of constraints, please see ([Rabiner and Juang 1993](#ref-Rabiner1993)). Now let’s get back to our example and go through the steps to compute the dissimilarity and warping functions between our query (\\(Q\\)) and reference (\\(R\\)) sequences:
\\\[\\begin{align\*}
Q\&\=(2,2,2,4,4,3\) \\\\
R\&\=(2,2,3,3,2\)
\\end{align\*}\\]
The first step is to compute a *local cost matrix*. This is just a matrix that contains the distance between every pair of points between the two sequences. For this example, we will use the *Manhattan distance*. Since our sequences are \\(1\\)\-dimensional this distance can be computed as the absolute difference \\(\|x\_i \- y\_i\|\\). Figure [2\.20](classification.html#fig:localCost) shows the resulting local cost matrix.
FIGURE 2\.20: Local cost matrix between Q and R.
For example, position \\((1,1\)\=0\\) (*row*,*column*) because the first element of \\(Q\\) is \\(2\\) and the first element of \\(R\\) is also \\(2\\), thus \\(\|2\-2\|\=0\\). The rest of the matrix is filled in the same way. In dynamic programming, partial results are computed and stored in a table. Figure [2\.21](classification.html#fig:dynamicTable) shows the final dynamic programming table computed from the local cost matrix. Initially, this table is empty. We start to fill it from the bottom left at position \\((1,1\)\\). From the local cost matrix, the cost at position \\((1,1\)\\) is \\(0\\), so the cost at that position in the dynamic programming table is also \\(0\\). Then we can start filling in the contiguous cells. The only direction from which we can arrive at position \\((1,2\)\\) is from the west (W). The local cost at position \\((1,2\)\\) is \\(0\\) and the minimum cumulative cost of the cell to the west, \\((1,1\)\\), is also \\(0\\), so \\(W:0\+0\=0\\). For each cell we add its local cost to the minimum cumulative cost among the cells we can arrive from. The minimum costs are marked in red. For some cells it is possible to arrive from three different directions, S, W, and SW, in which case we need to compute the cost from each of them. The final minimum cost at position \\((5,6\)\\) is \\(3\\); thus, that is the global DTW distance. In this example, the minimum at \\((5,6\)\\) can be reached from either the south or the southwest.
FIGURE 2\.21: Dynamic programming table.
Once the table is filled in, we can backtrack starting at \\((5,6\)\\) to find the warping functions. Figure [2\.22](classification.html#fig:warpingResult) shows the final warping functions. Because of the endpoint constraints, we know that \\(\\phi\_Q(1\)\=1, \\phi\_R(1\)\=1\\), \\(\\phi\_Q(6\)\=6\\), and \\(\\phi\_R(6\)\=5\\). Then, from \\((5,6\)\\), the minimum contiguous value is \\(2\\), coming from SW, thus \\(\\phi\_Q(5\)\=5, \\phi\_R(5\)\=4\\), and so on. Note that we could also have chosen to arrive from the south with the same minimum value of \\(2\\); this would have resulted in the same overall distance. The dashed line in Figure [2\.21](classification.html#fig:dynamicTable) shows the full backtracking.
FIGURE 2\.22: Resulting warping functions.
The runtime complexity of DTW is \\(O(T\_x T\_y)\\). This is the required time to compute the local cost matrix and the dynamic programming table.
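To make the recurrence concrete, here is a minimal sketch of the dynamic programming procedure just described, assuming the same moves as in our example (single steps from the south, west, or southwest, each weighted equally). The function name `dtw.distance` is illustrative only; in this sketch, rows index the query and columns index the reference, matching the `localCostMatrix` output of the `dtw` package shown below.

```
# Minimal sketch of the DTW recurrence; not a production implementation.
dtw.distance <- function(x, y){
  Tx <- length(x); Ty <- length(y)
  # Local cost matrix using the Manhattan distance (1-dimensional case).
  lcm <- abs(outer(x, y, "-"))
  D <- matrix(Inf, nrow = Tx, ncol = Ty)
  D[1, 1] <- lcm[1, 1]
  for(i in 1:Tx){
    for(j in 1:Ty){
      if(i == 1 && j == 1) next
      # Costs of arriving from the allowed contiguous cells.
      fromS  <- if(i > 1) D[i - 1, j] else Inf
      fromW  <- if(j > 1) D[i, j - 1] else Inf
      fromSW <- if(i > 1 && j > 1) D[i - 1, j - 1] else Inf
      D[i, j] <- lcm[i, j] + min(fromS, fromW, fromSW)
    }
  }
  return(D[Tx, Ty]) # Global DTW distance.
}
dtw.distance(c(2,2,2,4,4,3), c(2,2,3,3,2))
#> [1] 3
```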
In R, the `dtw` package ([Giorgino 2009](#ref-giorgino2009)) has the function `dtw()` to compute the DTW distance between two sequences. Let’s use this package to solve the previous example.
```
library("dtw")
# Sequences from the example
query <- c(2,2,2,4,4,3)
ref <- c(2,2,3,3,2)
# Find dtw distance.
alignment <- dtw(query, ref,
step = symmetric1, keep.internals = T)
```
Setting `keep.internals = T` retains the input data so they can be accessed later, e.g., for plotting. The cost matrix and final distance can be accessed from the resulting object. The `step` argument specifies a step pattern. A step pattern describes some of the algorithm’s constraints, such as the endpoint and local constraints. In this case, we use `symmetric1`, which applies the constraints explained before. We can access the cost matrix, the final distance, and the warping functions \\(\\phi\_x\\) and \\(\\phi\_y\\) as follows:
```
alignment$localCostMatrix
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 0 0 1 1 0
#> [2,] 0 0 1 1 0
#> [3,] 0 0 1 1 0
#> [4,] 2 2 1 1 2
#> [5,] 2 2 1 1 2
#> [6,] 1 1 0 0 1
alignment$distance
#> [1] 3
alignment$index1
#> [1] 1 2 3 4 5 6
alignment$index2
#> [1] 1 1 2 3 4 5
```
The local cost matrix is the same one as in Figure [2\.20](classification.html#fig:localCost) but in rotated form. The resulting object also has the dynamic programming table which can be plotted along with the resulting backtracking (see Figure [2\.23](classification.html#fig:backtracking)).
```
ccm <- alignment$costMatrix
image(x = 1:nrow(ccm), y = 1:ncol(ccm),
ccm, xlab = "Q", ylab = "R")
text(row(ccm), col(ccm), label = ccm)
lines(alignment$index1, alignment$index2)
```
FIGURE 2\.23: Dynamic programming table and backtracking.
Finally, the aligned sequences can be plotted. Figure [2\.19](classification.html#fig:queryref), shown previously, is the result of the following command.
```
plot(alignment, type="two", off=1.5,
match.lty=2,
match.indices=10,
main="DTW resulting alignment",
xlab="time", ylab="magnitude")
```
### 2\.5\.1 Hand Gesture Recognition
`hand_gestures.R`, `hand_gestures_auxiliary.R`
Gestures are a form of communication. They often accompany speech but can also be used to communicate something independently of speech (as in sign language). Gestures allow us to externalize and emphasize emotions and thoughts. They are based on body movements of the arms, hands, fingers, face, head, etc. Gestures can be used as a non\-verbal way to identify and study behaviors for different purposes, such as emotion recognition ([De Gelder 2006](#ref-de2006towards)) or the identification of developmental disorders like autism ([Anzulewicz, Sobota, and Delafield\-Butt 2016](#ref-anzulewicz2016toward)).
Gestures can also be used to develop user\-computer interaction applications. One example is a gesture\-recognition application for domotics that I programmed some time ago using the same algorithms presented here. The application determines the indoor location using \\(k\\)\-NN, as shown in this chapter, and classifies the gestures using DTW (I’ll show how to do it in a moment). Based on the location and type of gesture, a specific home appliance is activated.
To demonstrate how DTW can be used for hand gesture recognition, we will examine the *HAND GESTURES* dataset that was collected with a smartphone using its accelerometer sensor. The data was collected by \\(10\\) individuals who performed \\(5\\) repetitions of \\(10\\) different gestures (*‘triangle’*, *‘square’*, *‘circle’*, *‘a’*, *‘b’*, *‘c’*, *‘1’*, *‘2’*, *‘3’*, *‘4’*). The sensor is a tri\-axial accelerometer that returns values for the \\(x\\), \\(y\\), and \\(z\\) axes. The participants were not instructed to hold the smartphone in any particular way. The sampling rate was set at \\(50\\) Hz. To record a gesture, the user presses the phone’s screen with her/his thumb, performs the gesture in the air, and stops pressing the screen after the gesture is complete. Figure [2\.24](classification.html#fig:gesturesFigure) shows the start and end positions of the \\(10\\) gestures.
FIGURE 2\.24: Paths for the 10 considered gestures.
In order to make the recognition orientation\-independent, we can compute the *magnitude* of the \\(3\\) accelerometer axes. This will provide us with the overall movement patterns regardless of orientation.
\\\[\\begin{equation}
Magnitude(t) \= \\sqrt {{a\_x}{{(t)}^2} \+ {a\_y}{{(t)}^2} \+ {a\_z}{{(t)}^2}}
\\tag{2\.16}
\\end{equation}\\]
where \\({a\_x}{{(t)}}\\), \\({a\_y}{{(t)}}\\), and \\({a\_z}{{(t)}}\\) are the accelerations at time \\(t\\).
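As a quick illustration, equation [(2\.16\)](classification.html#eq:magnitude) translates directly into vectorized R code. In the sketch below, `ax`, `ay`, and `az` are hypothetical numeric vectors holding one acceleration sample per element.

```
# Sketch: magnitude of a tri-axial accelerometer signal.
magnitude <- function(ax, ay, az){
  sqrt(ax^2 + ay^2 + az^2)
}
```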
Figure [2\.25](classification.html#fig:handGestureMagnitude) shows the raw accelerometer values (dashed lines) for a *triangle* gesture. The solid line shows the resulting magnitude. This also simplifies things, since we will now work with \\(1\\)\-dimensional sequences (the magnitudes) instead of \\(3\\) separate axes.
FIGURE 2\.25: Triangle gesture.
The gestures are stored in text files that contain the \\(x\\), \\(y\\), and \\(z\\) recordings. The script `hand_gestures_auxiliary.R` has some auxiliary functions to preprocess the data. Since the sequences of each gesture are of varying length, storing them in a data frame could be problematic because all columns of a data frame must have the same length. Instead, the `gen.instances()` function processes the files and returns all hand gestures as a list. This function also computes the magnitude (equation [(2\.16\)](classification.html#eq:magnitude)). The following code (from `hand_gestures.R`) calls the `gen.instances()` function and stores the results in the `instances` variable, which is a list. Then, we select the first and second instances to be the query and the reference.
```
# Format instances from files.
instances <- gen.instances("../data/hand_gestures/")
# Use first instance as the query.
query <- instances[[1]]
# Use second instance as the reference.
ref <- instances[[2]]
```
Each element in `instances` is also a list that stores the *type* and *values* (magnitude) of each gesture.
```
# Print their respective classes
print(query$type)
#> [1] "1"
print(ref$type)
#> [1] "1"
```
Here, the first two instances are of type *‘1’*. We can also print the magnitude values.
```
# Print values.
print(query$values)
#> [1] 9.167477 9.291464 9.729926 9.901090 ....
```
Since both instances are of the same class, we expect their DTW distance to be relatively small. We can use the `dtw()` function to compute the dissimilarity between the *query* and the *reference* instance and plot the resulting alignment (Figure [2\.26](classification.html#fig:alignmentExample)).
```
alignment <- dtw(query$values, ref$values, keep = TRUE)
# Print similarity (distance)
alignment$distance
#> [1] 68.56493
# Plot result.
plot(alignment, type="two", off=1, match.lty=2, match.indices=40,
main="DTW resulting alignment",
xlab="time", ylab="magnitude")
```
FIGURE 2\.26: Resulting alignment.
To perform the actual classification, we will use our well\-known \\(k\\)\-NN classifier with \\(k\=1\\). To classify a *query instance*, we need to compute its DTW distance to every other instance in the training set and predict the label from the closest one. We will test the performance using \\(10\\)\-fold cross\-validation. Since computing all DTW distances takes some time, we can precompute all pairs of distances and store them in a matrix. The auxiliary function `matrix.distances()` does the job. Since this can take some minutes, the results are saved so there is no need to wait next time the code is run.
```
D <- matrix.distances(instances)
# Save results.
save(D, file="D.RData")
```
The `matrix.distances()` function returns a list. The first element is an array with the gestures’ classes and the second element is the actual distance matrix. The elements in the diagonal are set to `Inf` to signal that we don’t want to take into account the dissimilarity between a gesture and itself.
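In case you are curious, such a precomputation could look roughly like the sketch below. This is only an illustration; the actual `matrix.distances()` in `hand_gestures_auxiliary.R` may differ in its details.

```
# Sketch of a pairwise DTW distance precomputation, assuming
# `instances` is a list whose elements have $type and $values.
matrix.distances.sketch <- function(instances){
  n <- length(instances)
  classes <- sapply(instances, function(e){e$type})
  D <- matrix(Inf, nrow = n, ncol = n) # Inf on the diagonal.
  for(i in 1:(n - 1)){
    for(j in (i + 1):n){
      d <- dtw(instances[[i]]$values, instances[[j]]$values,
               distance.only = TRUE)$distance
      D[i, j] <- d; D[j, i] <- d # The DTW distance here is symmetric.
    }
  }
  return(list(classes, D))
}
```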
For convenience, this matrix is already stored in the file `D.RData`, located in this chapter’s code directory. The following code performs the \\(10\\)\-fold cross\-validation and computes the performance results.
```
# Load the DTW distances matrix.
load("D.RData")
set.seed(1234)
k <- 10 # Number of folds.
folds <- sample(k, size = length(D[[1]]), replace = T)
predictions <- NULL
groundTruth <- NULL
# Implement k-NN with k=1.
for(i in 1:k){
trainSet <- which(folds != i)
testSet <- which(folds == i)
train.labels <- D[[1]][trainSet]
for(query in testSet){
type <- D[[1]][query]
distances <- D[[2]][query, ][trainSet]
# Return the closest one.
nn <- sort(distances, index.return = T)$ix[1]
pred <- train.labels[nn]
predictions <- c(predictions, pred)
groundTruth <- c(groundTruth, type)
}
} # end of for
```
The line `distances <- D[[2]][query, ][trainSet]` retrieves the pre\-computed distances between the test *query* and all gestures in the train set. Then, those distances are sorted in ascending order and the class of the closest one is used as the prediction. Finally, the performance is calculated.
```
cm <- confusionMatrix(factor(predictions),
factor(groundTruth))
# Compute performance metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: 1 0.84 0.9911111 0.9130435 0.8750000
#> Class: 2 0.84 0.9866667 0.8750000 0.8571429
#> Class: 3 0.96 0.9911111 0.9230769 0.9411765
#> Class: 4 0.98 0.9933333 0.9423077 0.9607843
#> Class: a 0.78 0.9733333 0.7647059 0.7722772
#> Class: b 0.76 0.9955556 0.9500000 0.8444444
#> Class: c 0.90 1.0000000 1.0000000 0.9473684
#> Class: circleLeft 0.78 0.9622222 0.6964286 0.7358491
#> Class: square 1.00 0.9977778 0.9803922 0.9900990
#> Class: triangle 0.92 0.9711111 0.7796610 0.8440367
# Overall performance metrics
colMeans(cm$byClass[,c("Recall", "Specificity",
"Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.8760000 0.9862222 0.8824616 0.8768178
```
FIGURE 2\.27: Confusion matrix for hand gestures’ predictions.
The overall recall was \\(0\.87\\) which is not bad. From the confusion matrix (Figure [2\.27](classification.html#fig:gesturesCM)), we can see that the class *‘a’* was often confused with *‘circleLeft’* and vice versa. This makes sense since both have similar motions (see Figure [2\.24](classification.html#fig:gesturesFigure)). Also, *‘b’* was often confused with *‘circleLeft’*. The *‘square’* class was always correctly classified. This example demonstrated how DTW can be used with \\(k\\)\-NN to recognize hand gestures.
2\.6 Dummy Models
-----------------
`dummy_classifiers.R`
When faced with a new problem, you may be tempted to start solving it with a complex model. You then train your complex model and evaluate it. The results look reasonably good, so you think you are done. However, this good performance could be merely an *illusion*. Sometimes there are underlying problems with the data that can give the false impression that a model is performing well. Examples of such problems are imbalanced datasets, no correlation between the features and the classes, features not containing enough information, etc. **Dummy models** can be used to spot some of those problems. Dummy models use little or no information at all when making predictions (we’ll see how in a moment).
Furthermore, for some problems (especially in regression) it is not clear what counts as good performance. There are problems in which doing slightly better than random is considered a great achievement (e.g., in forecasting), but for other problems that would be unacceptable. Thus, we need some type of baseline to assess whether or not a particular model is bringing some benefit. Dummy models are not only used to spot problems but can serve as baselines as well.
Dummy models are also called *baseline models* or *dumb models*. One student I was supervising used to call them *stupid models*. When I am angry, I also call them that, but today I’m in a good mood so I’ll refer to them as *dummy*.
Now, I will present three types of dummy classifiers and how they can be implemented in R.
### 2\.6\.1 Most\-frequent\-class Classifier
As the name implies, the most\-frequent\-class classifier always predicts the most frequent label found in the train set. This means that the model does not even need to look at the features! Once it is presented with a new instance, it just outputs the most common class as the prediction.
To show how it can be implemented, I will use the *SMARTPHONES ACTIVITIES* dataset. For demonstration purposes, I will only keep two classes: *‘Walking’* and *‘Upstairs’*. Furthermore, I will only pick a small percentage of the instances with class *‘Upstairs’* to simulate an imbalanced dataset. Imbalanced means that there are classes for which only a few instances exist. More about imbalanced data and how to handle it will be covered in chapter [5](preprocessing.html#preprocessing). After those modifications, we can check the class counts:
```
# Print class counts.
table(dataset$class)
#> Upstairs Walking
#> 200 2081
# In percentages.
table(dataset$class) / nrow(dataset)
#> Upstairs Walking
#> 0.08768084 0.91231916
```
We can see that more than \\(90\\%\\) of the instances belong to class *‘Walking’*. It’s time to define the dummy classifier!
```
# Define the dummy classifier's train function.
most.frequent.class.train <- function(data){
# Get a table with the class counts.
counts <- table(data$class)
# Select the label with the most counts.
most.frequent <- names(which.max(counts))
return(most.frequent)
}
```
The `most.frequent.class.train()` function will learn the parameters from a train set. The only thing this model needs to learn is which class is the most frequent. First, the `table()` function is used to get the class counts, and then the name of the class with the maximum count is returned. Now we define the predict function, which takes as its first argument the learned parameters and as its second argument the test set on which we want to make predictions. Here, the learned parameter consists only of the name of a class.
```
# Define the dummy classifier's predict function.
most.frequent.class.predict <- function(params, data){
# Return the same label for as many rows as there are in data.
return(rep(params, nrow(data)))
}
```
The only thing the predict function does is return the `params` argument (the class name) repeated \\(n\\) times, where \\(n\\) is the number of rows in the test data frame.
Let’s try our functions. The dataset has already been split into \\(50\\%\\) for training and \\(50\\%\\) for testing. First we train the dummy model using the train set. Then, the learned parameter is printed.
```
# Learn the parameters.
dummy.model1 <- most.frequent.class.train(trainset)
# Print the learned parameter.
dummy.model1
#> [1] "Walking"
```
Now we can make predictions on the test set and compute the accuracy.
```
# Make predictions.
predictions <- most.frequent.class.predict(dummy.model1, testset)
# Compute confusion matrix and other performance metrics.
cm <- confusionMatrix(factor(predictions, levels),
factor(testset$class, levels))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.9087719
```
The accuracy was \\(90\.8\\%\\). It seems that the dummy classifier was not that dummy after all! Let’s print the confusion matrix to inspect the predictions.
```
# Print confusion matrix.
cm$table
#> Reference
#> Prediction Walking Upstairs
#> Walking 1036 104
#> Upstairs 0 0
```
From the confusion matrix we can see that all *‘Walking’* activities were correctly classified but none of the *‘Upstairs’* instances were identified. This is because the dummy model only predicts *‘Walking’*. Here we can see that even though it seemed like the dummy model was doing pretty well, it was not that good after all.
We can now try with a decision tree from the `rpart` package.
```
### Let's try with a decision tree.
library(rpart)
treeClassifier <- rpart(class ~ ., trainset)
tree.predictions <- predict(treeClassifier, testset, type = "class")
cm.tree <- confusionMatrix(factor(tree.predictions, levels),
factor(testset$class, levels))
# Print accuracy
cm.tree$overall["Accuracy"]
#> Accuracy
#> 0.9263158
```
Decision trees are more powerful than dummy classifiers but the accuracy was very similar!
It is a good practice to compare powerful models against dummy models. If their performances are similar, this may be an indication that there is something that needs to be checked. In this example, the problem was that the dataset was imbalanced. It is also advisable to report not only the accuracy but other metrics as well. We could also have noticed the imbalance problem by looking at the recall of the individual classes, for example.
### 2\.6\.2 Uniform Classifier
This is another type of dummy classifier. This one predicts classes at random with equal probability and can be implemented as follows.
```
# Define the dummy classifier's train function.
uniform.train <- function(data){
# Get the unique classes.
unique.classes <- unique(data$class)
return(unique.classes)
}
# Define the dummy classifier's predict function.
uniform.predict <- function(params, data){
# Sample classes uniformly at random from the learned classes.
return(sample(params, size = nrow(data), replace = T))
}
```
At prediction time, it just picks a random label for each instance in the test set. This model achieved an accuracy of only \\(49\.0\\%\\) on the same dataset, but it correctly identified more instances of type *‘Upstairs’*.
```
#> Reference
#> Prediction Walking Upstairs
#> Walking 506 54
#> Upstairs 530 50
```
If a dataset is balanced and the accuracy of the uniform classifier is similar to the more complex model, the problem may be that the features are not providing enough information. That is, the complex classifier was not able to extract any useful patterns from the features.
### 2\.6\.3 Frequency\-based Classifier
This one is similar to the uniform classifier but the probability of choosing a class is proportional to its frequency in the train set. Its implementation is similar to the uniform classifier but makes use of the `prob` parameter in the `sample()` function to specify weights for each class. The higher the weight for a class, the more probable it will be chosen at prediction time. The implementation of this one is in the script `dummy_classifiers.R`.
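A sketch of how this could be implemented is shown below, following the same train/predict pattern as the previous dummy classifiers. This is only an illustration; the actual implementation in `dummy_classifiers.R` may differ in its details.

```
# Sketch of the frequency-based dummy classifier.
frequency.based.train <- function(data){
  counts <- table(data$class)
  # Store the class names and their relative frequencies.
  return(list(classes = names(counts),
              weights = as.numeric(counts) / sum(counts)))
}
frequency.based.predict <- function(params, data){
  # Sample classes with probability proportional to their
  # frequency in the train set (via the prob parameter).
  return(sample(params$classes, size = nrow(data),
                replace = TRUE, prob = params$weights))
}
```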
The frequency\-based classifier achieved an accuracy of \\(85\.5\\%\\). This is much lower than the most\-frequent\-class model (\\(90\.8\\%\\)), but it was able to detect some of the *‘Upstairs’* instances.
### 2\.6\.4 Other Dummy Classifiers
Another dummy model that can be used for classification is to apply simple thresholds.
```
# A simple threshold-based rule wrapped as a function.
simple.threshold.classifier <- function(feature1, threshold){
  if(feature1 < threshold)
    return("A")
  else
    return("B")
}
```
In fact, the previous rule can be thought of as a very simple decision tree with only one root node. Surprisingly, sometimes simple rules can be difficult to beat by more complex models. In this section I’ve been focusing on classification problems, but dummy models can also be constructed for **regression**. The simplest one would be to predict the mean value of \\(y\\) regardless of the feature values. Another dummy model could predict a random value between the min and max of \\(y\\). If there is a categorical feature, one could predict the mean value based on the category. In fact, that is what we did in chapter [1](intro.html#intro) in the simple regression example.
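For completeness, here is a sketch of the simplest dummy regressor just described: it learns the mean of \\(y\\) at training time and predicts that constant for every test instance. The function names are illustrative only.

```
# Sketch of a mean-predicting dummy regressor.
mean.regressor.train <- function(y){
  return(mean(y)) # The only learned parameter is the mean of y.
}
mean.regressor.predict <- function(params, data){
  # Predict the same mean value for every row of the test data.
  return(rep(params, nrow(data)))
}
```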
In summary, one can construct any type of dummy model depending on the application. The takeaway is that dummy models allow us to assess how more complex models perform with respect to some baselines, and they help us to detect possible problems in the data and features. What I typically do when solving a problem is to start with simple models and/or rules and then try more complex models. Of course, manual thresholds and simple rules can work remarkably well in some situations, but they are not scalable. Depending on the use case, one can just implement the simple solution or go for something more complex if the system is expected to grow or be used in more general ways.
2\.7 Summary
------------
This chapter focused on **classification** models. Classifiers predict a category based on the input features. Here, it was demonstrated how classifiers can be used to detect indoor locations, classify activities, and hand gestures.
* **\\(k\\)\-Nearest Neighbors (\\(k\\)\-NN)** predicts the class of a test point as the majority class of the \\(k\\) nearest neighbors.
* Some classification performance metrics are **recall**, **specificity**, **precision**, **accuracy**, **F1\-score**, etc.
* **Decision trees** are easy\-to\-interpret classifiers trained recursively based on feature importance (for example, purity).
* **Naive Bayes** is a type of classifier where features are assumed to be independent.
* **Dynamic Time Warping (DTW)** computes the similarity between two timeseries after aligning them in time. This can be used for classification for example, in combination with \\(k\\)\-NN.
* **Dummy models** can help to spot possible errors in the data and can also be used as baselines.
2\.1 *k*\-Nearest Neighbors
---------------------------
\\(k\\)\-Nearest Neighbors (\\(k\\)\-NN) is one of the simplest classification algorithms. The predicted class for a given *query instance* is the most common class of its *k* nearest neighbors. A *query instance* is just the instance we want to make predictions on. In its most basic form, the algorithm consists of two steps:
1. Compute the distance between the *query instance* and all *training instances*.
2. Return the most common class label among the *k* nearest training instances (neighbors).
This is a type of *lazy\-learning* algorithm because all the computations take place at prediction time. There are no parameters to learn at training time! The training phase consists only of storing the training instances so they can be compared to the query instance at prediction time. The hyper\-parameter *k* is usually specified by the user and depends on each application. We also need to specify a *distance function* that returns small distances for similar instances and big distances for very dissimilar instances. For numeric features, the **Euclidean distance** is one of the most commonly used distance functions. The Euclidean distance between two points can be computed as follows:
\\\[\\begin{equation}
d\\left(p,q\\right) \= \\sqrt{\\sum\_{i\=1}^n{\\left(p\_i\-q\_i\\right)^2}}
\\tag{2\.1}
\\end{equation}\\]
where \\(p\\) and \\(q\\) are \\(n\\)\-dimensional feature vectors and \\(i\\) is the index to the vectors’ elements. Figure [2\.1](classification.html#fig:simpleKnn) shows the idea graphically (adapted from the \\(k\\)\-NN article[4](#fn4) in Wikipedia). The query instance is depicted with the ‘?’ symbol. If we choose \\(k\=3\\) (represented by the inner dashed circle) the predicted class is *‘square’* because there are two squares but only one circle. If \\(k\=5\\) (outer dotted circle), the predicted class is *‘circle’*.
FIGURE 2\.1: \\(k\\)\-NN example for \\(k\=3\\) (inner dashed circle) and \\(k\=5\\) (dotted outer circle). (Adapted from Antti Ajanki AnAj. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
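Equation [(2\.1\)](classification.html#fig:simpleKnn) translates directly into R. The sketch below assumes `p` and `q` are numeric vectors of equal length; the function name is illustrative only.

```
# Euclidean distance between two feature vectors.
euclidean <- function(p, q){
  sqrt(sum((p - q)^2))
}
euclidean(c(0, 0), c(3, 4))
#> [1] 5
```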
Typical values for \\(k\\) are small odd numbers like \\(1, 3, 5\\) (odd values help avoid ties when voting). The \\(k\\)\-NN algorithm can also be used for regression with a small modification: instead of returning the majority class of the nearest neighbors, return the mean value of their response variable, as sketched below. Despite its simplicity, \\(k\\)\-NN has proved to perform really well in many tasks, including time series classification ([Xi et al. 2006](#ref-xi2006)).
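Here is a minimal sketch of that regression variant, assuming a single numeric feature and hypothetical function and variable names.

```
# Sketch: k-NN regression with one numeric feature.
knn.regression <- function(train.x, train.y, query, k = 3){
  distances <- abs(train.x - query) # Distances to the query point.
  # Indices of the k nearest training points.
  nn <- sort(distances, index.return = TRUE)$ix[1:k]
  return(mean(train.y[nn])) # Mean response of the neighbors.
}
```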
### 2\.1\.1 Indoor Location with Wi\-Fi Signals
`indoor_classification.R` `indoor_auxiliary.R`
You might already have experienced some trouble with geolocation services when you are inside a building. This is partly because GPS technologies do not provide good accuracy indoors due to several sources of interference. For some applications, it would be beneficial to have accurate location estimations inside buildings, even at room level, for example, in domotics and localization services in big public places like airports or shopping malls. Having good indoor location estimates can also be used in behavior analysis, such as extracting trajectory patterns.
In this section, we will implement \\(k\\)\-NN to perform indoor location in a building based on Wi\-Fi signals. For instance, we can use a smartphone to scan the nearby Wi\-Fi access points and based on this information, determine our location at room\-level. This can be formulated as a classification problem: Given a set of Wi\-Fi signals as input, predict the location where the device is located.
For this classification problem, we will use the *INDOOR LOCATION* dataset (see Appendix [B](appendixDatasets.html#appendixDatasets)) which was collected with an Android smartphone. The smartphone application scans the nearby access points and stores their information and label. The label is provided by the user and represents the room where the device is located. Several instances for every location were recorded. To generate each instance, the device scans and records the MAC address and signal strength of the nearby access points. A delay of \\(500\\) ms is set between scans. For each location, approximately \\(3\\) minutes of data were collected while the user walked in the specific room. Figure [2\.2](classification.html#fig:layoutHouse) depicts the layout of the building where the data was collected. The data has four different locations: *‘bedroomA’*, *‘bedroomB’*, *‘tvroom’*, and the *‘lobby’*. The lobby (not shown in the layout) is at the same level as bedroom A but on the first floor.
FIGURE 2\.2: Layout of the apartments building. (Adapted by permission from Springer: Lecture Notes in Computer Science, Contextualized Hand Gesture Recognition with Smartphones, Garcia\-Ceja E., Brena R., Galván\-Tejada C.E., 2014, [https://doi.org/10\.1007/978\-3\-319\-07491\-7\_13](https://doi.org/10.1007/978-3-319-07491-7_13)).
Table [2\.1](classification.html#tab:headWifi) shows the first rows of the dataset. The first column is the class. The `scanid` column is a unique identifier for the given Wi\-Fi scan (instance). To preserve privacy, MAC addresses were converted into integer values. Every instance is composed of several rows. For example, the first instance with `scanid=1` has two rows (one row per MAC address). Intuitively, the same location should have similar MAC addresses across scans. From the table, we can see that at *bedroomA*, access points with MAC addresses \\(1\\) and \\(2\\) are usually found by the device.
TABLE 2\.1: First rows of Wi\-Fi scans.
| locationid | scanid | mac | signalstrength |
| --- | --- | --- | --- |
| bedroomA | 1 | 1 | \-88\.50 |
| bedroomA | 1 | 2 | \-91\.00 |
| bedroomA | 2 | 1 | \-88\.00 |
| bedroomA | 2 | 2 | \-90\.00 |
| bedroomA | 3 | 1 | \-87\.62 |
| bedroomA | 3 | 2 | \-90\.00 |
| bedroomA | 4 | 2 | \-90\.25 |
| bedroomA | 4 | 1 | \-90\.00 |
| bedroomA | 4 | 3 | \-91\.00 |
Since each instance is composed of several rows, we will convert our data frame into a list of lists where each inner list represents a single instance with the class (`locationId`), a unique id, and a data frame with the corresponding access points. The example code can be found in the script `indoor_classification.R`.
```
# Read Wi-Fi data
df <- read.csv(datapath, stringsAsFactors = F)
# Convert data frame into a list of lists.
# Each inner list represents one instance.
dataset <- wifiScansToList(df)
# Print number of instances in the dataset.
length(dataset)
#> [1] 365
# Print the first instance.
dataset[[1]]
#> $locationId
#> [1] "bedroomA"
#>
#> $scanId
#> [1] 1
#>
#> $accessPoints
#> mac signalstrength
#> 1 1 -88.5
#> 2 2 -91.0
```
First, we read the dataset from the CSV file and store it in the data frame `df`. To make things easier, the data frame is converted into a list of lists using the auxiliary function `wifiScansToList()`, which is defined in the script `indoor_auxiliary.R`. Next, we print the number of instances in the dataset, that is, the number of lists. The dataset contains \\(365\\) instances. The \\(365\\) is just a coincidence; the data was not collected every day for a year but on a single day. Next, we extract the first instance with `dataset[[1]]`. Here, we see that each instance has three pieces of information: the class (`locationId`), a unique id (`scanId`), and a set of access points stored in a data frame. The first instance has two access points with MAC addresses \\(1\\) and \\(2\\). There is also information about the signal strength, though it will not be used here.
Since we would expect that similar locations have similar MAC addresses and locations that are far away from each other have different MAC addresses, we need a distance measure that captures this notion of similarity. In this case, we cannot use the Euclidean distance on MAC addresses. Even though they were encoded as integer values, they do not represent magnitudes but unique identifiers. Each instance is composed of a set of \\(n\\) MAC addresses stored in the `accessPoints` data frame. To compute the distance between two instances (two sets) we can use the *Jaccard distance*. This distance is based on element sets:
\\\[\\begin{equation}
j\\left(A,B\\right)\=\\frac{\\left\|A\\cup B\\right\|\-\\left\|A\\cap B\\right\|}{\\left\|A\\cup B\\right\|}
\\tag{2\.2}
\\end{equation}\\]
where \\(A\\) and \\(B\\) are sets of MAC addresses. A **set** is an unordered collection of elements with no repetitions. As an example, let’s say we have two sets, \\(S\_1\\) and \\(S\_2\\):
\\\[\\begin{align\*}
S\_1\&\=\\{a,b,c,d,e\\}\\\\
S\_2\&\=\\{e,f,g,a\\}
\\end{align\*}\\]
The set \\(S\_1\\) has \\(5\\) elements (letters) and \\(S\_2\\) has \\(4\\) elements. \\(A \\cup B\\) means the **union** of the two sets and its result is the set of all elements that are either in \\(A\\) or \\(B\\). For instance, the union of \\(S\_1\\) and \\(S\_2\\) is \\(S\_1 \\cup S\_2 \= \\{a,b,c,d,e,f,g\\}\\). The \\(A \\cap B\\) denotes the **intersection** between \\(A\\) and \\(B\\) which is the set of elements that are in both \\(A\\) and \\(B\\). In our example, \\(S\_1 \\cap S\_2 \= \\{a,e\\}\\). Finally the vertical bars \\(\|\|\\) mean the **cardinality** of the set, that is, its number of elements. The cardinality of \\(S\_1\\) is \\(\|S\_1\|\=5\\) because it has \\(5\\) elements. The cardinality of the union of the two sets \\(\|S\_1 \\cup S\_2\|\=7\\) because this set has \\(7\\) elements.
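These set operations are directly available in R, so we can verify the example interactively:

```
S1 <- c("a", "b", "c", "d", "e")
S2 <- c("e", "f", "g", "a")
union(S1, S2)
#> [1] "a" "b" "c" "d" "e" "f" "g"
intersect(S1, S2)
#> [1] "a" "e"
length(union(S1, S2)) # Cardinality of the union.
#> [1] 7
```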
In R, we can implement the Jaccard distance as follows:
```
jaccardDistance <- function(set1, set2){
lengthUnion <- length(union(set1, set2))
lengthIntersection <- length(intersect(set1, set2))
d <- (lengthUnion - lengthIntersection) / lengthUnion
return(d)
}
```
The implementation is in the script `indoor_auxiliary.R`. Now, we can try our function! Let’s compute the distance between two instances of the same class (*‘bedroomA’*).
```
# Compute jaccard distance between instances with same class:
# (bedroomA)
jaccardDistance(dataset[[1]]$accessPoints$mac,
dataset[[4]]$accessPoints$mac)
#> [1] 0.3333333
```
Now let’s try to compute the distance between instances with different classes.
```
# Jaccard distance of instances with different class:
# (bedroomA and bedroomB)
jaccardDistance(dataset[[1]]$accessPoints$mac,
dataset[[210]]$accessPoints$mac)
#> [1] 0.6666667
```
The distance between instances of the same class was \\(0\.33\\), whereas the distance between instances of different classes was \\(0\.66\\). So, our function is working as expected.
In the extreme case when the sets \\(A\\) and \\(B\\) are identical, the distance will be \\(0\\). When there are no common elements in the sets, the distance will be \\(1\\). Armed with this distance metric, we can now implement the \\(k\\)\-NN function in R. The `knn_classifier()` implementation is in the script `indoor_auxiliary.R`. Its first argument is the dataset (the list of instances). The second argument, *k*, is the number of nearest neighbors to use, and the last two arguments are the indices of the train and test instances, respectively. These indices are pointers to the elements in the `dataset` variable.
```
knn_classifier <- function(dataset, k, trainSetIndices, testSetIndices){
groundTruth <- NULL
predictions <- NULL
for(queryInstance in testSetIndices){
distancesToQuery <- NULL
for(trainInstance in trainSetIndices){
jd <- jaccardDistance(dataset[[queryInstance]]$accessPoints$mac,
dataset[[trainInstance]]$accessPoints$mac)
distancesToQuery <- c(distancesToQuery, jd)
}
indices <- sort(distancesToQuery, index.return = TRUE)$ix
indices <- indices[1:k]
# Indices of the k nearest neighbors
nnIndices <- trainSetIndices[indices]
# Get the actual instances
nnInstances <- dataset[nnIndices]
# Get their respective classes
nnClasses <- sapply(nnInstances, function(e){e[[1]]})
prediction <- Mode(nnClasses)
predictions <- c(predictions, prediction)
groundTruth <- c(groundTruth,
dataset[[queryInstance]]$locationId)
}
return(list(predictions = predictions,
groundTruth = groundTruth))
}
```
For each instance `queryInstance` in the test set, `knn_classifier()` computes its Jaccard distance to every instance in the train set and stores those distances in `distancesToQuery`. Then, those distances are sorted in ascending order and the most common class among the first \\(k\\) elements is returned as the predicted class. The function `Mode()` returns the most common element. Finally, `knn_classifier()` returns a list with the predictions for every instance in the test set and their respective ground truth classes for evaluation.
Now, we can try our classifier. We will use \\(70\\%\\) of the dataset as train set and the remaining as the test set.
```
# Total number of instances
numberInstances <- length(dataset)
# Set seed for reproducibility
set.seed(12345)
# Split into train and test sets.
trainSetIndices <- sample(1:numberInstances,
size = round(numberInstances * 0.7),
replace = F)
testSetIndices <- (1:numberInstances)[-trainSetIndices]
```
The function `knn_classifier()` predicts the class for each test set instance and returns a list with their predictions and their ground truth classes. With this information, we can compute the *accuracy* on the test set which is the percentage of correctly classified instances. In this example, we set \\(k\=3\\).
```
# Obtain predictions on the test set.
result <- knn_classifier(dataset,
k = 3,
trainSetIndices,
testSetIndices)
# Calculate and print accuracy.
sum(result$predictions == result$groundTruth) /
length(result$predictions)
#> [1] 0.9454545
```
Not bad! Our simple \\(k\\)\-NN algorithm achieved an accuracy of \\(94\.5\\%\\). Usually, it is a good idea to visualize the predictions to get a better understanding of the classifier’s behavior. **Confusion matrices** allow us to do exactly that. We can use the `confusionMatrix()` function from the `caret` package to generate a confusion matrix. Its first argument is a factor with the predictions and the second one is a factor with the corresponding true values. This function returns an object with several performance metrics (see next section) and the confusion matrix. The actual confusion matrix is stored in the `table` element of the returned object.
```
library(caret)
cm <- confusionMatrix(factor(result$predictions),
factor(result$groundTruth))
cm$table # Access the confusion matrix.
#> Reference
#> Prediction bedroomA bedroomB lobby tvroom
#> bedroomA 26 0 3 1
#> bedroomB 0 17 0 1
#> lobby 0 1 28 0
#> tvroom 0 0 0 33
```
The columns of the confusion matrix represent the true classes and the rows the predictions. For example, from the total \\(31\\) instances of type *‘lobby’*, \\(28\\) were correctly classified as *‘lobby’* while \\(3\\) were misclassified as *‘bedroomA’*. Something I find useful is to plot the confusion matrix as proportions instead of counts (Figure [2\.3](classification.html#fig:wifiCM)). From this confusion matrix we see that for the class *‘bedroomB’*, \\(94\\%\\) of the instances were correctly classified while \\(6\\%\\) were mislabeled as *‘lobby’*. On the other hand, instances of type *‘bedroomA’* were always classified correctly.
FIGURE 2\.3: Confusion matrix for location predictions.
A confusion matrix is a good way to analyze the classification results per class and it helps to spot weaknesses which can be used to improve the model, for example, by extracting additional features.
2\.2 Performance Metrics
------------------------
Performance metrics allow us to assess the generalization performance of a model from different angles. The most common performance metric for classification is the accuracy:
\\\[\\begin{equation}
accuracy \= \\frac{\\\# \\textrm{ correctly classified instances}}{\\textrm{total } \\\# \\textrm{ instances}}
\\tag{2\.3}
\\end{equation}\\]
In order to have a better understanding of the generalization performance of a model, it is a good practice to compute several performance metrics in addition to the accuracy. Accuracy also has some limitations, especially in highly imbalanced datasets. The following metrics provide different views of a model’s performance for the binary case (when there are only two classes). These metrics can be extended to the multi\-class setting using a *one vs. all* approach. That is, compare each class to the remaining classes.
Before introducing the other metrics, it is convenient to define some terms:
* True positives (TP): Positive examples classified as positives.
* True negatives (TN): Negative examples classified as negatives.
* False positives (FP): Negative examples misclassified as positives.
* False negatives (FN): Positive examples misclassified as negatives.
For the binary classification case, it is up to you to decide which one is the positive class. For example, if your problem is about detecting falls and you have two classes, *‘fall’* and *‘nofall’*, then considering *‘fall’* as the positive class makes sense since this is the one you are most interested in detecting. The following is a list of commonly used metrics in classification:
**Recall:** The proportion of positives that are classified as such, where \\(\\textrm{P} \= \\textrm{TP} \+ \\textrm{FN}\\) is the total number of positive examples. Alternative names for recall are **true positive rate**, **sensitivity**, and **hit rate**. In fact, the diagonal of the confusion matrix with proportions of the indoor location example shows the recall for each class (Figure [2\.3](classification.html#fig:wifiCM)).
\\\[\\begin{equation}
recall \= \\frac{\\textrm{TP}}{\\textrm{P}}
\\tag{2\.4}
\\end{equation}\\]
**Specificity:** The proportion of negatives classified as such, where \\(\\textrm{N} \= \\textrm{TN} \+ \\textrm{FP}\\) is the total number of negative examples. It is also called the **true negative rate**.
\\\[\\begin{equation}
specificity \= \\frac{\\textrm{TN}}{\\textrm{N}}
\\tag{2\.5}
\\end{equation}\\]
**Precision:** The fraction of true positives among those classified as positives. Also known as the **positive predictive value**.
\\\[\\begin{equation}
precision \= \\frac{\\textrm{TP}}{\\textrm{TP \+ FP}}
\\tag{2\.6}
\\end{equation}\\]
**F1\-score:** This is the harmonic mean of precision and recall.
\\\[\\begin{equation}
\\textit{F1\-score} \= 2 \\cdot \\frac{\\textrm{precision} \\cdot \\textrm{recall}}{\\textrm{precision \+ recall}}
\\tag{2\.7}
\\end{equation}\\]
The `confusionMatrix()` function from the `caret` package computes several of those metrics. From our previous confusion matrix object, we can inspect those metrics by class.
```
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: bedroomA 1.0000000 0.9523810 0.8666667 0.9285714
#> Class: bedroomB 0.9444444 0.9891304 0.9444444 0.9444444
#> Class: lobby 0.9032258 0.9873418 0.9655172 0.9333333
#> Class: tvroom 0.9428571 1.0000000 1.0000000 0.9705882
```
The mean of the metrics across all classes can be computed by taking the mean for each column of the returned object:
```
colMeans(cm$byClass[,c("Recall", "Specificity", "Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.9476318 0.9822133 0.9441571 0.9442344
```
### 2\.2\.1 Confusion Matrix
As briefly introduced in the previous section, a *confusion matrix* provides a nice way to understand the model’s predictions and spot where it made mistakes. Figure [2\.4](classification.html#fig:binaryCM) shows a confusion matrix for the binary case. The columns represent the true classes and the rows the predicted classes. The **P** stands for the positive cases and the **N** for the negative ones. Each entry in the matrix corresponds to the TP, TN, FP, and FN. The TP and TN are the correct classifications whereas the FN and FP are the misclassifications.
FIGURE 2\.4: Confusion matrix for the binary case. P: positives, N: negatives.
Figure [2\.5](classification.html#fig:binaryCM2) shows a concrete example of a confusion matrix derived from a list of \\(15\\) instances with their predictions and their corresponding true values (ground truth). For example, the first element in the list is a **P** and it was correctly classified as a **P**. The eighth element is a **P** but it was misclassified as an **N**. The associated confusion matrix for these ground truth and predicted classes is shown at the bottom.
There are \\(7\\) true positives and \\(3\\) true negatives. In total, \\(10\\) instances were correctly classified (TP and TN) and \\(5\\) were misclassified (FP and FN). From this matrix we can calculate the total number of positive instances by summing the first column: \\(10\\) in this case (\\(7\\) TP plus \\(3\\) FN). The total number of negative instances is obtained by summing the second column: \\(5\\) in this case (\\(2\\) FP plus \\(3\\) TN). Having this information we can compute any of the previous performance metrics: accuracy, recall, specificity, precision, and F1\-score.
FIGURE 2\.5: A concrete example of a confusion matrix for the binary case. P:positives, N:negatives.
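To make the formulas concrete, here are all the metrics computed directly from the four counts of this example:

```
# Metrics for the example of Figure 2.5.
TP <- 7; TN <- 3; FP <- 2; FN <- 3
accuracy <- (TP + TN) / (TP + TN + FP + FN)
recall <- TP / (TP + FN)
specificity <- TN / (TN + FP)
precision <- TP / (TP + FP)
f1 <- 2 * precision * recall / (precision + recall)
c(accuracy, recall, specificity, precision, f1)
#> [1] 0.6666667 0.7000000 0.6000000 0.7777778 0.7368421
```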
Be aware that there is no standard that defines whether the true classes or the predicted classes go in the rows or columns, so you need to check this every time you encounter a new confusion matrix.
`shiny_metrics.R` This shiny app demonstrates how different performance metrics behave when the confusion matrix values change.
2\.3 Decision Trees
-------------------
Decision trees are powerful predictive models (especially when combining several of them, see chapter [3](ensemble.html#ensemble)) used for classification and regression tasks. Here, the focus will be on classification. Each node in a tree represents partial or final decisions based on a single feature. If a node is a leaf, then it represents a final decision. A leaf is simply a terminal node, i.e., it has no child nodes. Given a feature vector representing an instance, the predicted class is obtained by testing the feature values and following the tree path until a leaf is reached. Figure [2\.6](classification.html#fig:treeExample) exemplifies a query instance with an unknown class (left) and a decision tree (right). To predict the class of an unknown instance, its features are evaluated starting at the root of the tree. In this case *number\_wheels* is \\(4\\) in the query instance so we take the left path from the root. Now, we need to evaluate *weight*. This time the test is false since the weight is \\(2300\\) and we take the right path. Since this is a leaf node the final predicted class is *‘truck’*. Usually, small trees (small depth) are preferable because they are easier to visualize and interpret and are less prone to overfitting. The example tree has a depth of \\(2\\). Had the number of wheels been \\(2\\) instead of \\(4\\), then testing the *weight* feature would not have been necessary.
FIGURE 2\.6: Example decision tree. The query instance is classified as truck by this tree.
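As a minimal sketch, the traversal above can be written as nested conditions in R. The weight threshold and the labels of the leaves not visited in the example are not given in the text, so they are hypothetical:

```
# Hypothetical encoding of the example tree as nested conditions.
classify.vehicle <- function(number_wheels, weight){
  if(number_wheels == 4){
    # Left path from the root: test the weight.
    if(weight < 1000) "car" else "truck"  # Threshold is hypothetical.
  }else{
    "motorcycle"  # Hypothetical label for the other leaf.
  }
}
classify.vehicle(number_wheels = 4, weight = 2300)
#> [1] "truck"
```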
As shown in the example, decision trees are easy to interpret and the final result can be explained by just following the path. Now let’s see how these decision trees are learned from data. Consider the following artificial *concert* dataset (Figure [2\.7](classification.html#fig:concertTable)).
FIGURE 2\.7: Concert dataset.
The first four variables are features and the last column is the class. The class is the decision whether or not we should go to a music concert based on the other variables. In this case, all variables are binary except *Price* which has three possible values: *low*, *medium*, and *high*.
* *Tired:* Indicates whether the person is tired or not.
* *Rain:* Whether it is raining or not.
* *Metal:* Indicates whether this is a heavy metal concert or not.
* *Price:* Ticket price.
* *Go:* The decision of whether to go to the music concert or not.
The main question when building a tree is which feature should be at the root (top). Once you answer this question, you may need to grow the tree by adding another feature (node) as one of the root’s children. To decide which new feature to add you need to answer the same first question: “What feature should be at the root of this subtree?”. This is a recursive definition! The tree keeps growing until you reach a leaf node, there are no more features to select from, or you have reached a predefined maximum depth.
For the *concert* dataset we need to find which is the best variable to be placed at the root. Let’s suppose we need to choose between *Price* and *Metal*. Figure [2\.8](classification.html#fig:treeAlgo1) shows these two possibilities.
FIGURE 2\.8: Two example trees with one variable split by Price (left) and Metal (right).
If we select *Price*, there are three possible subnodes, one for each value: *low*, *medium*, and *high*. If *Price* is *low* then four instances fall into this subtree (the first four from the table). For all of them, the value of *Go* is \\(1\\). If *Price* is *high*, two instances fall into this category and their *Go* value is \\(0\\), thus if the price is high then you should not go to the concert according to this data. There are six instances for which the *Price* value is *medium*. From those, two of them have *Go\=1* and the remaining four have *Go\=0*. For cases when the price is *low* or *high* we can arrive at a solution. If the price is *low* then go to the concert, if the price is *high* then do not go. However, if the price is *medium* it is still not clear what to do since this subnode is not *pure*. That is, the labels of the instances are mixed: two with an output of \\(1\\) and four with an output of \\(0\\). In this case we can try to use another feature to decide and grow the tree but first, let’s look at what happens if we decide to use *Metal* as the first feature at the root. In this case, we end up with two subsets with six instances each. For each subnode, it is still not clear what decision to take because the output is ‘mixed’ (Go: 3, NotGo: 3\). At this point we would need to continue growing the tree below each subnode.
Intuitively, it seems like *Price* is a better feature since its subnodes are more *pure*. Then we can use another feature to split the instances whose *Price* is *medium*. For example, using the *Metal* variable. Figure [2\.9](classification.html#fig:treeAlgo2) shows how this would look. Since one of the subnodes of *Metal* is still not pure we can further split it using the *Rain* variable, for example. At this point, we cannot split any further. Note that the *Tired* variable was never used.
FIGURE 2\.9: Tree splitting example. Left:tree splits. Right:Highlighted instances when splitting by Price and Metal.
So far, we have chosen the root variable based on which one looks more pure but to automate the process, we need a way to measure this *purity* in a quantitative manner. One way to do that is by using the *entropy*. *Entropy* is a measure of uncertainty from information theory. It is \\(0\\) when there is no uncertainty and \\(1\\) when there is complete uncertainty (for a binary variable, using base \\(2\\) logarithms). The entropy of a discrete variable \\(X\\) with values \\(x\_1\\dots x\_n\\) and probability mass function \\(P(X)\\) is:
\\\[\\begin{equation}
H(X) \= \-\\sum\_{i\=1}^n{P(x\_i)log P(x\_i)}
\\tag{2\.8}
\\end{equation}\\]
Take for example a fair coin with probability of heads and tails \= \\(0\.5\\) each. The entropy for that coin is:
\\[\\begin{equation\*}
H(X) \= \- (0\.5\)log(0\.5\) \- (0\.5\)log(0\.5\) \= 1
\\end{equation\*}\\]
Since we do not know what the result will be when we toss the coin, the entropy is maximum. Now consider the extreme case when the coin is biased such that the probability of heads is \\(1\\) and the probability of tails is \\(0\\). The entropy in this case is zero:
\\[\\begin{equation\*}
H(X) \= \- (1\)log(1\) \- (0\)log(0\) \= 0
\\end{equation\*}\\]
If we know that the result is always going to be heads, then there is no uncertainty when the coin is tossed (by convention, \\(0 \\cdot log(0\)\\) is taken to be \\(0\\)). The entropy of \\(p\\) positive examples and \\(n\\) negative examples is:
\\[\\begin{equation}
H(p, n) \= \- (\\frac{p}{p\+n})log(\\frac{p}{p\+n}) \- (\\frac{n}{p\+n})log(\\frac{n}{p\+n})
\\tag{2\.9}
\\end{equation}\\]
Thus, we can use this to compute the entropy for the three possible values of *Price* with respect to the class. The positives are the instances where *Go\=1* and the negatives are the instances where *Go\=0*:
\\[\\begin{equation\*}
H\_{price\=low}(4, 0\) \= \- (\\frac{4}{4\+0})log(\\frac{4}{4\+0}) \- (\\frac{0}{4\+0})log(\\frac{0}{4\+0}) \= 0
\\end{equation\*}\\]
\\[\\begin{equation\*}
H\_{price\=medium}(2, 4\) \= \- (\\frac{2}{2\+4})log(\\frac{2}{2\+4}) \- (\\frac{4}{2\+4})log(\\frac{4}{2\+4}) \= 0\.918
\\end{equation\*}\\]
\\[\\begin{equation\*}
H\_{price\=high}(0, 2\) \= \- (\\frac{0}{0\+2})log(\\frac{0}{0\+2}) \- (\\frac{2}{0\+2})log(\\frac{2}{0\+2}) \= 0
\\end{equation\*}\\]
The average of those three can be calculated by taking into account the number of corresponding instances for each value and the total number of instances (\\(12\\)):
\\\[\\begin{equation\*}
meanH(price) \= (4/12\)(0\) \+ (6/12\)(0\.918\) \+ (2/12\)(0\) \= 0\.459
\\end{equation\*}\\]
Before deciding to split on *Price* the entropy of the entire dataset is \\(1\\) since there are six positive and six negative examples:
\\\[\\begin{equation\*}
H(6,6\) \= 1
\\end{equation\*}\\]
Now we can compute the *information gain* for *Price*. Intuitively, the information gain tells you how powerful this variable is at dividing the instances based on their class, that is, how much you are learning:
\\\[\\begin{equation\*}
infoGain(Price) \= 1 \- meanH(Price) \= 1 \- 0\.459 \= 0\.541
\\end{equation\*}\\]
Since you want to learn fast, you want your root node to be the one with the highest information gain. For the rest of the variables the information gain is:
\\(infoGain(Tired) \= 0\\)
\\(infoGain(Rain) \= 0\.020\\)
\\(infoGain(Metal) \= 0\\)
The highest information gain is produced by *Price*, thus, it is selected as the root node. Then, the process continues recursively for each branch but excluding *Price*. Since branches with values *low* and *high* are already done, we only need to further split *medium*. Sometimes it is not possible to have completely pure nodes like with *low* and *high*. This can happen for example, when there are no more attributes left or when two or more instances have the same feature values but different labels. In those situations the final prediction is the most common label (majority vote).
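These calculations are easy to reproduce in R. The following is a minimal sketch, assuming we encode the *Price* and *Go* columns of the *concert* dataset as plain vectors using the counts given above:

```
# Entropy of a vector of class labels (base 2 logarithm).
entropy <- function(y){
  p <- table(y) / length(y)
  p <- p[p > 0] # By convention, 0*log(0) is taken as 0.
  -sum(p * log2(p))
}
# Encoding of the Price and Go columns of the concert dataset.
price <- c(rep("low", 4), rep("medium", 6), rep("high", 2))
go <- c(rep(1, 4), 1, 1, 0, 0, 0, 0, 0, 0)
# Weighted mean entropy after splitting by Price.
meanH <- sum(sapply(unique(price), function(v){
  idx <- price == v
  (sum(idx) / length(price)) * entropy(go[idx])
}))
# Information gain of Price.
entropy(go) - meanH
#> [1] 0.5408521
```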
There exist many implementations of decision trees. Some implementations choose the splits using entropy (as shown here) while others use the Gini index, for example. Each implementation also treats numeric variables in different ways. Pruning the tree with different techniques is also common in order to reduce its size.
Some of the most common implementations are C4\.5 trees ([Quinlan 2014](#ref-quinlan2014)) and CART ([Steinberg and Colla 2009](#ref-steinberg2009)). The latter is implemented in the `rpart` R package ([Therneau and Atkinson 2019](#ref-rpart)) which will be used in the following section to build a model that predicts physical activities from smartphone sensor data.
### 2\.3\.1 Activity Recognition with Smartphones
`smartphone_activities.R`
As mentioned in the introduction, an example of behavior is an observable physical activity. We can infer what **physical activity** someone is doing by looking at her/his body movements. Observing physical activities can provide useful behavioral and contextual information about someone. This can also be used as a proxy to, for example, infer someone’s health condition by detecting deviations in activity patterns.
Nowadays, most smartphones come with a tri\-axial accelerometer sensor. This sensor measures gravitational forces from the \\(x\\), \\(y\\), and \\(z\\) axes. This information can be used to capture movement patterns from the user and automate the process of monitoring the type of physical activity being performed.
In this section, we will use decision trees to automatically classify physical activities from acceleration data. We will use the *WISDM* dataset[5](#fn5) and from now on, I will refer to it as the *SMARTPHONE ACTIVITIES* dataset. It contains acceleration recordings that were collected with a smartphone and was made available by Kwapisz, Weiss, and Moore ([2010](#ref-kwapisz2010)). The dataset has \\(6\\) different activities: *‘walking’*, *‘jogging’*, *‘walking upstairs’*, *‘walking downstairs’*, *‘sitting’* and *‘standing’*. The data were collected by \\(36\\) volunteers with an Android phone located in their pants pocket and with a sampling rate of \\(20\\) Hz (\\(1\\) sample every \\(50\\) milliseconds).
The dataset contains two types of files. One with the raw accelerometer data and the other one after feature extraction. Figure [2\.10](classification.html#fig:wisdmFirstLines) shows the first \\(10\\) lines of the raw accelerometer values of the first file. The first column is the id of the user that collected the data and the second column is the class. The third column is the timestamp and the remaining columns are the \\(x\\), \\(y\\), and \\(z\\) accelerometer values, respectively.
FIGURE 2\.10: First 10 lines of raw accelerometer data.
Usually, classification models are not trained with the raw data but with *feature vectors* extracted from the raw data. Feature vectors have the advantage of being more compact, thus, making the learning phase more efficient. For activity recognition, the feature extraction process consists of defining a moving window of size \\(w\\) that starts at position \\(i\\). At the beginning, \\(i\\) is the index pointing to the first accelerometer reading. Then, \\(n\\) statistical features are computed on the elements covered by the window, such as the mean, standard deviation, \\(0\\)\-crossings, etc. This will produce an \\(n\\)\-dimensional feature vector and the process is repeated by moving the window \\(s\\) steps forward. Typical values of \\(s\\) are such that the overlap between the previous window position and the next one is about \\(30\\%\\) to \\(50\\%\\). An overlap of \\(0\\) is also typical, that is, \\(s \= w\\). Figure [2\.11](classification.html#fig:featureExtraction) depicts the process.
FIGURE 2\.11: Moving window for feature extraction.
Once we have the set of feature vectors and their associated class labels, we can use them to train a classifier and make predictions on new data (Figure [2\.12](classification.html#fig:extractedFeatureVectors)).
FIGURE 2\.12: The extracted feature vectors are used to train a classifier.
For this example, we will use the file with features already extracted. The authors used windows of \\(10\\) seconds which is equivalent to \\(200\\) observations given the \\(20\\) Hz sampling rate and they used \\(0\\%\\) overlap. From each window, they extracted \\(43\\) features such as the mean, standard deviation, absolute deviations, etc.
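Although we will load the precomputed features, the extraction process itself is straightforward. Below is a minimal sketch on a synthetic signal, using the same \\(200\\)\-sample windows (\\(10\\) seconds at \\(20\\) Hz) with \\(0\\%\\) overlap; the three statistics are only illustrative (the authors extracted \\(43\\)):

```
# Minimal sketch of moving-window feature extraction.
extract.features <- function(acc, w = 200, s = 200){
  starts <- seq(1, length(acc) - w + 1, by = s)
  t(sapply(starts, function(i){
    win <- acc[i:(i + w - 1)]
    # Illustrative statistics computed on each window.
    c(mean = mean(win), sd = sd(win), max = max(win))
  }))
}
set.seed(1234)
acc <- rnorm(1000) # Synthetic signal instead of real recordings.
extract.features(acc)
```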
Let’s read and print the first rows of the dataset. The script for this section is `smartphone_activities.R`. The data frame has several columns, but we only print the first five features and the class which is stored in the last column.
```
# Read data.
df <- read.csv(datapath,stringsAsFactors = F)
# Some code to clean the dataset.
# (cleaning code not shown here).
# Print first rows of the dataset.
head(df[,c(1:5,40)])
#> X0 X1 X2 X3 X4 class
#> 1 0.04 0.09 0.14 0.12 0.11 Jogging
#> 2 0.12 0.12 0.06 0.07 0.11 Jogging
#> 3 0.14 0.09 0.11 0.09 0.09 Jogging
#> 4 0.06 0.10 0.09 0.09 0.11 Walking
#> 5 0.12 0.11 0.10 0.08 0.10 Walking
#> 6 0.09 0.09 0.10 0.12 0.08 Walking
#> 7 0.12 0.12 0.12 0.13 0.15 Upstairs
#> 8 0.10 0.10 0.10 0.10 0.11 Upstairs
#> 9 0.08 0.07 0.08 0.08 0.05 Upstairs
```
Our aim is to predict the class based on all the numeric features. We will use the `rpart` package ([Therneau and Atkinson 2019](#ref-rpart)) which implements classification and regression trees. We will assess the performance of the decision tree with \\(10\\)\-fold cross\-validation. We can use the `sample()` function to generate the folds. This function will sample \\(n\\) integers from \\(1\\) to \\(k\\) where \\(n\\) is the number of rows in the data frame.
```
# Package with implementations of decision trees.
library(rpart)
# Set seed for reproducibility.
set.seed(1234)
# Define the number of folds.
k <- 10
# Generate folds.
folds <- sample(k, size = nrow(df), replace = TRUE)
# Print the first values.
head(folds)
#> [1] 10 6 5 9 5 6
```
The `folds` variable stores the fold each instance belongs to. For example, the first instance belongs to fold \\(10\\), the second instance belongs to fold \\(6\\), and so on. We can now generate our test and train sets. We will iterate \\(k\=10\\) times. For each iteration \\(i\\), the test set is built using the instances that belong to fold \\(i\\) and the train set will be composed of the remaining instances (those that do not belong to fold \\(i\\)). Next, the `rpart()` function is used to train the decision tree with the train set. By default, `rpart()` performs \\(10\\)\-fold cross\-validation internally. To avoid this, we set the parameter `xval = 0`. Then, we can use the trained model to obtain the predictions on the test set with the generic `predict()` function. The ground truth classes and the predictions are stored so the performance metrics can be computed.
```
# Variable to store ground truth classes.
groundTruth <- NULL
# Variable to store the classifier's predictions.
predictions <- NULL
for(i in 1:k){
trainSet <- df[which(folds != i), ]
testSet <- df[which(folds == i), ]
# Train the decision tree
treeClassifier <- rpart(class ~ .,
trainSet, xval=0)
# Get predictions on the test set.
foldPredictions <- predict(treeClassifier,
testSet, type = "class")
predictions <- c(predictions,
as.character(foldPredictions))
groundTruth <- c(groundTruth,
as.character(testSet$class))
}
```
The first argument of the `rpart()` function is `class ~ .` which is a formula that instructs the method to use the *class* column as the class. The `~ .` means “use all the remaining columns as features”. Now, we can use the `confusionMatrix()` function from the `caret` package to compute the performance metrics and the confusion matrix.
```
cm <- confusionMatrix(as.factor(predictions),
as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.7895903
# Print performance metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: Downstairs 0.2821970 0.9617587 0.4434524 0.3449074
#> Class: Jogging 0.9612308 0.9601898 0.9118506 0.9358898
#> Class: Sitting 0.8366013 0.9984351 0.9696970 0.8982456
#> Class: Standing 0.8983740 0.9932328 0.8632812 0.8804781
#> Class: Upstairs 0.2246835 0.9669870 0.4733333 0.3047210
#> Class: Walking 0.9360884 0.8198981 0.7642213 0.8414687
# Print overall metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity",
"Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.6898625 0.9500836 0.7376393 0.7009518
```
FIGURE 2\.13: Confusion matrix for activities’ predictions.
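The raw counts behind Figure [2\.13](classification.html#fig:activitiesTreeCM) are stored in the `table` element of the object returned by `confusionMatrix()`, in case you want to inspect them directly:

```
# Inspect the raw counts (predictions in rows, ground truth in columns).
cm$table
```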
The overall accuracy was approximately \\(79\\%\\) and by looking at the individual performance metrics, some classes had low scores like *‘walking downstairs’* and *‘walking upstairs’*. From the confusion matrix (Figure [2\.13](classification.html#fig:activitiesTreeCM)), it can be seen that those two activities were often confused with each other but also with the *‘walking’* activity. The package `rpart.plot` ([Milborrow 2019](#ref-rpartplot)) can be used to plot the resulting tree (Figure [2\.14](classification.html#fig:activitiesTree)).
```
library(rpart.plot)
# Plot the tree from the last fold.
rpart.plot(treeClassifier, fallen.leaves = F,
shadow.col = "gray", legend.y = 1)
```
FIGURE 2\.14: Resulting decision tree.
The `fallen.leaves = F` argument prevents the leaves from being plotted at the bottom. This is useful if the tree has many nodes. Each node shows the predicted class, the predicted probability of each class, and the percentage of observations in the node. The plot also shows the feature used for each split. We can see that the *YABSOLDEV* variable is at the root, thus it produced the best first split on the initial set of instances. At the root of the tree, before looking at any of the features, the predicted class is *‘Walking’*. This is because its prior probability is the highest one (\\(\\approx 0\.39\\)), that is, it’s the most common activity present in the dataset. So, if we didn’t have any other information, our best bet would be to predict the most frequent activity.
```
# Prior probabilities.
table(trainSet$class) / nrow(trainSet)
#> Downstairs Jogging Sitting Standing Upstairs Walking
#> 0.09882885 0.29607561 0.05506472 0.04705157 0.11793713 0.38504212
```
These results look promising, but they can still be improved. In the next chapter, I will show you how to improve these results with *Ensemble Learning* which is a method that is used to aggregate many models.
2\.4 Naive Bayes
----------------
Naive Bayes is yet another type of classifier. This one is based on Bayes’ rule. It is called *naive* because it assumes that the features are independent. In the previous section we learned that decision trees are built recursively: a feature is selected to be at the root, the root is split into subnodes, and how those subnodes are chosen depends on their parent node. With Naive Bayes, on the other hand, each feature is treated independently of the others, so its parameters can be learned in isolation (even in parallel).
To demonstrate how Naive Bayes works I will use the *SMARTPHONE ACTIVITIES* dataset as in the previous section. For any given *query instance*, the aim is to **predict its most likely class** based on the accelerometer features that we have observed. Let’s say we want to know what is the probability that the query instance belongs to the class *‘Walking’*. This can be formulated as follows:
\\\[\\begin{equation\*}
P(C\=\\textit{Walking} \| f\_1,\\dots ,f\_n).
\\end{equation\*}\\]
This reads as the conditional probability that the class is *‘Walking’* **given** the observed evidence. For each instance, the evidence that we can observe are its features \\(f\_1, \\dots ,f\_n\\). In this dataset, each instance has \\(39\\) features. If we want to estimate the most likely class, all we need to do is to compute the conditional probability for each class and return the highest one:
\\\[\\begin{equation}
y \= \\operatorname\*{arg max}\_{k \\in \\{1, \\dots ,K\\}} P(C\_k \| f\_1,\\dots ,f\_n)
\\tag{2\.10}
\\end{equation}\\]
where \\(K\\) is the total number of possible classes. The \\(\\text{arg max}\\) notation means: evaluate the right hand expression for every class \\(k\\) and return the \\(k\\) that resulted in the maximum probability. If instead of *arg max* we had *max* (without the *arg*) that would mean returning the actual maximum probability instead of the class \\(k\\).
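The distinction is easy to see in R with a vector of hypothetical class scores: `which.max()` gives the *arg max* (the winning class) while `max()` gives the maximum value itself.

```
# Hypothetical (unnormalized) scores, one per class.
scores <- c(Walking = 0.51, Jogging = 0.30, Sitting = 0.19)
names(which.max(scores)) # arg max: the winning class.
#> [1] "Walking"
max(scores)              # max: the highest score itself.
#> [1] 0.51
```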
Now let’s see how we can compute \\(P(C\_k \| f\_1,\\dots ,f\_n)\\). To compute a conditional probability we can use Bayes’ rule:
\\\[\\begin{equation}
P(H\|E) \= \\frac{P(H)P(E\|H)}{P(E)}
\\tag{2\.11}
\\end{equation}\\]
Let’s dissect that formula:
1. \\(P(H\|E)\\) is called the **posterior** and it is the probability of the hypothesis \\(H\\) given the observed evidence \\(E\\). In our example, the hypothesis can be that \\(C\=Walking\\) and the evidence consists of the measured features. This is the probability that ultimately we want to estimate for each class and pick the class with the highest probability.
2. \\(P(H)\\) is called the **prior**. This is the probability of a hypothesis happening without having any evidence. In our example, this translates into the probability that an instance belongs to a particular class without looking at its features. In practice, this is estimated from the class counts in the training set. Suppose the training set consists of \\(100\\) instances and from those, \\(80\\) are of type *‘Walking’* and \\(20\\) are of type *‘Jogging’*. Then, the prior probability for *‘Walking’* is \\(P(C\=Walking)\=80/100\=0\.8\\) and the prior for *‘Jogging’* is \\(P(C\=Jogging)\=20/100\=0\.2\\).
3. \\(P(E)\\) is the probability of the evidence. Since this one doesn’t depend on the class we don’t need to compute it. This can be thought of as a normalization factor. When choosing the final class we only need to select the one with the highest score, so there is no need to normalize them into proper probabilities between \\(0\\) and \\(1\\).
4. \\(P(E\|H)\\) is called the **likelihood**. For numerical variables we can estimate this using a *Gaussian probability density function*. This sounds intimidating, but all we need to do is compute the *mean* and *standard deviation* for each feature\-class pair and plug them in the probability density function (pdf). The formula for a Gaussian (also called normal) pdf is:
\\\[\\begin{equation}
f(x) \= \\frac{1}{{\\sigma \\sqrt {2\\pi } }}e^{ \- \\left( {x \- \\mu } \\right)^2 / 2 \\sigma ^2 }
\\tag{2\.12}
\\end{equation}\\]
where \\(\\mu\\) is the mean and \\(\\sigma\\) is the standard deviation.
Suppose that for some feature \\(f\_1\\), when the class is *‘Walking’*, its mean is \\(5\\) and its standard deviation is \\(3\\). That is, we filter the train set and only select those instances with class *‘Walking’* and compute the mean and standard deviation for feature \\(f\_1\\). Figure [2\.15](classification.html#fig:pdf1) shows what its pdf looks like.
FIGURE 2\.15: Gaussian probability density function with mean 5 and standard deviation 3\.
If we have a query instance with a feature \\(f\_1 \= 1\.7\\), we can compute its likelihood given the *‘Walking’* class \\(P(f\_1\=1\.7\|C\=Walking)\\) with equation [(2\.12\)](classification.html#eq:gaussianpdf) by plugging \\(x\=1\.7\\), \\(\\mu\=5\\), and \\(\\sigma\=3\\). In R, the function `dnorm()` implements the normal pdf.
```
dnorm(x=1.7, mean = 5, sd = 3)
#> [1] 0.07261739
```
In Figure [2\.16](classification.html#fig:pdf2) the solid circle shows the likelihood when \\(x\=1\.7\\).
FIGURE 2\.16: Likelihood (0\.072\) when x\=1\.7\.
If we have more than one feature we need to compute the likelihood for each and take their **product**: \\(P(f\_1\|C\=Walking)\*P(f\_2\|C\=Walking)\*\\dots\*P(f\_n\|C\=Walking)\\). Each feature and class pair has its own \\(\\mu\\) and \\(\\sigma\\) parameters. Thus, Naive Bayes requires learning \\(K\*F\*2\\) parameters for the \\(P(E\|H)\\) part plus \\(K\\) parameters for the priors \\(P(H)\\). \\(K\\) is the number of classes, \\(F\\) is the number of features, and the \\(2\\) stands for the mean and standard deviation.
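For instance, with two features the likelihood part is just the product of two Gaussian densities, each evaluated with its own class\-specific parameters (all values below are hypothetical):

```
# P(f1=1.7|Walking) * P(f2=8.2|Walking) with hypothetical parameters.
dnorm(1.7, mean = 5, sd = 3) * dnorm(8.2, mean = 10, sd = 2)
```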
We have seen how we can compute \\(P(C\_k\|f\_1, \\dots ,f\_n)\\) using Bayes’ rule by calculating the prior \\(P(H)\\) and \\(P(E\|H)\\), which is the product of the likelihoods for each feature. If we substitute Bayes’ rule (omitting the denominator) in equation [(2\.10\)](classification.html#eq:bayesclassifier) we get our Naive Bayes classifier:
\\\[\\begin{equation}
y \= \\operatorname\*{arg max}\_{k \\in \\{1, \\dots ,K\\}} P(C\_k) \\prod\_{i\=1}^{F} P(f\_i \| C\_k)
\\tag{2\.13}
\\end{equation}\\]
In the following section we will implement our own Naive Bayes algorithm in R and test it on the *SMARTPHONE ACTIVITIES* dataset. Then, we will compare our implementation with that of the well known `e1071` package ([Meyer et al. 2019](#ref-e1071)).
Naive Bayes works well with missing values since the features are assumed to be independent. At prediction time, if an instance has one or more missing values then those features are just ignored and the posterior probability is computed based only on the available variables. Another advantage of the feature independence assumption is that feature selection algorithms run very fast with Naive Bayes. When building a predictive model, not all features may provide useful information and some features may even degrade the performance. Feature selection algorithms aim to find the best set of features and some of them need to try a huge number of feature combinations. With Naive Bayes, the parameters only need to be learned once and then different combinations of features can be evaluated by omitting the ones that are not used. With decision trees, for example, we would need to build entire new trees every time we want to try different input features.
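A minimal sketch of this idea is shown below; all parameter values are hypothetical. Any feature with a missing value is simply skipped while accumulating the (log) posterior:

```
# Skip missing features when accumulating the log posterior.
prior <- 0.38                       # Hypothetical prior, e.g. P(C=Walking).
mu <- c(5, 10, 7); s <- c(3, 2, 1)  # Hypothetical per-feature means and sds.
query <- c(1.7, NA, 6.9)            # The second feature is missing.
log.post <- log(prior)
for(j in seq_along(query)){
  if(is.na(query[j])) next          # Ignore the missing feature.
  log.post <- log.post + dnorm(query[j], mu[j], s[j], log = TRUE)
}
log.post
```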
Here, we have shown how we can use a Gaussian pdf to compute the likelihood \\(P(E\|H)\\) when the features are numeric. This assumes that the features have a normal distribution. However, this is not always the case. In practice, Naive Bayes can work really well even if that assumption is not met. Furthermore, nothing prevents us from using another distribution to estimate the likelihood or even defining a specific distribution for each feature. For categorical variables, \\(P(E\|H)\\) is estimated using the frequencies of the feature values.
### 2\.4\.1 Activity Recognition with Naive Bayes
`naive_bayes.R`
It’s time to implement Naive Bayes. To keep it simple, first we will go through a step by step example using a single feature. Then, we will implement a function to train a Naive Bayes classifier for the case of multiple features.
Let’s assume we have already split the data into train and test sets. The complete code is in the script `naive_bayes.R`. We will only use the feature *RESULTANT* which corresponds to the acceleration magnitude of the three axes of the accelerometer sensor. The following code snippet prints the first rows of the train set. The *RESULTANT* feature is in column \\(39\\) and the class is the last column (\\(40\\)).
```
head(trainset[,c(39:40)])
#> RESULTANT class
#> 1004 11.14 Walking
#> 623 1.24 Upstairs
#> 2693 9.90 Standing
#> 934 10.44 Upstairs
#> 4496 10.43 Walking
#> 2948 15.28 Jogging
```
First, we compute the prior probabilities for each class in the train set and store them in the variable `priors`. This corresponds to the \\(P(C\_k)\\) part in equation [(2\.13\)](classification.html#eq:bayesclassifier2).
```
# Compute prior probabilities.
priors <- table(trainset$class) / nrow(trainset)
# Print the table of priors.
priors
#> Downstairs Jogging Sitting Standing Upstairs
#> 0.09622990 0.30266280 0.05721065 0.04640127 0.11521223
#> Walking
#> 0.38228315
```
We can access each prior by name like this:
```
# Get the prior for "Jogging".
priors["Jogging"]
#> Jogging
#> 0.3026628
```
This means that \\(30\\%\\) of the instances in the train set are of type *‘Jogging’*. Now we need to compute the \\(P(f\_i\|C\_k)\\) part from equation [(2\.13\)](classification.html#eq:bayesclassifier2). In R, we can define a method to compute the probability density function from equation [(2\.12\)](classification.html#eq:gaussianpdf) as:
```
# Probability density function of normal distribution.
f <- function(x, m, s){
(1 / (sqrt(2*pi)*s)) * exp(-((x-m)^2) / (2 * s^2))
}
```
Its first argument `x` is the input value. The second argument `m` is the mean, and the last argument `s` is the standard deviation. For illustration purposes we are defining this function manually but remember that this pdf is already implemented with the base `dnorm()` function.
According to equation [(2\.13\)](classification.html#eq:bayesclassifier2) we need to compute \\(P(f\_i\|C\_k)\\) for each feature \\(i\\) and class \\(k\\). Let’s assume there are only two classes, *‘Standing’* and *‘Jogging’*. Thus, we need to compute the mean and standard deviation for each of them for the feature *RESULTANT* (column \\(39\\)).
```
# Compute the mean and sd of
# the feature RESULTANT (column 39)
# when the class = "Standing".
mean.standing <- mean(trainset[which(trainset$class == "Standing"), 39])
sd.standing <- sd(trainset[which(trainset$class == "Standing"), 39])
# Compute mean and sd when
# the class = "Jogging".
mean.jogging <- mean(trainset[which(trainset$class == "Jogging"), 39])
sd.jogging <- sd(trainset[which(trainset$class == "Jogging"), 39])
```
Print the means:
```
mean.standing
#> [1] 9.405795
mean.jogging
#> [1] 13.70145
```
Note that the mean value for *‘Jogging’* is higher for this feature. This was expected since this feature captures the overall movement across all axes. Now we have everything we need to start making predictions on new instances. We have the priors and we have the means and standard deviations for each feature\-class pair.
Let’s select the first instance from the test set and try to predict its class.
```
# Select a query instance from the test set.
query <- testset[1,] # Select the first one.
```
Now we compute the posterior score (the numerator of Bayes’ rule) for each class using the learned means and standard deviations:
```
# Compute P(Standing)P(RESULTANT|Standing)
priors["Standing"] * f(query$RESULTANT, mean.standing, sd.standing)
#> 0.003169748
# Compute P(Jogging)P(RESULTANT|Jogging)
priors["Jogging"] * f(query$RESULTANT, mean.jogging, sd.jogging)
#> 0.03884481
```
The posterior for *‘Jogging’* was higher (\\(0\.038\\)) so we classify the query instance as *‘Jogging’*. If we check the true class we see that it was correctly classified!
```
# Inspect the true class of the query instance.
query$class
#> [1] "Jogging"
```
In this example we assumed that there was only one feature and we computed each step manually. However, this can easily be extended to deal with more features. So let’s just do that. We can write two functions, one for training the classifier and the other for making predictions.
The following function will be used to train the classifier. It takes as input a data frame with \\(n\\) features. This function assumes that the class is the last column. The function returns a list with the learned priors, means, and standard deviations.
```
# Function to learn the parameters of
# a Naive Bayes classifier.
# Assumes that the last column of data is the class.
naive.bayes.train <- function(data){
# Unique classes.
classes <- unique(data$class)
# Number of features.
nfeatures <- ncol(data) - 1
# List to store the learned means and sds.
list.means.sds <- list()
for(c in classes){
# Matrix to store the mean and sd for each feature.
# First column stores the mean and second column
# stores the sd.
M <- matrix(0, nrow = nfeatures, ncol = 2)
# Populate matrix.
for(i in 1:nfeatures){
feature.values <- data[which(data$class == c),i]
M[i,1] <- mean(feature.values)
M[i,2] <- sd(feature.values)
}
list.means.sds[c] <- list(M)
}
# Compute prior probabilities.
priors <- table(data$class) / nrow(data)
return(list(list.means.sds=list.means.sds,
priors=priors))
}
```
The function iterates through each class and for each, it creates a matrix `M` with \\(F\\) rows and \\(2\\) columns where \\(F\\) is the number of features. The first column stores the means and the second the standard deviations. Those matrices are saved in a list indexed by the class name so at prediction time we can retrieve each matrix individually. At the end, the prior probabilities are computed. Finally, a list is returned. The first element of the list is the list of matrices and the second element contains the priors.
The next function will make predictions based on the learned parameters. Its first argument is the learned parameters and the second a data frame with the instances we want to make predictions for.
```
# Function to make predictions using
# the learned parameters.
naive.bayes.predict <- function(params, data){
# Variable to store the prediction for each instance.
predictions <- NULL
n <- nrow(data)
# Get class names.
classes <- names(params$priors)
# Get number of features.
nfeatures <- nrow(params$list.means.sds[[1]])
# Iterate instances.
for(i in 1:n){
query <- data[i,]
max.probability <- -Inf
predicted.class <- ""
# Find the class with highest probability.
for(c in classes){
# Get the prior probability for class c.
acum.prob <- params$priors[c]
# Iterate features.
for(j in 1:nfeatures){
# Compute P(feature|class)
tmp <- f(query[,j],
params$list.means.sds[[c]][j,1],
params$list.means.sds[[c]][j,2])
# Accumulate result.
acum.prob <- acum.prob * tmp
}
if(acum.prob > max.probability){
max.probability <- acum.prob
predicted.class <- c
}
}
predictions <- c(predictions, predicted.class)
}
return(predictions)
}
```
This function iterates through each instance, computes the posterior score for each class, and keeps the class that achieved the highest value as the prediction. Finally, it returns the vector with all predictions.
Now we are ready to train our Naive Bayes classifier. All we need to do is call the function `naive.bayes.train()` and pass the train set.
```
# Learn Naive Bayes parameters.
nb.model <- naive.bayes.train(trainset)
```
The learned parameters are stored in `nb.model` and we can make predictions with the `naive.bayes.predict()` function by passing the `nb.model` and a test set.
```
# Make predictions.
predictions <- naive.bayes.predict(nb.model, testset)
```
Then, we can assess the performance of the model by computing the confusion matrix.
```
# Compute confusion matrix and other performance metrics.
groundTruth <- testset$class
cm <- confusionMatrix(as.factor(predictions),
as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.7501538
# Print overall metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity",
"Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.6621381 0.9423729 0.6468372 0.6433231
```
The accuracy was \\(75\\%\\). In the previous section we obtained an accuracy of \\(79\\%\\) with decision trees. However, this does not necessarily mean that decision trees are better. Moreover, in the previous section we used cross\-validation and here we used hold\-out validation.
Computing the posterior may cause a loss of numeric precision, especially when there are many features. This is because we are multiplying the likelihoods for each feature (see equation [(2\.13\)](classification.html#eq:bayesclassifier2)) and those likelihoods are small numbers. One way to fix that is to use logarithms. In `naive.bayes.predict()` we can replace `acum.prob <- params$priors[c]` with `acum.prob <- log(params$priors[c])` and `acum.prob <- acum.prob * tmp` with `acum.prob <- acum.prob + log(tmp)`. If you try those changes you should get the same result as before.
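To see why this matters, try multiplying many small likelihoods: the product underflows to zero in double precision, while the sum of their logarithms remains perfectly representable.

```
# The product of many small likelihoods underflows to zero...
liks <- rep(1e-5, 100)
prod(liks)
#> [1] 0
# ...while the sum of their logarithms does not.
sum(log(liks))
#> [1] -1151.293
```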
There is already a popular R package (`e1071`) for training Naive Bayes classifiers. The following code trains a classifier using this package.
```
#### Use Naive Bayes implementation from package e1071 ####
library(e1071)
# We need to convert the class into a factor.
trainset$class <- as.factor(trainset$class)
nb.model2 <- naiveBayes(class ~., trainset)
predictions2 <- predict(nb.model2, testset)
cm2 <- confusionMatrix(as.factor(predictions2),
as.factor(groundTruth))
# Print accuracy
cm2$overall["Accuracy"]
#> Accuracy
#> 0.7501538
```
As you can see, the result was the same as the one obtained with our implementation! We implemented our own for illustrative purposes but it is advisable to use already tested and proven packages. Furthermore, this one also supports categorical variables.
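As a quick illustration with hypothetical toy data: for factor features, `naiveBayes()` estimates the conditional probabilities from frequency tables instead of Gaussian densities.

```
# Toy example with a categorical feature (hypothetical data).
toy <- data.frame(color = factor(c("red", "red", "blue", "blue")),
                  class = factor(c("A", "A", "B", "B")))
nb.toy <- naiveBayes(class ~ ., toy)
predict(nb.toy, data.frame(color = factor("red", levels = levels(toy$color))))
#> [1] A
#> Levels: A B
```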
### 2\.4\.1 Activity Recognition with Naive Bayes
`naive_bayes.R`
It’s time to implement Naive Bayes. To keep it simple, first we will go through a step by step example using a single feature. Then, we will implement a function to train a Naive Bayes classifier for the case of multiple features.
Let’s assume we have already split the data into train and test sets. The complete code is in the script `naive_bayes.R`. We will only use the feature *RESULTANT* which corresponds to the acceleration magnitude of the three axes of the accelerometer sensor. The following code snippet prints the first rows of the train set. The *RESULTANT* feature is in column \\(39\\) and the class is the last column (\\(40\\)).
```
head(trainset[,c(39:40)])
#> RESULTANT class
#> 1004 11.14 Walking
#> 623 1.24 Upstairs
#> 2693 9.90 Standing
#> 934 10.44 Upstairs
#> 4496 10.43 Walking
#> 2948 15.28 Jogging
```
First, we compute the prior probabilities for each class in the train set and store them in the variable `priors`. This corresponds to the \\(P(C\_k)\\) part in equation [(2\.13\)](classification.html#eq:bayesclassifier2).
```
# Compute prior probabilities.
priors <- table(trainset$class) / nrow(trainset)
# Print the table of priors.
priors
#> Downstairs Jogging Sitting Standing Upstairs
#> 0.09622990 0.30266280 0.05721065 0.04640127 0.11521223
#> Walking
#> 0.38228315
```
We can access each prior by name like this:
```
# Get the prior for "Jogging".
priors["Jogging"]
#> Jogging
#> 0.3026628
```
This means that \\(30\\%\\) of the instances in the train set are of type *‘Jogging’*. Now we need to compute the \\(P(f\_i\|C\_k)\\) part from equation [(2\.13\)](classification.html#eq:bayesclassifier2). In R, we can define a method to compute the probability density function from equation [(2\.12\)](classification.html#eq:gaussianpdf) as:
```
# Probability density function of normal distribution.
f <- function(x, m, s){
(1 / (sqrt(2*pi)*s)) * exp(-((x-m)^2) / (2 * s^2))
}
```
It’s first argument `x` is the input value. The second argument `m` is the mean, and the last argument `s` is the standard deviation. For illustration purposes we are defining this function manually but remember that this pdf is already implemented with the base `dnorm()` function.
According to equation [(2\.13\)](classification.html#eq:bayesclassifier2) we need to compute \\(P(f\_i\|C\_k)\\) for each feature \\(i\\) and class \\(k\\). Let’s assume there are only two classes, *‘Walking’* and *‘Jogging’*. Thus, we need to compute the mean and standard deviation for each, and for the feature *RESULTANT* (column \\(39\\)).
```
# Compute the mean and sd of
# the feature RESULTANT (column 39)
# when the class = "Standing".
mean.standing <- mean(trainset[which(trainset$class == "Standing"), 39])
sd.standing <- sd(trainset[which(trainset$class == "Standing"), 39])
# Compute mean and sd when
# the class = "Jogging".
mean.jogging <- mean(trainset[which(trainset$class == "Jogging"), 39])
sd.jogging <- sd(trainset[which(trainset$class == "Jogging"), 39])
```
Print the means:
```
mean.standing
#> [1] 9.405795
mean.jogging
#> [1] 13.70145
```
Note that the mean value for *‘Jogging’* is higher for this feature. This was expected since this feature captures the overall movement across all axes. Now we have everything we need to start making predictions on new instances. We have the priors and we have the means and standard deviations for each feature\-class pair.
Let’s select the first instance from the test set and try to predict its class.
```
# Select a query instance from the test set.
query <- testset[1,] # Select the first one.
```
Now we compute the posterior probability for each class using the learned means and standard deviations:
```
# Compute P(Standing)P(RESULTANT|Standing)
priors["Standing"] * f(query$RESULTANT, mean.standing, sd.standing)
#> 0.003169748
# Compute P(Jogging)P(RESULTANT|Jogging)
priors["Jogging"] * f(query$RESULTANT, mean.jogging, sd.jogging)
#> 0.03884481
```
The posterior for *‘Jogging’* was higher (\\(0\.038\\)) so we classify the query instance as *‘Jogging’*. If we check the true class we see that it was correctly classified!
```
# Inspect the true class of the query instance.
query$class
#> [1] "Jogging"
```
In this example we assumed that there was only one feature and we computed each step manually. However, this can easily be extended to deal with more features. So let’s just do that. We can write two functions, one for training the classifier and the other for making predictions.
The following function will be used to train the classifier. It takes as input a data frame with \\(n\\) features. This function assumes that the class is the last column. The function returns a list with the learned priors, means, and standard deviations.
```
# Function to learn the parameters of
# a Naive Bayes classifier.
# Assumes that the last column of data is the class.
naive.bayes.train <- function(data){
# Unique classes.
classes <- unique(data$class)
# Number of features.
nfeatures <- ncol(data) - 1
# List to store the learned means and sds.
list.means.sds <- list()
for(c in classes){
# Matrix to store the mean and sd for each feature.
# First column stores the mean and second column
# stores the sd.
M <- matrix(0, nrow = nfeatures, ncol = 2)
# Populate matrix.
for(i in 1:nfeatures){
feature.values <- data[which(data$class == c),i]
M[i,1] <- mean(feature.values)
M[i,2] <- sd(feature.values)
}
list.means.sds[c] <- list(M)
}
# Compute prior probabilities.
priors <- table(data$class) / nrow(data)
return(list(list.means.sds=list.means.sds,
priors=priors))
}
```
The function iterates through each class and for each, it creates a matrix `M` with \\(F\\) rows and \\(2\\) columns where \\(F\\) is the number of features. The first column stores the means and the second the standard deviations. Those matrices are saved in a list indexed by the class name so at prediction time we can retrieve each matrix individually. At the end, the prior probabilities are computed. Finally, a list is returned. The first element of the list is the list of matrices and the second element are the priors.
The next function will make predictions based on the learned parameters. Its first argument is the learned parameters and the second a data frame with the instances we want to make predictions for.
```
# Function to make predictions using
# the learned parameters.
naive.bayes.predict <- function(params, data){
# Variable to store the prediction for each instance.
predictions <- NULL
n <- nrow(data)
# Get class names.
classes <- names(params$priors)
# Get number of features.
nfeatures <- nrow(params$list.means.sds[[1]])
# Iterate instances.
for(i in 1:n){
query <- data[i,]
max.probability <- -Inf
predicted.class <- ""
# Find the class with highest probability.
for(c in classes){
# Get the prior probability for class c.
acum.prob <- params$priors[c]
# Iterate features.
for(j in 1:nfeatures){
# Compute P(feature|class)
tmp <- f(query[,j],
params$list.means.sds[[c]][j,1],
params$list.means.sds[[c]][j,2])
# Accumulate result.
acum.prob <- acum.prob * tmp
}
if(acum.prob > max.probability){
max.probability <- acum.prob
predicted.class <- c
}
}
predictions <- c(predictions, predicted.class)
}
return(predictions)
}
```
This function iterates through each instance and computes the posterior for each class and stores the one that achieved the highest value as the prediction. Finally, it returns the list with all predictions.
Now we are ready to train our Naive Bayes classifier. All we need to do is call the function `naive.bayes.train()` and pass the train set.
```
# Learn Naive Bayes parameters.
nb.model <- naive.bayes.train(trainset)
```
The learned parameters are stored in `nb.model` and we can make predictions with the `naive.bayes.predict()` function by passing the `nb.model` and a test set.
```
# Make predictions.
predictions <- naive.bayes.predict(nb.model, testset)
```
Then, we can assess the performance of the model by computing the confusion matrix.
```
# Compute confusion matrix and other performance metrics.
groundTruth <- testset$class
cm <- confusionMatrix(as.factor(predictions),
as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.7501538
# Print overall metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity",
"Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.6621381 0.9423729 0.6468372 0.6433231
```
The accuracy was \\(75\\%\\). In the previous section we obtained an accuracy of \\(78\\%\\) with decision trees. However, this does not necessarily mean that decision trees are better; the comparison is not direct because the previous section used cross\-validation whereas here we used hold\-out validation.
Computing the posterior may cause a loss of numeric precision, especially when there are many features. This is because we are multiplying the likelihoods of each feature (see equation [(2\.13\)](classification.html#eq:bayesclassifier2)) and those likelihoods are small numbers. One way to fix this is to use logarithms. In `naive.bayes.predict()` we can replace `acum.prob <- params$priors[c]` with `acum.prob <- log(params$priors[c])` and `acum.prob <- acum.prob * tmp` with `acum.prob <- acum.prob + log(tmp)`. If you try those changes you should get the same result as before.
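As a quick sketch, the inner loop in log space would look like this (only the two accumulation lines change with respect to `naive.bayes.predict()`):

```
# Log-space version of the accumulation (sketch).
acum.prob <- log(params$priors[c])
for(j in 1:nfeatures){
  tmp <- f(query[,j],
           params$list.means.sds[[c]][j,1],
           params$list.means.sds[[c]][j,2])
  # Sum log-likelihoods instead of multiplying likelihoods.
  acum.prob <- acum.prob + log(tmp)
}
```

Since the logarithm is monotonic, comparing log\-posteriors selects the same class as comparing the posteriors directly.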
There is already a popular R package (`e1071`) for training Naive Bayes classifiers. The following code trains a classifier using this package.
```
#### Use Naive Bayes implementation from package e1071 ####
library(e1071)
# We need to convert the class into a factor.
trainset$class <- as.factor(trainset$class)
nb.model2 <- naiveBayes(class ~., trainset)
predictions2 <- predict(nb.model2, testset)
cm2 <- confusionMatrix(as.factor(predictions2),
as.factor(groundTruth))
# Print accuracy
cm2$overall["Accuracy"]
#> Accuracy
#> 0.7501538
```
As you can see, the result was the same as the one obtained with our implementation! We implemented our own version for illustrative purposes, but it is advisable to use already tested and proven packages. Furthermore, the `e1071` implementation also supports categorical variables.
2\.5 Dynamic Time Warping
-------------------------
`dtw_example.R`
In the previous activity recognition example, we used the extracted features represented as feature vectors to train the classifiers instead of using the raw data. In some situations this can lead to a loss of temporal information, that is, the relationships between data points over time. In the previous example, we could classify the activities with reasonable accuracy since the extracted features were able to retain enough information from the raw data. However, in some cases, having temporal information is crucial. For example, in hand signature recognition, a query signature is checked for a match with one of the signatures in a database. The signatures need to have an almost exact match to authenticate a user. If we represent each signature as a feature vector, it can turn out that two signatures have very similar feature vectors even though they look completely different. For example, Figure [2\.17](classification.html#fig:correlations) shows four datasets. They look very different but they all have the same correlation of \\(0\.816\\)[6](#fn6).
FIGURE 2\.17: Four datasets with the same correlation of 0\.816\. (Anscombe, Francis J., 1973, Graphs in statistical analysis. American Statistician, 27, 17–21\. Source: Wikipedia, User:Schutz (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
To avoid this potential issue, we can also include time\-dependent information into our models by keeping the order of the data points. Another issue is that two time series that belong to the same class will still have some differences. Every time the same person signs a document, the signature will vary a bit. In the same way, when we pronounce a word, sometimes we emphasize some letters or speak at different speeds. Figure [2\.18](classification.html#fig:verygood) shows two versions of the sentence “very good”. In the second one (bottom), the speaker emphasizes the “e” and, as a result, the two sentences are no longer aligned in time even though they have the same meaning.
FIGURE 2\.18: Time shift example between two sentences.
To compare two sequences we could use the well\-known Euclidean distance. However, since the two sequences may not be aligned in time, the result could be misleading. Furthermore, the Euclidean distance cannot be computed directly when the two sequences differ in length. To account for this “time\-shift” effect in timeseries data, *Dynamic Time Warping* (DTW) ([Sakoe et al. 1990](#ref-sakoe1990dynamic)) can be used instead. DTW is a method that:
* Finds an optimal match between two time\-dependent sequences.
* Computes their dissimilarity.
* Finds the optimal deformation (mapping) of one of the sequences onto the other.
Another advantage of DTW is that the timeseries do not need to be of the same length. Suppose we have two timeseries: a *query* and a *reference* against which we want to compare it:
\\\[\\begin{align\*}
query\&\=(2,2,2,4,4,3\)\\\\
ref\&\=(2,2,3,3,2\)
\\end{align\*}\\]
The first thing to note is that the sequences differ in length. Figure [2\.19](classification.html#fig:queryref) shows their plot. The *query* is the solid line and seems to be shifted one position to the right with respect to the *reference*. The plot also shows the resulting alignment after applying the DTW algorithm (dashed lines between the sequences). The resulting distance (after aligning) between the sequences is \\(3\\). In the following, we will see how the problem can be formalized and how it can be computed. Don’t worry if you find the math notation a bit difficult to grasp at this point. A step\-by\-step example will follow which should help to explain how the method works.
FIGURE 2\.19: DTW alignment between the query and reference sequences (solid line is the query).
The problem of aligning two sequences can be formalized as follows ([Rabiner and Juang 1993](#ref-Rabiner1993)). Let \\(X\\) and \\(Y\\) be two sequences:
\\\[\\begin{align\*}
X\&\=(x\_1,x\_2,\\dots,x\_{T\_x}) \\\\
Y\&\=(y\_1,y\_2,\\dots,y\_{T\_y})
\\end{align\*}\\]
where \\(x\_i\\) and \\(y\_i\\) are vectors. In the previous example, the vectors only have one element since the sequences are \\(1\\)\-dimensional, but DTW also works with multidimensional sequences. \\(T\_x\\) and \\(T\_y\\) are the sequences’ lengths. Let
\\\[\\begin{align\*}
d(i\_x,i\_y)
\\end{align\*}\\]
be the *dissimilarity* (distance) between vectors \\(x\_{i\_x}\\) and \\(y\_{i\_y}\\) (e.g., the Euclidean distance). Then, \\(\\phi\_x\\) and \\(\\phi\_y\\) are the warping functions that relate \\(i\_x\\) and \\(i\_y\\) to a common axis \\(k\\):
\\\[\\begin{align\*}
i\_x\&\=\\phi\_x (k), k\=1,2,\\dots,T \\\\
i\_y\&\=\\phi\_y (k), k\=1,2,\\dots,T.
\\end{align\*}\\]
The total dissimilarity between the two sequences is:
\\\[\\begin{equation}
d\_\\phi (X,Y) \= \\sum\_{k\=1}^T{d\\left(\\phi\_x (k), \\phi\_y (k)\\right)}
\\tag{2\.14}
\\end{equation}\\]
The aim is to find the warping function \\(\\phi\\) that minimizes the total dissimilarity:
\\\[\\begin{equation}
\\operatorname\*{min}\_{\\phi} d\_\\phi (X,Y)
\\tag{2\.15}
\\end{equation}\\]
The solution can be efficiently computed using dynamic programming. Usually, when solving this minimization problem, some constraints are applied:
* **Endpoint constraints.** This constraint makes sure that the first and last elements of each sequence are connected (mapped to each other).
\\\[\\begin{align\*}
\\phi\_x (1\)\&\=1, \\phi\_y (1\)\=1 \\\\
\\phi\_x (T)\&\=T\_x, \\phi\_y (T)\=T\_y
\\end{align\*}\\]
* **Monotonicity.** This constraint allows ‘time to flow’ only from left to right. That is, we cannot go back in time.
\\\[\\begin{align\*}
\\phi\_x (k\+1\) \\geq \\phi\_x(k) \\\\
\\phi\_y (k\+1\) \\geq \\phi\_y(k)
\\end{align\*}\\]
* **Local constraints.** For example, allow jumps of at most \\(1\\) step.
\\\[\\begin{align\*}
\\phi\_x (k\+1\) \- \\phi\_x(k) \\leq 1 \\\\
\\phi\_y (k\+1\) \- \\phi\_y(k) \\leq 1
\\end{align\*}\\]
It is also possible to apply global constraints, other local constraints, and different weights to the slopes, but the three described above are the most common ones. For a comprehensive list of constraints, please see ([Rabiner and Juang 1993](#ref-Rabiner1993)). Now let’s get back to our example and go through the steps to compute the dissimilarity and warping functions between our query (\\(Q\\)) and reference (\\(R\\)) sequences:
\\\[\\begin{align\*}
Q\&\=(2,2,2,4,4,3\) \\\\
R\&\=(2,2,3,3,2\)
\\end{align\*}\\]
The first step is to compute a *local cost matrix*. This is just a matrix that contains the distance between every pair of points between the two sequences. For this example, we will use the *Manhattan distance*. Since our sequences are \\(1\\)\-dimensional this distance can be computed as the absolute difference \\(\|x\_i \- y\_i\|\\). Figure [2\.20](classification.html#fig:localCost) shows the resulting local cost matrix.
FIGURE 2\.20: Local cost matrix between Q and R.
For example, position \\((1,1\)\=0\\) (*row*,*column*) because the first element of \\(Q\\) is \\(2\\) and the first element of \\(R\\) is also \\(2\\), thus \\(\|2\-2\|\=0\\). The rest of the matrix is filled in the same way. In dynamic programming, partial results are computed and stored in a table. Figure [2\.21](classification.html#fig:dynamicTable) shows the final dynamic programming table computed from the local cost matrix. Initially, this table is empty. We start to fill it from the bottom left at position \\((1,1\)\\). From the local cost matrix, the cost at position \\((1,1\)\\) is \\(0\\), so the cost at that position in the dynamic programming table is \\(0\\). Then we can start filling in the contiguous cells. The only direction from which we can arrive at position \\((1,2\)\\) is from the west (W). The local cost at position \\((1,2\)\\) is \\(0\\) and the minimum cost of the cell to the west, \\((1,1\)\\), is also \\(0\\), so \\(W:0\+0\=0\\). For each cell, we add its local cost to the minimum cost among the contiguous cells from which it can be reached. The minimum costs are marked in red. Some cells can be reached from three different directions: S, W, and SW, so we need to compute the cost when coming from each of them. The final minimum cost at position \\((5,6\)\\) is \\(3\\); thus, that is the global DTW distance. In this example, the minimum at \\((5,6\)\\) can be reached from either the south or the southwest.
FIGURE 2\.21: Dynamic programming table.
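To make the table\-filling procedure concrete, here is a minimal sketch (independent of the `dtw` package) that computes the local cost matrix and fills the dynamic programming table for this example, using the constraints described above:

```
# Sketch: compute the DTW distance with dynamic programming.
Q <- c(2,2,2,4,4,3)
R <- c(2,2,3,3,2)
# Local cost matrix (Manhattan distance between every pair of points).
lcm <- outer(Q, R, function(a, b) abs(a - b))
# Dynamic programming table.
D <- matrix(Inf, nrow = length(Q), ncol = length(R))
D[1,1] <- lcm[1,1]
for(i in 1:length(Q)){
  for(j in 1:length(R)){
    if(i == 1 && j == 1) next
    # Minimum accumulated cost among the contiguous cells
    # (south, west, and southwest in the figure).
    prev <- c(if(i > 1) D[i-1,j] else Inf,
              if(j > 1) D[i,j-1] else Inf,
              if(i > 1 && j > 1) D[i-1,j-1] else Inf)
    D[i,j] <- lcm[i,j] + min(prev)
  }
}
D[length(Q), length(R)] # Global DTW distance: 3.
```

The value in the last cell matches the DTW distance of \\(3\\) reported above (the orientation of this table may differ from that of Figure [2\.21](classification.html#fig:dynamicTable)).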
Once the table is filled in, we can backtrack starting at \\((5,6\)\\) to find the warping functions. Figure [2\.22](classification.html#fig:warpingResult) shows the final warping functions. Because of the endpoint constraints, we know that \\(\\phi\_Q(1\)\=1, \\phi\_R(1\)\=1\\), \\(\\phi\_Q(6\)\=6\\), and \\(\\phi\_R(6\)\=5\\). Then, from \\((5,6\)\\) the minimum contiguous value is \\(2\\) coming from the SW, thus \\(\\phi\_Q(5\)\=5, \\phi\_R(5\)\=4\\), and so on. Note that we could also have chosen to arrive from the south with the same minimum value of \\(2\\), but this would still have resulted in the same overall distance. The dashed line in Figure [2\.21](classification.html#fig:dynamicTable) shows the full backtracking.
FIGURE 2\.22: Resulting warping functions.
The runtime complexity of DTW is \\(O(T\_x T\_y)\\). This is the required time to compute the local cost matrix and the dynamic programming table.
In R, the `dtw` package ([Giorgino 2009](#ref-giorgino2009)) has the function `dtw()` to compute the DTW distance between two sequences. Let’s use this package to solve the previous example.
```
library("dtw")
# Sequences from the example
query <- c(2,2,2,4,4,3)
ref <- c(2,2,3,3,2)
# Find dtw distance.
alignment <- dtw(query, ref,
step = symmetric1, keep.internals = T)
```
The `keep.internals = T` argument keeps the input data so it can be accessed later, e.g., for plotting. The cost matrix and final distance can be accessed from the resulting object. The `step` argument specifies a step pattern, which describes some of the algorithm’s constraints such as the endpoint and local constraints. In this case, we use `symmetric1`, which applies the constraints explained before. We can access the cost matrix, the final distance, and the warping functions \\(\\phi\_x\\) and \\(\\phi\_y\\) as follows:
```
alignment$localCostMatrix
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 0 0 1 1 0
#> [2,] 0 0 1 1 0
#> [3,] 0 0 1 1 0
#> [4,] 2 2 1 1 2
#> [5,] 2 2 1 1 2
#> [6,] 1 1 0 0 1
alignment$distance
#> [1] 3
alignment$index1
#> [1] 1 2 3 4 5 6
alignment$index2
#> [1] 1 1 2 3 4 5
```
The local cost matrix is the same one as in Figure [2\.20](classification.html#fig:localCost) but in rotated form. The resulting object also has the dynamic programming table which can be plotted along with the resulting backtracking (see Figure [2\.23](classification.html#fig:backtracking)).
```
ccm <- alignment$costMatrix
image(x = 1:nrow(ccm), y = 1:ncol(ccm),
ccm, xlab = "Q", ylab = "R")
text(row(ccm), col(ccm), label = ccm)
lines(alignment$index1, alignment$index2)
```
FIGURE 2\.23: Dynamic programming table and backtracking.
And finally, the aligned sequences can be plotted. The previous Figure [2\.19](classification.html#fig:queryref) shows the result of the following command.
```
plot(alignment, type="two", off=1.5,
match.lty=2,
match.indices=10,
main="DTW resulting alignment",
xlab="time", ylab="magnitude")
```
### 2\.5\.1 Hand Gesture Recognition
`hand_gestures.R`, `hand_gestures_auxiliary.R`
Gestures are a form of communication. They are often accompanied with speech but can also be used to communicate something independently of speech (like in sign language). Gestures allow us to externalize and emphasize emotions and thoughts. They are based on body movements from arms, hands, fingers, face, head, etc. Gestures can be used as a non\-verbal way to identify and study behaviors for different purposes such as for emotion ([De Gelder 2006](#ref-de2006towards)) or for the identification of developmental disorders like autism ([Anzulewicz, Sobota, and Delafield\-Butt 2016](#ref-anzulewicz2016toward)).
Gestures can also be used to develop user\-computer interaction applications. The following video shows an example application of gesture recognition for domotics.
The application determines the indoor location using \\(k\\)\-NN as shown earlier in this chapter. The gestures are classified using DTW (I’ll show how to do it in a moment). Based on the location and type of gesture, a specific home appliance is activated. I programmed that app some time ago using the same algorithms presented here.
To demonstrate how DTW can be used for hand gesture recognition, we will examine the *HAND GESTURES* dataset that was collected with a smartphone using its accelerometer sensor. The data was collected by \\(10\\) individuals who performed \\(5\\) repetitions of \\(10\\) different gestures (*‘triangle’*, *‘square’*, *‘circle’*, *‘a’*, *‘b’*, *‘c’*, *‘1’*, *‘2’*, *‘3’*, *‘4’*). The sensor is a tri\-axial accelerometer that returns values for the \\(x\\), \\(y\\), and \\(z\\) axes. The participants were not instructed to hold the smartphone in any particular way. The sampling rate was set at \\(50\\) Hz. To record a gesture, the user presses the phone’s screen with her/his thumb, performs the gesture in the air, and stops pressing the screen after the gesture is complete. Figure [2\.24](classification.html#fig:gesturesFigure) shows the start and end positions of the \\(10\\) gestures.
FIGURE 2\.24: Paths for the 10 considered gestures.
In order to make the recognition orientation\-independent, we can compute the *magnitude* of the \\(3\\) accelerometer axes. This will provide us with the overall movement patterns regardless of orientation.
\\\[\\begin{equation}
Magnitude(t) \= \\sqrt {{a\_x}{{(t)}^2} \+ {a\_y}{{(t)}^2} \+ {a\_z}{{(t)}^2}}
\\tag{2\.16}
\\end{equation}\\]
where \\({a\_x}{{(t)}}\\), \\({a\_y}{{(t)}}\\), and \\({a\_z}{{(t)}}\\) are the accelerations at time \\(t\\).
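As a minimal sketch, assuming `ax`, `ay`, and `az` are numeric vectors holding the raw accelerometer readings, the magnitude can be computed in R as follows:

```
# ax, ay, az: raw accelerometer values (assumed to be
# numeric vectors of equal length).
magnitude <- sqrt(ax^2 + ay^2 + az^2)
```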
Figure [2\.25](classification.html#fig:handGestureMagnitude) shows the raw accelerometer values (dashed lines) for a *triangle* gesture. The solid line shows the resulting magnitude. This will also simplify things since we will now work with \\(1\\)\-dimensional sequences (the magnitudes) instead of the other \\(3\\) axes.
FIGURE 2\.25: Triangle gesture.
The gestures are stored in text files that contain the \\(x\\), \\(y\\), and \\(z\\) recordings. The script `hand_gestures_auxiliary.R` has some auxiliary functions to preprocess the data. Since the sequences of each gesture are of varying length, storing them as a data frame could be problematic because every row in a data frame must have the same number of columns. Instead, the `gen.instances()` function processes the files and returns all hand gestures as a list. This function also computes the magnitude (equation [(2\.16\)](classification.html#eq:magnitude)). The following code (from `hand_gestures.R`) calls the `gen.instances()` function and stores the result in the `instances` variable, which is a list. Then, we select the first and second instances to be the query and the reference.
```
# Format instances from files.
instances <- gen.instances("../data/hand_gestures/")
# Use first instance as the query.
query <- instances[[1]]
# Use second instance as the reference.
ref <- instances[[2]]
```
Each element in `instances` is also a list that stores the *type* and *values* (magnitude) of each gesture.
```
# Print their respective classes
print(query$type)
#> [1] "1"
print(ref$type)
#> [1] "1"
```
Here, the first two instances are of type *‘1’*. We can also print the magnitude values.
```
# Print values.
print(query$values)
#> [1] 9.167477 9.291464 9.729926 9.901090 ....
```
In this case, both classes are “1”. We can use the `dtw()` function to compute the similarity between the *query* and the *reference* instance and plot the resulting alignment (Figure [2\.26](classification.html#fig:alignmentExample)).
```
alignment <- dtw(query$values, ref$values, keep = TRUE)
# Print similarity (distance)
alignment$distance
#> [1] 68.56493
# Plot result.
plot(alignment, type="two", off=1, match.lty=2, match.indices=40,
main="DTW resulting alignment",
xlab="time", ylab="magnitude")
```
FIGURE 2\.26: Resulting alignment.
To perform the actual classification, we will use our well\-known \\(k\\)\-NN classifier with \\(k\=1\\). To classify a *query instance*, we need to compute its DTW distance to every other instance in the training set and predict the label from the closest one. We will test the performance using \\(10\\)\-fold cross\-validation. Since computing all DTW distances takes some time, we can precompute all pairs of distances and store them in a matrix. The auxiliary function `matrix.distances()` does the job. Since this can take some minutes, the results are saved so there is no need to wait the next time the code is run.
```
D <- matrix.distances(instances)
# Save results.
save(D, file="D.RData")
```
The `matrix.distances()` function returns a list. The first element is an array with the gestures’ classes and the second element is the actual distance matrix. The elements in the diagonal are set to `Inf` to signal that we don’t want to take into account the dissimilarity between a gesture and itself.
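A minimal sketch of how such a matrix could be computed is shown below (the actual implementation is in `hand_gestures_auxiliary.R` and may differ, e.g., in the step pattern used):

```
# Sketch: pairwise DTW distances between all gesture instances.
matrix.distances.sketch <- function(instances){
  n <- length(instances)
  # Classes of each instance.
  classes <- sapply(instances, function(inst) inst$type)
  # Distance matrix with Inf in the diagonal.
  M <- matrix(Inf, nrow = n, ncol = n)
  for(i in 1:(n-1)){
    for(j in (i+1):n){
      d <- dtw(instances[[i]]$values,
               instances[[j]]$values,
               distance.only = TRUE)$distance
      M[i,j] <- d
      M[j,i] <- d
    }
  }
  return(list(classes, M))
}
```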
For convenience, this matrix is already stored in the file `D.RData` located in this chapter’s code directory. The following code performs the \\(10\\)\-fold cross\-validation and computes the performance results.
```
# Load the DTW distances matrix.
load("D.RData")
set.seed(1234)
k <- 10 # Number of folds.
folds <- sample(k, size = length(D[[1]]), replace = T)
predictions <- NULL
groundTruth <- NULL
# Implement k-NN with k=1.
for(i in 1:k){
trainSet <- which(folds != i)
testSet <- which(folds == i)
train.labels <- D[[1]][trainSet]
for(query in testSet){
type <- D[[1]][query]
distances <- D[[2]][query, ][trainSet]
# Return the closest one.
nn <- sort(distances, index.return = T)$ix[1]
pred <- train.labels[nn]
predictions <- c(predictions, pred)
groundTruth <- c(groundTruth, type)
}
} # end of for
```
The line `distances <- D[[2]][query, ][trainSet]` retrieves the pre\-computed distances between the test *query* and all gestures in the train set. Then, those distances are sorted in ascending order and the class of the closest one is used as the prediction. Finally, the performance is calculated.
```
cm <- confusionMatrix(factor(predictions),
factor(groundTruth))
# Compute performance metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: 1 0.84 0.9911111 0.9130435 0.8750000
#> Class: 2 0.84 0.9866667 0.8750000 0.8571429
#> Class: 3 0.96 0.9911111 0.9230769 0.9411765
#> Class: 4 0.98 0.9933333 0.9423077 0.9607843
#> Class: a 0.78 0.9733333 0.7647059 0.7722772
#> Class: b 0.76 0.9955556 0.9500000 0.8444444
#> Class: c 0.90 1.0000000 1.0000000 0.9473684
#> Class: circleLeft 0.78 0.9622222 0.6964286 0.7358491
#> Class: square 1.00 0.9977778 0.9803922 0.9900990
#> Class: triangle 0.92 0.9711111 0.7796610 0.8440367
# Overall performance metrics
colMeans(cm$byClass[,c("Recall", "Specificity",
"Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.8760000 0.9862222 0.8824616 0.8768178
```
FIGURE 2\.27: Confusion matrix for hand gestures’ predictions.
The overall recall was \\(0\.87\\), which is not bad. From the confusion matrix (Figure [2\.27](classification.html#fig:gesturesCM)), we can see that the class *‘a’* was often confused with *‘circleLeft’* and vice versa. This makes sense since both have similar motions (see Figure [2\.24](classification.html#fig:gesturesFigure)). Also, *‘b’* was often confused with *‘circleLeft’*. The *‘square’* class was always correctly classified. This example demonstrated how DTW can be used with \\(k\\)\-NN to recognize hand gestures.
2\.6 Dummy Models
-----------------
`dummy_classifiers.R`
When faced with a new problem, you may be tempted to start by trying a complex model. You proceed to train your complex model and evaluate it. The results look reasonably good, so you think you are done. However, this good performance could be just an *illusion*. Sometimes there are underlying problems with the data that can give the false impression that a model is performing well. Examples of such problems are imbalanced datasets, no correlation between the features and the classes, features not containing enough information, and so on. **Dummy models** can be used to spot some of those problems. Dummy models use little or no information at all when making predictions (we’ll see how in a moment).
Furthermore, for some problems (especially in regression) it is not clear what is considered to be a good performance. There are problems in which doing slightly better than random is considered a great achievement (e.g., in forecasting) but for other problems that would be unacceptable. Thus, we need some type of baseline to assess whether or not a particular model is bringing some benefit. Dummy models are not only used to spot problems but can serve as baselines as well.
Dummy models are also called *baseline models* or *dumb models*. One student I was supervising used to call them *stupid models*. When I am angry, I also call them that, but today I’m in a good mood so I’ll refer to them as *dummy*.
Now, I will present three types of dummy classifiers and how they can be implemented in R.
### 2\.6\.1 Most\-frequent\-class Classifier
As the name implies, the most\-frequent\-class classifier always predicts the most frequent label found in the train set. This means that the model does not even need to look at the features! Once it is presented with a new instance, it just outputs the most common class as the prediction.
To show how it can be implemented, I will use the *SMARTPHONES ACTIVITIES* dataset. For demonstration purposes, I will only keep two classes: *‘Walking’* and *‘Upstairs’*. Furthermore, I will only pick a small percentage of the instances with class *‘Upstairs’* to simulate an imbalanced dataset. Imbalanced means that there are classes for which only a few instances exist. More about imbalanced data and how to handle it will be covered in chapter [5](preprocessing.html#preprocessing). After those modifications, we can check the class counts:
```
# Print class counts.
table(dataset$class)
#> Upstairs Walking
#> 200 2081
# In percentages.
table(dataset$class) / nrow(dataset)
#> Upstairs Walking
#> 0.08768084 0.91231916
```
We can see that more than \\(90\\%\\) of the instances belong to class *‘Walking’*. It’s time to define the dummy classifier!
```
# Define the dummy classifier's train function.
most.frequent.class.train <- function(data){
# Get a table with the class counts.
counts <- table(data$class)
# Select the label with the most counts.
most.frequent <- names(which.max(counts))
return(most.frequent)
}
```
The `most.frequent.class.train()` function will learn the parameters from a train set. The only thing this model needs to learn is the most frequent class. First, the `table()` function is used to get the class counts and then the name of the class with the highest count is returned. Now we define the predict function, which takes as its first argument the learned parameters and as its second argument the test set on which we want to make predictions. Here, the learned parameter consists only of the name of a class.
```
# Define the dummy classifier's predict function.
most.frequent.class.predict <- function(params, data){
# Return the same label for as many rows as there are in data.
return(rep(params, nrow(data)))
}
```
The only thing the predict function does is return the `params` argument (the class name) repeated \\(n\\) times, where \\(n\\) is the number of rows in the test data frame.
Let’s try our functions. The dataset has already been split into \\(50\\%\\) for training and \\(50\\%\\) for testing. First we train the dummy model using the train set. Then, the learned parameter is printed.
```
# Learn the parameters.
dummy.model1 <- most.frequent.class.train(trainset)
# Print the learned parameter.
dummy.model1
#> [1] "Walking"
```
Now we can make predictions on the test set and compute the accuracy.
```
# Make predictions.
predictions <- most.frequent.class.predict(dummy.model1, testset)
# Compute confusion matrix and other performance metrics.
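# 'levels' is assumed to be defined earlier in the script,
# e.g., levels <- c("Walking", "Upstairs").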
cm <- confusionMatrix(factor(predictions, levels),
factor(testset$class, levels))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.9087719
```
The accuracy was \\(90\.8\\%\\). It seems that the dummy classifier was not that dummy after all! Let’s print the confusion matrix to inspect the predictions.
```
# Print confusion matrix.
cm$table
#> Reference
#> Prediction Walking Upstairs
#> Walking 1036 104
#> Upstairs 0 0
```
From the confusion matrix we can see that all *‘Walking’* activities were correctly classified but none of the *‘Upstairs’* instances were identified. This is because the dummy model only predicts *‘Walking’*. Here we can see that even though it seemed like the dummy model was doing pretty well, it was not that good after all.
We can now try with a decision tree from the `rpart` package.
```
### Let's try with a decision tree
treeClassifier <- rpart(class ~ ., trainset)
tree.predictions <- predict(treeClassifier, testset, type = "class")
cm.tree <- confusionMatrix(factor(tree.predictions, levels),
factor(testset$class, levels))
# Print accuracy
cm.tree$overall["Accuracy"]
#> Accuracy
#> 0.9263158
```
Decision trees are more powerful than dummy classifiers but the accuracy was very similar!
It is a good practice to compare powerful models against dummy models. If their performances are similar, this may be an indication that there is something that needs to be checked. In this example, the problem was that the dataset was imbalanced. It is also advisable to report not only the accuracy but other metrics. We could also have noticed the imbalance problem by looking at the recall of the individual classes, for example.
### 2\.6\.2 Uniform Classifier
This is another type of dummy classifier. This one predicts classes at random with equal probability and can be implemented as follows.
```
# Define the dummy classifier's train function.
uniform.train <- function(data){
# Get the unique classes.
unique.classes <- unique(data$class)
return(unique.classes)
}
# Define the dummy classifier's predict function.
uniform.predict <- function(params, data){
# Sample classes uniformly.
return(sample(params, size = nrow(data), replace = T))
}
```
At prediction time, it just picks a random label for each instance in the test set. This model achieved an accuracy of only \\(49\.0\\%\\) on the same dataset, but it correctly identified more instances of type *‘Upstairs’*.
```
#> Reference
#> Prediction Walking Upstairs
#> Walking 506 54
#> Upstairs 530 50
```
If a dataset is balanced and the accuracy of the uniform classifier is similar to the more complex model, the problem may be that the features are not providing enough information. That is, the complex classifier was not able to extract any useful patterns from the features.
### 2\.6\.3 Frequency\-based Classifier
This one is similar to the uniform classifier, but the probability of choosing a class is proportional to its frequency in the train set. Its implementation is similar to that of the uniform classifier but makes use of the `prob` parameter of the `sample()` function to specify weights for each class. The higher the weight of a class, the more likely it is to be chosen at prediction time. The full implementation is in the script `dummy_classifiers.R`; a sketch of the idea is shown below.
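The following is a minimal sketch (the function names are illustrative and may differ from those in `dummy_classifiers.R`):

```
# Sketch of a frequency-based dummy classifier.
frequency.train <- function(data){
  # Class proportions in the train set act as sampling weights.
  counts <- table(data$class)
  return(counts / sum(counts))
}

frequency.predict <- function(params, data){
  # Sample labels with probability proportional to their
  # frequency in the train set.
  return(sample(names(params), size = nrow(data),
                replace = TRUE, prob = as.numeric(params)))
}
```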
The frequency\-based classifier achieved an accuracy of \\(85\.5\\%\\). This is much lower than the most\-frequent\-class model (\\(90\.8\\%\\)), but it was able to detect some of the *‘Upstairs’* instances.
### 2\.6\.4 Other Dummy Classifiers
Another dummy model that can be used for classification is one that applies a simple threshold, as in the following snippet:
```
# A one-rule classifier based on a single threshold.
simple.threshold.predict <- function(feature1, threshold){
  if(feature1 < threshold)
    return("A")
  else
    return("B")
}
```
In fact, the previous rule can be thought of as a very simple decision tree with only one root node. Surprisingly, simple rules can sometimes be difficult to beat with more complex models. In this section I’ve been focusing on classification problems, but dummy models can also be constructed for **regression**. The simplest one would be to predict the mean value of \\(y\\) regardless of the feature values. Another dummy model could predict a random value between the min and max of \\(y\\). If there is a categorical feature, one could predict the mean value based on the category. In fact, that is what we did in chapter [1](intro.html#intro) in the simple regression example.
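For instance, a mean\-predicting dummy regressor could be sketched like this (the function names are illustrative):

```
# Sketch of a dummy regressor that always predicts
# the mean of y observed in the train set.
mean.regressor.train <- function(y){
  return(mean(y))
}

mean.regressor.predict <- function(params, data){
  return(rep(params, nrow(data)))
}
```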
In summary, one can construct any type of dummy model depending on the application. The takeaway is that dummy models allow us to assess how more complex models perform with respect to some baselines and help us to detect possible problems in the data and features. What I typically do when solving a problem is to start with simple models and/or rules and then try more complex models. Of course, manual thresholds and simple rules can work remarkably well in some situations but they are not scalable. Depending on the use case, one can just implement the simple solution or go for something more complex if the system is expected to grow or be used in more general ways.
2\.7 Summary
------------
This chapter focused on **classification** models. Classifiers predict a category based on the input features. Here, it was demonstrated how classifiers can be used to detect indoor locations, classify activities, and recognize hand gestures.
* **\\(k\\)\-Nearest Neighbors (\\(k\\)\-NN)** predicts the class of a test point as the majority class of the \\(k\\) nearest neighbors.
* Some classification performance metrics are **recall**, **specificity**, **precision**, **accuracy**, **F1\-score**, etc.
* **Decision trees** are easy\-to\-interpret classifiers trained recursively based on feature importance (for example, purity).
* **Naive Bayes** is a type of classifier where features are assumed to be independent.
* **Dynamic Time Warping (DTW)** computes the similarity between two timeseries after aligning them in time. This can be used for classification, for example, in combination with \\(k\\)\-NN.
* **Dummy models** can help to spot possible errors in the data and can also be used as baselines.
Chapter 3 Predicting Behavior with Ensemble Learning
====================================================
In the previous chapters, we have been building single models, either for classification or regression. With **ensemble learning**, the idea is to train several models and combine their results to increase the performance. Usually, ensemble methods outperform single models. In the context of *ensemble learning*, the individual models whose results are to be combined are known as **base learners**. Base learners can be of the same type (homogeneous) or of different types (heterogeneous). Examples of ensemble methods are **Bagging**, **Random Forest**, and **Stacked Generalization**. In the following sections, these three will be described, along with example applications in behavior analysis.
3\.1 Bagging
------------
Bagging stands for “bootstrap aggregating” and is an ensemble learning method proposed by Breiman ([1996](#ref-breimanBagging1996)). Ummm…, *bootstrap*, *aggregating*? Let’s start with the *aggregating* part. As the name implies, this method is based on training several *base learners* (e.g., decision trees) and combining their outputs to produce a single final prediction. One way to combine the results is by taking the majority vote for classification tasks or the average for regression. In an ideal case, we would have enough data to train each *base learner* with an independent train set. However, in practice we may only have a single train set of limited size. Training several *base learners* with the same train set is equivalent to having a single learner, provided that the training procedure of the base learners is deterministic. Even if the training procedure is not deterministic, the resulting models might be very similar. What we would like is base learners that are accurate but, at the same time, diverse. Then, how can those base learners be trained? Well, this is where the *bootstrap* part comes into play.
Bootstrapping means generating new train sets by sampling instances with replacement from the original train set. If the original train set has \\(N\\) instances, the method selects \\(N\\) instances at random to produce a new train set. *With replacement* means that repeated instances are allowed. This has the effect of generating a new train set of size \\(N\\) in which some instances are omitted and others are duplicated. By using this method, \\(n\\) different train sets can be generated and used to train \\(n\\) different learners.
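In R, a bootstrapped train set can be generated with `sample()` using `replace = TRUE`. Here is a quick sketch, assuming a data frame `trainSet`:

```
# Generate one bootstrapped train set from trainSet.
N <- nrow(trainSet)
idxs <- sample(1:N, size = N, replace = TRUE)
bootstrappedSet <- trainSet[idxs, ]
# Some rows appear more than once and others not at all.
```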
It has been shown that having more diverse base learners increases performance. One way to generate diverse learners is by using different train sets as just described. In his original work, Breiman ([1996](#ref-breimanBagging1996)) used decision trees as base learners. Decision trees are considered to be very unstable. This means that small changes in the train set produce very different trees \- but this is a good thing for bagging! Most of the time, the aggregated predictions will produce better results than the best individual learner from the ensemble.
Figure [3\.1](ensemble.html#fig:baggingexample) shows bootstrapping in action. The train set is sampled with replacement \\(3\\) times. The numbers represent indices to arbitrary train instances. Here, we can see that in the first sample, the instance number \\(5\\) is missing but instead, instance \\(2\\) is duplicated. All samples have five elements. Then, each sample is used to train individual decision trees.
FIGURE 3\.1: Bagging example.
One of the disadvantages of ensemble methods is their higher computational cost, both during training and inference. Another disadvantage is that they are more difficult to interpret. Still, there exist model\-agnostic interpretability methods ([Molnar 2019](#ref-molnarInterpretable)) that can help to analyze the results. In the next section, I will show you how to implement your own Bagging model with decision trees in R.
### 3\.1\.1 Activity Recognition with Bagging
`bagging_activities.R` `iterated_bagging_activities.R`
In this section, we will implement Bagging with decision trees. Then, we will test our implementation on the *SMARTPHONE ACTIVITIES* dataset. The following code snippet shows the implementation of the `my_bagging()` function. The complete code is in the script `bagging_activities.R`. The function accepts three arguments. The first one is the formula, the second one is the train set, and the third argument is the number of base learners (\\(10\\) by default). Here, we will use the `rpart` package to train the decision trees.
```
# Define our bagging classifier.
my_bagging <- function(theFormula, data, ntrees = 10){
N <- nrow(data)
# A list to store the individual trees
models <- list()
# Train individual trees and add each to 'models' list.
for(i in 1:ntrees){
# Bootstrap instances from data.
idxs <- sample(1:N, size = N, replace = T)
bootstrappedInstances <- data[idxs,]
treeModel <- rpart(as.formula(theFormula),
bootstrappedInstances,
xval = 0,
cp = 0)
models <- c(models, list(treeModel))
}
res <- structure(list(models = models),
class = "my_bagging")
return(res)
}
```
First, a list that will store each individual learner is defined with `models <- list()`. Then, the function iterates `ntrees` times. In each iteration, a bootstrapped train set is generated and used to train an `rpart` model. The `xval = 0` parameter tells rpart not to perform cross\-validation internally. The `cp` parameter, which controls the amount of pruning, is set to \\(0\\). Its default value of \\(0\.01\\) leads to smaller and thus more similar trees; since we want diversity, setting it to \\(0\\) grows bigger and, as a consequence, more diverse trees.
Finally, an object of class `"my_bagging"` is returned. This is just a list containing the trained base learners. The `class = "my_bagging"` argument is important. It tells R that this object is of type `my_bagging`. Setting the class will allow us to use the generic `predict()` function, and R will automatically call the corresponding `predict.my_bagging()` function which we will shortly define. The class name and the function name after `predict.` need to be the same.
```
# Define the predict function for my_bagging.
predict.my_bagging <- function(object, newdata){
ntrees <- length(object$models)
N <- nrow(newdata)
# Matrix to store predictions for each instance
# in newdata and for each tree.
M <- matrix(data = rep("",N * ntrees), nrow = N)
# Populate matrix.
# Each column of M contains all predictions for a given tree.
# Each row contains the predictions for a given instance.
for(i in 1:ntrees){
m <- object$models[[i]]
tmp <- as.character(predict(m, newdata, type = "class"))
M[,i] <- tmp
}
# Final predictions
predictions <- character()
# Iterate through each row of M.
for(i in 1:N){
# Compute class counts
classCounts <- table(M[i,])
# Get the class with the most counts.
predictions <- c(predictions,
names(classCounts)[which.max(classCounts)])
}
return(predictions)
}
```
Now let’s dissect the `predict.my_bagging()` function. First, note that the function name starts with `predict.` followed by the type of object. Following this convention will allow us to call `predict()` and R will call the corresponding method based on the class of the object. The first argument `object` is an object of type “my\_bagging” as returned by `my_bagging()`. The second argument `newdata` is the test set we want to generate predictions for. A matrix `M` that will store the predictions of each tree is defined. This matrix has \\(N\\) rows and \\(ntrees\\) columns, where \\(N\\) is the number of instances in `newdata` and \\(ntrees\\) is the number of trees, so each column stores the predictions of one base learner. The function iterates through each base learner (rpart in this case) and makes a prediction for each instance in `newdata`, storing the results in matrix `M`. Finally, it iterates through each instance and takes the majority vote, that is, the most common class among the base learners’ predictions.
Let’s test our Bagging function! We will test it with the activity recognition dataset introduced in section [2\.3\.1](classification.html#activityRecognition) and set the number of trees to \\(10\\). The following code shows how to use our bagging functions to train the model and make predictions on a test set.
```
baggingClassifier <- my_bagging(class ~ ., trainSet, ntrees = 10)
predictions <- predict(baggingClassifier, testSet)
```
The following will perform \\(5\\)\-fold cross\-validation and print the results.
```
library(caret) # Provides confusionMatrix().
set.seed(1234)
k <- 5
folds <- sample(k, size = nrow(df), replace = TRUE)
# Variable to store ground truth classes.
groundTruth <- NULL
# Variable to store the classifier's predictions.
predictions <- NULL
for(i in 1:k){
trainSet <- df[which(folds != i), ]
testSet <- df[which(folds == i), ]
treeClassifier <- my_bagging(class ~ ., trainSet, ntrees = 10)
foldPredictions <- predict(treeClassifier, testSet)
predictions <- c(predictions, as.character(foldPredictions))
groundTruth <- c(groundTruth, as.character(testSet$class))
}
cm <- confusionMatrix(as.factor(predictions), as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.861388
# Print other metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: Downstairs 0.5378788 0.9588957 0.5855670 0.5607108
#> Class: Jogging 0.9618462 0.9820722 0.9583078 0.9600737
#> Class: Sitting 0.9607843 0.9982394 0.9702970 0.9655172
#> Class: Standing 0.9146341 0.9988399 0.9740260 0.9433962
#> Class: Upstairs 0.5664557 0.9563310 0.6313933 0.5971643
#> Class: Walking 0.9336857 0.9226850 0.8827806 0.9075199
# Print average performance metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity", "Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.8125475 0.9695105 0.8337286 0.8223970
```
The accuracy is much better now compared to the \\(0\.789\\) obtained in the previous chapter without Bagging!
The effect of adding more trees to the ensemble can also be analyzed. The script `iterated_bagging_activities.R` performs \\(5\\)\-fold cross\-validation as we just did, but starts with \\(1\\) tree in the ensemble and repeats the process, adding more trees each time, up to \\(50\\).
Figure [3\.2](ensemble.html#fig:iteratedBagging) shows the effect of different numbers of trees on the train and test accuracy. Here, we can see that \\(3\\) trees already produce a significant performance increase compared to \\(1\\) or \\(2\\) trees. This makes sense, since having only \\(2\\) trees does not add information: if the two trees produce different predictions, the final prediction becomes a random choice between the two labels. In fact, \\(2\\) trees produced worse results than \\(1\\) tree, although we cannot draw strong conclusions since the experiment was run only once. One possibility for breaking ties when there are only two trees is to use the averaged probabilities of each label. rpart can return those probabilities by setting `type = "prob"` in the `predict()` function, which is its default behavior. Completing this is left as an exercise for the reader; a hint is sketched below. In the following section, Random Forest will be described, which is a way of introducing more diversity to the base learners.
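As a starting point for that exercise, the following sketch (in terms of the `object` and `newdata` arguments of `predict.my_bagging()`) shows how the per\-tree probability matrices could be obtained:
```
# Probabilities from the first tree: one row per instance in newdata,
# one column per class.
probs <- predict(object$models[[1]], newdata, type = "prob")
# Summing (or averaging) such matrices across all trees and taking
# which.max() per row yields tie-free predictions.
```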
FIGURE 3\.2: Bagging results for different number of trees.
3\.2 Random Forest
------------------
`rf_activities.R` `iterated_rf_activities.R` `iterated_bagging_rf.R`
A Random Forest can be thought of as an extension of Bagging. Random Forests were proposed by Breiman ([2001](#ref-breimanRF)) and, as the name implies, they introduce more randomness into the individual trees. The objective is to obtain decorrelated trees. With Bagging, most of the trees are very similar near the root because the most important variables are selected first (see chapter [2](classification.html#classification)). To avoid this, a simple modification can be introduced: when building a tree, instead of evaluating all features at each split to find the most important one (based on some purity measure like *information gain*), only a random subset of the features (usually \\(\\sqrt{\|features\|}\\)) is sampled. This simple modification produces more decorrelated trees and, in general, results in better performance compared to Bagging.
In R, the most famous library that implements Random Forest is…, yes you guessed it: `randomForest` ([Liaw and Wiener 2002](#ref-randomForest)). The following code snippet shows how to fit a Random Forest with \\(10\\) trees.
```
library(randomForest)
rf <- randomForest(class ~ ., trainSet, ntree = 10)
```
By default, `ntree = 500`. Among other things, you can control how many random features are sampled at each split with the `mtry` argument. By default, for classification `mtry = floor(sqrt(ncol(x)))` and for regression `mtry = max(floor(ncol(x)/3), 1)`.
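As a minimal illustration (assuming `trainSet` is loaded as before), both defaults can be overridden explicitly:
```
# Grow 100 trees and sample 3 random features at each split instead
# of the default floor(sqrt(number of predictors)).
rf <- randomForest(class ~ ., trainSet, ntree = 100, mtry = 3)
```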
The following code performs \\(5\\)\-fold cross\-validation with the activities dataset already stored in `df` and prints the results. The complete code can be found in the script `randomForest_activities.R`.
```
set.seed(1234)
k <- 5
folds <- sample(k, size = nrow(df), replace = TRUE)
# Variable to store ground truth classes.
groundTruth <- NULL
# Variable to store the classifier's predictions.
predictions <- NULL
for(i in 1:k){
trainSet <- df[which(folds != i), ]
testSet <- df[which(folds == i), ]
rf <- randomForest(class ~ ., trainSet, ntree = 10)
foldPredictions <- predict(rf, testSet)
predictions <- c(predictions, as.character(foldPredictions))
groundTruth <- c(groundTruth, as.character(testSet$class))
}
cm <- confusionMatrix(as.factor(predictions), as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.870801
# Print other metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: Downstairs 0.5094697 0.9652352 0.6127563 0.5563599
#> Class: Jogging 0.9784615 0.9831268 0.9613059 0.9698079
#> Class: Sitting 0.9803922 0.9992175 0.9868421 0.9836066
#> Class: Standing 0.9512195 0.9990333 0.9790795 0.9649485
#> Class: Upstairs 0.5363924 0.9636440 0.6608187 0.5921397
#> Class: Walking 0.9543489 0.9151933 0.8752755 0.9131034
# Print average performance metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity", "Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.8183807 0.9709083 0.8460130 0.8299943
```
Those results are better than the previous ones with Bagging. Figure [3\.3](ensemble.html#fig:iteratedRF) shows the results of \\(5\\)\-fold cross\-validation for different numbers of trees (the complete script is in `iterated_randomForest_activities.R`). From these results, we can see a behavior similar to that of Bagging: the accuracy increases very quickly and then stabilizes.
FIGURE 3\.3: Random Forest results for different number of trees.
If we directly compare Bagging vs. Random Forest, Random Forest outperforms Bagging (Figure [3\.4](ensemble.html#fig:iteratedBaggingRF)). The complete code to generate the plot is in the script `iterated_bagging_rf.R`.
FIGURE 3\.4: Bagging vs. Random Forest.
3\.3 Stacked Generalization
---------------------------
Stacked Generalization (a.k.a *Stacking*) is a powerful ensemble learning method proposed by Wolpert ([1992](#ref-wolpertStacked)). The method consists of training a set of **powerful** base learners (*first\-level learners*) and combining their outputs by *stacking* them to form a new train set. The base learners’ outputs are their predictions and optionally, the class probabilities of those predictions. The predictions of the base learners are known as the **meta\-features**. The meta\-features along with their true labels \\(y\\) are used to build a new train set that is used to train a **meta\-learner**. The rationale behind this is that the predictions themselves contain information that can be used by the *meta\-learner*.
The procedure to train a Stacking model is as follows:
1. Define a set of first\-level learners \\(\\mathscr{L}\\) and a *meta\-learner*.
2. Train the first\-level learners \\(\\mathscr{L}\\) with training data \\(\\textbf{D}\\).
3. Predict the classes of \\(\\textbf{D}\\) with each learner in \\(\\mathscr{L}\\). Each learner produces a prediction vector \\(\\textbf{p}\_i\\) with \\(\\lvert\\textbf{D}\\rvert\\) elements.
4. Build a matrix \\(\\textbf{M}\_{\\lvert\\textbf{D}\\rvert \\times \\lvert\\mathscr{L}\\rvert}\\) by column binding (stacking) the prediction vectors. Then, add the true labels \\(\\textbf{y}\\) to generate the new train set \\(\\textbf{D}'\\) (see the sketch after this list).
5. Train the *meta\-learner* with \\(\\textbf{D}'\\).
6. Output the final stacking model \\(\\mathcal{S}: \\langle \\mathscr{L}, \\textit{meta\-learner} \\rangle\\).
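The following schematic sketch illustrates steps \\(3\\) and \\(4\\); here `p1`, `p2`, `p3` are hypothetical prediction vectors from three first\-level learners and `y` holds the true labels:
```
# Column-bind (stack) the prediction vectors and attach the true
# labels to obtain the new train set D'.
Dprime <- data.frame(pred1 = p1, pred2 = p2, pred3 = p3, label = y)
```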
Figure [3\.5](ensemble.html#fig:stackingProcess) shows the procedure to generate the new training data \\(\\textbf{D}'\\) used to train the *meta\-learner*.
FIGURE 3\.5: Process to generate the new train set D’ by column\-binding the predictions of the first\-level learners and adding the true labels. (Reprinted from *Information Fusion* Vol. 40, Enrique Garcia\-Ceja, Carlos E. Galván\-Tejada, and Ramon Brena, “Multi\-view stacking for activity recognition with sound and accelerometer data” pp. 45\-56, Copyright 2018, with permission from Elsevier, doi: [https://doi.org/10\.1016/j.inffus.2017\.06\.004](https://doi.org/10.1016/j.inffus.2017.06.004)).
Note that steps \\(2\\) and \\(3\\) can lead to overfitting because the predictions are made with the same data used to train the models. To avoid this, steps \\(2\\) and \\(3\\) are usually performed using \\(k\\)\-fold cross\-validation. After \\(\\textbf{D}'\\) has been generated, the learners in \\(\\mathscr{L}\\) can be retrained using all data in \\(\\textbf{D}\\).
Ting and Witten ([1999](#ref-ting1999)) showed that the performance can increase by adding confidence information about the predictions. For example, the probabilities produced by the first\-level learners. Most classifiers can output probabilities.
At prediction time, each first\-level learner predicts the class, and optionally, the class probabilities of a given instance. These predictions are used to form a feature vector (*meta\-features*) that is fed to the *meta\-learner* to obtain the final prediction. Usually, first\-level learners are high performing classifiers such as Random Forests, Support Vector Machines, Neural Networks, etc. The *meta\-learner* should also be a powerful classifier.
In the next section, I will introduce *Multi\-view Stacking*, which is similar to Stacked Generalization except that each first\-level learner is trained with features from a different *view*.
3\.4 Multi\-view Stacking for Home Tasks Recognition
----------------------------------------------------
`stacking_algorithms.R` `stacking_activities.R`
**Multi\-view learning** refers to the case when an instance can be characterized by two or more independent ‘views’. For example, for webpage classification one can extract features from a webpage’s text but also from the links pointing to it. Usually, there is the assumption that the views are independent and that each is sufficient to solve the problem. Then, why combine them? In many cases, each view provides additional and complementary information, thus allowing us to train better models.
The simplest thing one can do is to extract features from each view, aggregate them, and train a single model. This approach usually works well but has some limitations. Each view may have different statistical properties, thus, different types of models may be needed for each view. When aggregating features from all views, new variable correlations may be introduced which could impact the performance. Another limitation is that features need to be in the same format (feature vectors, images, etc.), so they can be aggregated.
For video classification, we could have two views. One represented by sequences of images, and the other by the corresponding audio. For the video part, we could encode the features as the images themselves, i.e., matrices. Then, a Convolutional Neural Network (covered in chapter [8](deeplearning.html#deeplearning)) could be trained directly from those images. For the audio part, statistical features can be extracted and stored as normal feature vectors. In this case, the two representations (views) are different. One is a matrix and the other a one\-dimensional feature vector. Combining them to train a single classifier could be problematic given the nature of the views and their different encoding formats. Instead, we can train two models, one for each view and then combine the results. This is precisely the idea of *Multi\-view Stacking* ([Garcia\-Ceja, Galván\-Tejada, and Brena 2018](#ref-garcia2018multiview)). Train a different model for each view and combine the outputs like in *Stacking*.
Here, *Multi\-view Stacking* will be demonstrated using the *HOME TASKS* dataset. This dataset was collected from two sources: acceleration and audio. The acceleration was recorded with a wrist\-band watch and the audio with a cellphone. The dataset consists of \\(7\\) common home tasks: *‘mop floor’*, *‘sweep floor’*, *‘type on computer keyboard’*, *‘brush teeth’*, *‘wash hands’*, *‘eat chips’*, and *‘watch t.v.’*. Three volunteers performed each activity for approximately \\(3\\) minutes.
The acceleration and audio signals were segmented into \\(3\\)\-second windows. From each window, different features were extracted. From the acceleration, \\(16\\) features were extracted from the \\(3\\) axes (\\(x\\),\\(y\\),\\(z\\)) such as mean, standard deviation, maximum values, mean magnitude, area under the curve, etc. From the audio signals, \\(12\\) features were extracted, namely, Mel Frequency Cepstral Coefficients (MFCCs). To preserve volunteers’ privacy, the original audio was not released. The dataset already contains the extracted features from acceleration and audio. The first column is the label.
In order to implement *Multi\-view Stacking*, two Random Forests will be trained, one for each view (acceleration and audio). The predicted outputs will be stacked to form the new training set \\(D'\\) and a Random Forest trained with \\(D'\\) will act as the *meta\-learner*.
The next code snippet taken from `stacking_algorithms.R` shows the multi\-view stacking function implemented in R.
```
mvstacking <- function(D, v1cols, v2cols, k = 10){
# Generate folds for internal cross-validation.
folds <- sample(1:k, size = nrow(D), replace = T)
trueLabels <- NULL
predicted.v1 <- NULL # predicted labels with view 1
predicted.v2 <- NULL # predicted labels with view 2
probs.v1 <- NULL # predicted probabilities with view 1
probs.v2 <- NULL # predicted probabilities with view 2
# Perform internal cross-validation.
for(i in 1:k){
train <- D[folds != i, ]
test <- D[folds == i, ]
trueLabels <- c(trueLabels, as.character(test$label))
# Train learner with view 1 and make predictions.
m.v1 <- randomForest(label ~ .,
train[,c("label",v1cols)], ntree = 100)
raw.v1 <- predict(m.v1, newdata = test[,v1cols], type = "prob")
probs.v1 <- rbind(probs.v1, raw.v1)
pred.v1 <- as.character(predict(m.v1,
newdata = test[,v1cols],
type = "class"))
predicted.v1 <- c(predicted.v1, pred.v1)
# Train learner with view 2 and make predictions.
m.v2 <- randomForest(label ~ .,
train[,c("label",v2cols)], ntree = 100)
raw.v2 <- predict(m.v2, newdata = test[,v2cols], type = "prob")
probs.v2 <- rbind(probs.v2, raw.v2)
pred.v2 <- as.character(predict(m.v2,
newdata = test[,v2cols],
type = "class"))
predicted.v2 <- c(predicted.v2, pred.v2)
}
# Build first-order learners with all data.
learnerV1 <- randomForest(label ~ .,
D[,c("label",v1cols)], ntree = 100)
learnerV2 <- randomForest(label ~ .,
D[,c("label",v2cols)], ntree = 100)
# Construct meta-features.
metaFeatures <- data.frame(label = trueLabels,
((probs.v1 + probs.v2) / 2),
pred1 = predicted.v1,
pred2 = predicted.v2)
# Train the meta-learner.
metalearner <- randomForest(label ~ .,
metaFeatures, ntree = 100)
res <- structure(list(metalearner=metalearner,
learnerV1=learnerV1,
learnerV2=learnerV2,
v1cols = v1cols,
v2cols = v2cols),
class = "mvstacking")
return(res)
}
```
The first argument `D` is a data frame containing the training data. `v1cols` and `v2cols` are the column names of the two views. Finally, argument `k` specifies the number of folds for the internal cross\-validation to avoid overfitting (Steps \\(2\\) and \\(3\\) as described in the generalized stacking procedure).
The function iterates through each fold and trains a Random Forest with the train data for each of the two views. Within each iteration, the trained models are used to predict the labels and probabilities on the internal test set. Predicted labels and probabilities on the internal test sets are concatenated across all folds (`predicted.v1`, `predicted.v2`).
After cross\-validation, the meta\-features are generated by creating a data frame with the predictions of each view. Additionally, the average of the class probabilities is added as a meta\-feature, and the true labels are added as well. The purpose of the internal cross\-validation is to avoid overfitting, but in the end we do not want to waste data, so both learners are re\-trained with all the data in `D`.
Finally, the *meta\-learner* which is also a Random Forest is trained with the *meta\-features* data frame. A list with all the required information to make predictions is created. This includes first\-level learners, the meta\-learner, and the column names for each view so we know how to divide the data frame into two views at prediction time.
The following code snippet shows the implementation for making predictions using a trained stacking model.
```
predict.mvstacking <- function(object, newdata){
# Predict probabilities with view 1.
raw.v1 <- predict(object$learnerV1,
newdata = newdata[,object$v1cols],
type = "prob")
# Predict classes with view 1.
pred.v1 <- as.character(predict(object$learnerV1,
newdata = newdata[,object$v1cols],
type = "class"))
# Predict probabilities with view 2.
raw.v2 <- predict(object$learnerV2,
newdata = newdata[,object$v2cols],
type = "prob")
# Predict classes with view 2.
pred.v2 <- as.character(predict(object$learnerV2,
newdata = newdata[,object$v2cols],
type = "class"))
# Build meta-features
metaFeatures <- data.frame(((raw.v1 + raw.v2) / 2),
pred1 = pred.v1,
pred2 = pred.v2)
# Set levels on factors to avoid errors in randomForest predict.
levels(metaFeatures$pred1) <- object$metalearner$classes
levels(metaFeatures$pred2) <- object$metalearner$classes
predictions <- as.character(predict(object$metalearner,
newdata = metaFeatures,
type = "class"))
return(predictions)
}
```
The `object` parameter is the trained model and `newdata` is a data frame for which we want to make predictions. First, labels and probabilities are predicted using the two views. Then, a data frame with the *meta\-features* is assembled from the predicted labels and the averaged probabilities. Finally, the *meta\-learner* predicts the final classes from the *meta\-features*.
The script `stacking_activities.R` shows how to use our `mvstacking()` function. With the following two lines we can train and make predictions.
```
m.stacking <- mvstacking(trainset, v1cols, v2cols, k = 10)
pred.stacking <- predict(m.stacking, newdata = testset[,-1])
```
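Note that `v1cols` and `v2cols` must be defined beforehand. One plausible way to do this (an illustrative assumption; the actual script may select the columns differently) is to match a naming prefix:
```
# Hypothetical: audio features prefixed "v1_" and accelerometer
# features prefixed "v2_" in the dataset's column names.
v1cols <- grep("^v1_", colnames(dataset), value = TRUE)
v2cols <- grep("^v2_", colnames(dataset), value = TRUE)
```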
The script performs \\(10\\)\-fold cross\-validation and, for the sake of comparison, builds three models: one with only audio features, one with only acceleration features, and the Multi\-view Stacking one combining both types of features.
Table [3\.1](ensemble.html#tab:stackingResults) shows the results for each view and with Multi\-view Stacking. Clearly, combining both views with Multi\-view Stacking achieved the best results compared to using a single view.
TABLE 3\.1: Stacking results.
| | Accuracy | Recall | Specificity | Precision | F1 |
| --- | --- | --- | --- | --- | --- |
| Audio | 0\.8535 | 0\.8497 | 0\.9753 | 0\.8564 | 0\.8521 |
| Accelerometer | 0\.8557 | 0\.8470 | 0\.9760 | 0\.8523 | 0\.8487 |
| Multi\-view Stacking | 0\.9365 | 0\.9318 | 0\.9895 | 0\.9333 | 0\.9325 |
FIGURE 3\.6: Confusion matrices.
Figure [3\.6](ensemble.html#fig:stackingCMs) shows the resulting confusion matrices for the three cases. By looking at the per\-class recall values, it seems that audio features are better at recognizing some activities like *‘sweep’* and *‘mop floor’*, whereas the accelerometer features are better for classifying *‘eat chips’*, *‘wash hands’*, *‘type on keyboard’*, etc.; thus, the two views are complementary to some extent. All recall values when using Multi\-view Stacking are higher than those of either single view.
3\.5 Summary
------------
In this chapter, several ensemble learning methods were introduced. In general, ensemble models perform better than single models.
* The main idea of ensemble learning is to train several models and combine their results.
* **Bagging** is an ensemble method consisting of \\(n\\) *base\-learners*, each, trained with bootstrapped training samples.
* **Random Forest** is an ensemble of trees. It introduces randomness to the trees by selecting random features in each split.
* Another ensemble method is called **stacked generalization**. It consists of a set of *base\-learners* and a *meta\-learner*. The latter is trained using the outputs of the *base\-learners*.
* **Multi\-view learning** can be used when an instance can be represented by two or more *views* (for example, different sensors).
3\.1 Bagging
------------
Bagging stands for “bootstrap aggregating” and is an ensemble learning method proposed by Breiman ([1996](#ref-breimanBagging1996)). Ummm…, *Bootstrap*, *aggregating*? Let’s start with the *aggregating* part. As the name implies, this method is based on training several *base learners* (e.g., decision trees) and combining their outputs to produce a single final prediction. One way to combine the results is by taking the majority vote for classification tasks or the average for regression. In an ideal case, we would have enough data to train each *base learner* with an independent train set. However, in practice we may only have a single train set of limited size. Training several *base learners* with the same train set is equivalent to having a single learner, provided that the training procedure of the base learners is deterministic. Even if the training procedure is not deterministic, the resulting models might be very similar. What we would like to have is accurate base learners but at the same time they should be diverse. Then, how can those base learners be trained? Well, this is where the *bootstrap* part comes into play.
Bootstrapping means generating new train sets by sampling instances with replacement from the original train set. If the original train set has \\(N\\) instances, the method selects \\(N\\) instances at random to produce a new train set. *With replacement* means that repeated instances are allowed. This has the effect of generating a new train set of size \\(N\\) by removing some instances and duplicating other instances. By using this method, \\(n\\) different train sets can be generated and used to train \\(n\\) different learners.
It has been shown that having more diverse base learners increases performance. One way to generate diverse learners is by using different train sets as just described. In his original work, Breiman ([1996](#ref-breimanBagging1996)) used decision trees as base learners. Decision trees are considered to be very unstable. This means that small changes in the train set produce very different trees \- but this is a good thing for bagging! Most of the time, the aggregated predictions will produce better results than the best individual learner from the ensemble.
Figure [3\.1](ensemble.html#fig:baggingexample) shows bootstrapping in action. The train set is sampled with replacement \\(3\\) times. The numbers represent indices to arbitrary train instances. Here, we can see that in the first sample, the instance number \\(5\\) is missing but instead, instance \\(2\\) is duplicated. All samples have five elements. Then, each sample is used to train individual decision trees.
FIGURE 3\.1: Bagging example.
One of the disadvantages of ensemble methods is their higher computational cost both during training and inference. Another disadvantage of ensemble methods is that they are more difficult to interpret. Still, there exist model agnostic interpretability methods ([Molnar 2019](#ref-molnarInterpretable)) that can help to analyze the results. In the next section, I will show you how to implement your own Bagging model with decision trees in R.
### 3\.1\.1 Activity Recognition with Bagging
`bagging_activities.R` `iterated_bagging_activities.R`
In this section, we will implement Bagging with decision trees. Then, we will test our implementation on the *SMARTPHONE ACTIVITIES* dataset. The following code snippet shows the implementation of the `my_bagging()` function. The complete code is in the script `bagging_activities.R`. The function accepts three arguments. The first one is the formula, the second one is the train set, and the third argument is the number of base learners (\\(10\\) by default). Here, we will use the `rpart` package to train the decision trees.
```
# Define our bagging classifier.
my_bagging <- function(theFormula, data, ntrees = 10){
N <- nrow(data)
# A list to store the individual trees
models <- list()
# Train individual trees and add each to 'models' list.
for(i in 1:ntrees){
# Bootstrap instances from data.
idxs <- sample(1:N, size = N, replace = T)
bootstrappedInstances <- data[idxs,]
treeModel <- rpart(as.formula(theFormula),
bootstrappedInstances,
xval = 0,
cp = 0)
models <- c(models, list(treeModel))
}
res <- structure(list(models = models),
class = "my_bagging")
return(res)
}
```
First, a list that will store each individual learner is defined `models <- list()`. Then, the function iterates `ntrees` times. In each iteration, a bootstrapped train set is generated and used to train a `rpart` model. The `xval = 0` parameter tells rpart not to perform cross\-validation internally. The `cp` parameter is also set to \\(0\\). This value controls the amount of pruning. The default is \\(0\.01\\) leading to smaller trees. This makes the trees to be more similar but since we want diversity we are setting this to \\(0\\) so bigger trees are generated and as a consequence, more diverse.
Finally, an object of class `"my_bagging"` is returned. This is just a list containing the trained base learners. The `class = "my_bagging"` argument is important. It tells R that this object is of type `my_bagging`. Setting the class will allow us to use the generic `predict()` function, and R will automatically call the corresponding `predict.my_bagging()` function which we will shortly define. The class name and the function name after `predict.` need to be the same.
```
# Define the predict function for my_bagging.
predict.my_bagging <- function(object, newdata){
ntrees <- length(object$models)
N <- nrow(newdata)
# Matrix to store predictions for each instance
# in newdata and for each tree.
M <- matrix(data = rep("",N * ntrees), nrow = N)
# Populate matrix.
# Each column of M contains all predictions for a given tree.
# Each row contains the predictions for a given instance.
for(i in 1:ntrees){
m <- object$models[[i]]
tmp <- as.character(predict(m, newdata, type = "class"))
M[,i] <- tmp
}
# Final predictions
predictions <- character()
# Iterate through each row of M.
for(i in 1:N){
# Compute class counts
classCounts <- table(M[i,])
# Get the class with the most counts.
predictions <- c(predictions,
names(classCounts)[which.max(classCounts)])
}
return(predictions)
}
```
Now let’s dissect the `predict.my_bagging()` function. First, note that the function name starts with `predict.` followed by the type of object. Following this convention will allow us to call `predict()` and R will call the corresponding method based on the class of the object. The first argument `object` is an object of type “my\_bagging” as returned by `my_bagging()`. The second argument `newdata` is the test set we want to generate predictions for. A matrix `M` that will store the predictions for each tree is defined. This matrix has \\(N\\) rows and \\(ntrees\\) columns where \\(N\\) is the number of instances in `newdata` and \\(ntrees\\) is the number of trees. Thus, each column stores the predictions for each of the base learners. This function iterates through each base learner (rpart in this case), and makes a prediction for each instance in `newdata`. Then, the results are stored in matrix `M`. Finally, it iterates through each instance and computes the most common predicted class from the base learners.
Let’s test our Bagging function! We will test it with the activity recognition dataset introduced in section [2\.3\.1](classification.html#activityRecognition) and set the number of trees to \\(10\\). The following code shows how to use our bagging functions to train the model and make predictions on a test set.
```
baggingClassifier <- my_bagging(class ~ ., trainSet, ntree = 10)
predictions <- predict(baggingClassifier, testSet)
```
The following will perform \\(5\\)\-fold cross\-validation and print the results.
```
set.seed(1234)
k <- 5
folds <- sample(k, size = nrow(df), replace = TRUE)
# Variable to store ground truth classes.
groundTruth <- NULL
# Variable to store the classifier's predictions.
predictions <- NULL
for(i in 1:k){
trainSet <- df[which(folds != i), ]
testSet <- df[which(folds == i), ]
treeClassifier <- my_bagging(class ~ ., trainSet, ntree = 10)
foldPredictions <- predict(treeClassifier, testSet)
predictions <- c(predictions, as.character(foldPredictions))
groundTruth <- c(groundTruth, as.character(testSet$class))
}
cm <- confusionMatrix(as.factor(predictions), as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.861388
# Print other metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: Downstairs 0.5378788 0.9588957 0.5855670 0.5607108
#> Class: Jogging 0.9618462 0.9820722 0.9583078 0.9600737
#> Class: Sitting 0.9607843 0.9982394 0.9702970 0.9655172
#> Class: Standing 0.9146341 0.9988399 0.9740260 0.9433962
#> Class: Upstairs 0.5664557 0.9563310 0.6313933 0.5971643
#> Class: Walking 0.9336857 0.9226850 0.8827806 0.9075199
# Print average performance metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity", "Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.8125475 0.9695105 0.8337286 0.8223970
```
The accuracy was much better now compared to \\(0\.789\\) from the previous chapter without using Bagging!
The effect of adding more trees to the ensemble can also be analyzed. The script `iterated_bagging_activities.R` does \\(5\\)\-fold cross\-validation as we just did but starts with \\(1\\) tree in the ensemble and repeats the process by adding more trees until \\(50\\).
Figure [3\.2](ensemble.html#fig:iteratedBagging) shows the effect on the train and test accuracy with different number of trees. Here, we can see that \\(3\\) trees already produce a significant performance increase compared to \\(1\\) or \\(2\\) trees. This makes sense since having only \\(2\\) trees does not add additional information. If the two trees produce different predictions then, it becomes a random choice between the two labels. In fact, \\(2\\) trees produced worse results than \\(1\\) tree. But we cannot make strong conclusions since the experiment was run only once. One possibility to break ties when there are only two trees is to use the averaged probabilities of each label. rpart can return those probabilities by setting `type = "prob"` in the `predict()` function which is the default behavior. This is left as an exercise for the reader. In the following section, Random Forest will be described which is a way of introducing more diversity to the base learners.
FIGURE 3\.2: Bagging results for different number of trees.
### 3\.1\.1 Activity Recognition with Bagging
`bagging_activities.R` `iterated_bagging_activities.R`
In this section, we will implement Bagging with decision trees. Then, we will test our implementation on the *SMARTPHONE ACTIVITIES* dataset. The following code snippet shows the implementation of the `my_bagging()` function. The complete code is in the script `bagging_activities.R`. The function accepts three arguments. The first one is the formula, the second one is the train set, and the third argument is the number of base learners (\\(10\\) by default). Here, we will use the `rpart` package to train the decision trees.
```
# Define our bagging classifier.
my_bagging <- function(theFormula, data, ntrees = 10){
N <- nrow(data)
# A list to store the individual trees
models <- list()
# Train individual trees and add each to 'models' list.
for(i in 1:ntrees){
# Bootstrap instances from data.
idxs <- sample(1:N, size = N, replace = T)
bootstrappedInstances <- data[idxs,]
treeModel <- rpart(as.formula(theFormula),
bootstrappedInstances,
xval = 0,
cp = 0)
models <- c(models, list(treeModel))
}
res <- structure(list(models = models),
class = "my_bagging")
return(res)
}
```
First, a list that will store each individual learner is defined `models <- list()`. Then, the function iterates `ntrees` times. In each iteration, a bootstrapped train set is generated and used to train a `rpart` model. The `xval = 0` parameter tells rpart not to perform cross\-validation internally. The `cp` parameter is also set to \\(0\\). This value controls the amount of pruning. The default is \\(0\.01\\) leading to smaller trees. This makes the trees to be more similar but since we want diversity we are setting this to \\(0\\) so bigger trees are generated and as a consequence, more diverse.
Finally, an object of class `"my_bagging"` is returned. This is just a list containing the trained base learners. The `class = "my_bagging"` argument is important. It tells R that this object is of type `my_bagging`. Setting the class will allow us to use the generic `predict()` function, and R will automatically call the corresponding `predict.my_bagging()` function which we will shortly define. The class name and the function name after `predict.` need to be the same.
```
# Define the predict function for my_bagging.
predict.my_bagging <- function(object, newdata){
ntrees <- length(object$models)
N <- nrow(newdata)
# Matrix to store predictions for each instance
# in newdata and for each tree.
M <- matrix(data = rep("",N * ntrees), nrow = N)
# Populate matrix.
# Each column of M contains all predictions for a given tree.
# Each row contains the predictions for a given instance.
for(i in 1:ntrees){
m <- object$models[[i]]
tmp <- as.character(predict(m, newdata, type = "class"))
M[,i] <- tmp
}
# Final predictions
predictions <- character()
# Iterate through each row of M.
for(i in 1:N){
# Compute class counts
classCounts <- table(M[i,])
# Get the class with the most counts.
predictions <- c(predictions,
names(classCounts)[which.max(classCounts)])
}
return(predictions)
}
```
Now let’s dissect the `predict.my_bagging()` function. First, note that the function name starts with `predict.` followed by the type of object. Following this convention will allow us to call `predict()` and R will call the corresponding method based on the class of the object. The first argument `object` is an object of type “my\_bagging” as returned by `my_bagging()`. The second argument `newdata` is the test set we want to generate predictions for. A matrix `M` that will store the predictions for each tree is defined. This matrix has \\(N\\) rows and \\(ntrees\\) columns where \\(N\\) is the number of instances in `newdata` and \\(ntrees\\) is the number of trees. Thus, each column stores the predictions for each of the base learners. This function iterates through each base learner (rpart in this case), and makes a prediction for each instance in `newdata`. Then, the results are stored in matrix `M`. Finally, it iterates through each instance and computes the most common predicted class from the base learners.
Let’s test our Bagging function! We will test it with the activity recognition dataset introduced in section [2\.3\.1](classification.html#activityRecognition) and set the number of trees to \\(10\\). The following code shows how to use our bagging functions to train the model and make predictions on a test set.
```
baggingClassifier <- my_bagging(class ~ ., trainSet, ntree = 10)
predictions <- predict(baggingClassifier, testSet)
```
The following will perform \\(5\\)\-fold cross\-validation and print the results.
```
set.seed(1234)
k <- 5
folds <- sample(k, size = nrow(df), replace = TRUE)
# Variable to store ground truth classes.
groundTruth <- NULL
# Variable to store the classifier's predictions.
predictions <- NULL
for(i in 1:k){
trainSet <- df[which(folds != i), ]
testSet <- df[which(folds == i), ]
treeClassifier <- my_bagging(class ~ ., trainSet, ntree = 10)
foldPredictions <- predict(treeClassifier, testSet)
predictions <- c(predictions, as.character(foldPredictions))
groundTruth <- c(groundTruth, as.character(testSet$class))
}
cm <- confusionMatrix(as.factor(predictions), as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#> Accuracy
#> 0.861388
# Print other metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: Downstairs 0.5378788 0.9588957 0.5855670 0.5607108
#> Class: Jogging 0.9618462 0.9820722 0.9583078 0.9600737
#> Class: Sitting 0.9607843 0.9982394 0.9702970 0.9655172
#> Class: Standing 0.9146341 0.9988399 0.9740260 0.9433962
#> Class: Upstairs 0.5664557 0.9563310 0.6313933 0.5971643
#> Class: Walking 0.9336857 0.9226850 0.8827806 0.9075199
# Print average performance metrics across classes.
colMeans(cm$byClass[,c("Recall", "Specificity", "Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.8125475 0.9695105 0.8337286 0.8223970
```
The accuracy was much better now compared to \\(0\.789\\) from the previous chapter without using Bagging!
The effect of adding more trees to the ensemble can also be analyzed. The script `iterated_bagging_activities.R` does \\(5\\)\-fold cross\-validation as we just did but starts with \\(1\\) tree in the ensemble and repeats the process by adding more trees until \\(50\\).
Figure [3\.2](ensemble.html#fig:iteratedBagging) shows the effect on the train and test accuracy with different number of trees. Here, we can see that \\(3\\) trees already produce a significant performance increase compared to \\(1\\) or \\(2\\) trees. This makes sense since having only \\(2\\) trees does not add additional information. If the two trees produce different predictions then, it becomes a random choice between the two labels. In fact, \\(2\\) trees produced worse results than \\(1\\) tree. But we cannot make strong conclusions since the experiment was run only once. One possibility to break ties when there are only two trees is to use the averaged probabilities of each label. rpart can return those probabilities by setting `type = "prob"` in the `predict()` function which is the default behavior. This is left as an exercise for the reader. In the following section, Random Forest will be described which is a way of introducing more diversity to the base learners.
FIGURE 3\.2: Bagging results for different number of trees.
3\.2 Random Forest
------------------
`rf_activities.R` `iterated_rf_activities.R` `iterated_bagging_rf.R`
A Random Forest can be thought of as an extension of Bagging. Random Forests were proposed by Breiman ([2001](#ref-breimanRF)) and as the name implies, they introduce more randomness to the individual trees. This is with the objective of having decorrelated trees. With Bagging, most of the trees are very similar at the root because the most important variables are selected first (see chapter [2](classification.html#classification)). To avoid this happening, a simple modification can be introduced. When building a tree, instead of evaluating all features at each split to find the most important one (based on some purity measure like *information gain*), a random subset of the features (usually \\(\\sqrt{\|features\|}\\)) is sampled. This simple modification produces more decorrelated trees and in general, it results in better performance compared to Bagging.
In R, the most famous library that implements Random Forest is…, yes you guessed it: `randomForest` ([Liaw and Wiener 2002](#ref-randomForest)). The following code snippet shows how to fit a Random Forest with \\(10\\) trees.
```
library(randomForest)
rf <- randomForest(class ~ ., trainSet, ntree = 10)
```
By default, `ntree = 500`. Among other things, you can control how many random features are sampled at each split with the `mtry` argument. By default, for classification `mtry = floor(sqrt(ncol(x)))` and for regression `mtry = max(floor(ncol(x)/3), 1)`.
The following code performs \\(5\\)\-fold cross\-validation with the activities dataset already stored in `df` and prints the results. The complete code can be found in the script `randomForest_activities.R`.
```
set.seed(1234)
k <- 5
folds <- sample(k, size = nrow(df), replace = TRUE)
# Variable to store ground truth classes.
groundTruth <- NULL
# Variable to store the classifier's predictions.
predictions <- NULL
for(i in 1:k){
trainSet <- df[which(folds != i), ]
testSet <- df[which(folds == i), ]
rf <- randomForest(class ~ ., trainSet, ntree = 10)
foldPredictions <- predict(rf, testSet)
predictions <- c(predictions, as.character(foldPredictions))
groundTruth <- c(groundTruth, as.character(testSet$class))
}
cm <- confusionMatrix(as.factor(predictions), as.factor(groundTruth))
# Print accuracy
cm$overall["Accuracy"]
#>Accuracy
#> 0.870801
# Print other metrics per class.
cm$byClass[,c("Recall", "Specificity", "Precision", "F1")]
#> Recall Specificity Precision F1
#> Class: Downstairs 0.5094697 0.9652352 0.6127563 0.5563599
#> Class: Jogging 0.9784615 0.9831268 0.9613059 0.9698079
#> Class: Sitting 0.9803922 0.9992175 0.9868421 0.9836066
#> Class: Standing 0.9512195 0.9990333 0.9790795 0.9649485
#> Class: Upstairs 0.5363924 0.9636440 0.6608187 0.5921397
#> Class: Walking 0.9543489 0.9151933 0.8752755 0.9131034
# Print other metrics overall.
colMeans(cm$byClass[,c("Recall", "Specificity", "Precision", "F1")])
#> Recall Specificity Precision F1
#> 0.8183807 0.9709083 0.8460130 0.8299943
```
Those results are better than the previous ones with Bagging. Figure [3\.3](ensemble.html#fig:iteratedRF) shows the results when doing \\(5\\)\-fold cross\-validation for different number of trees (the complete script is in `iterated_randomForest_activities.R`). From these results, we can see a similar behavior as Bagging. That is, the accuracy increases very quickly and then it stabilizes.
FIGURE 3\.3: Random Forest results for different number of trees.
If we directly compare Bagging vs. Random Forest, Random Forest outperforms Bagging (Figure [3\.4](ensemble.html#fig:iteratedBaggingRF)). The complete code to generate the plot is in the script `iterated_bagging_rf.R`.
FIGURE 3\.4: Bagging vs. Random Forest.
3\.3 Stacked Generalization
---------------------------
Stacked Generalization (a.k.a *Stacking*) is a powerful ensemble learning method proposed by Wolpert ([1992](#ref-wolpertStacked)). The method consists of training a set of **powerful** base learners (*first\-level learners*) and combining their outputs by *stacking* them to form a new train set. The base learners’ outputs are their predictions and optionally, the class probabilities of those predictions. The predictions of the base learners are known as the **meta\-features**. The meta\-features along with their true labels \\(y\\) are used to build a new train set that is used to train a **meta\-learner**. The rationale behind this is that the predictions themselves contain information that can be used by the *meta\-learner*.
The procedure to train a Stacking model is as follows:
1. Define a set of first level\-learners \\(\\mathscr{L}\\) and a *meta\-learner*.
2. Train the first\-level learners \\(\\mathscr{L}\\) with training data \\(\\textbf{D}\\).
3. Predict the classes of \\(\\textbf{D}\\) with each learner in \\(\\mathscr{L}\\). Each learner produces a predictions vector \\(\\textbf{p}\_i\\) with \\(\\lvert\\textbf{D}\\lvert\\) elements each.
4. Build a matrix \\(\\textbf{M}\_{\\lvert\\textbf{D}\\lvert \\times \\lvert\\mathscr{L}\\lvert}\\) by column binding (stacking) the prediction vectors. Then, add the true labels \\(\\textbf{y}\\) to generate the new train set \\(\\textbf{D}'\\).
5. Train the *meta\-learner* with \\(\\textbf{D}'\\).
6. Output the final stacking model \\(\\mathcal{S}:\<\\mathscr{L},\\textit{meta\-learner}\>\\).
Figure [3\.5](ensemble.html#fig:stackingProcess) shows the procedure to generate the new training data \\(\\textbf{D}'\\) used to train the *meta\-learner*.
FIGURE 3\.5: Process to generate the new train set D’ by column\-binding the predictions of the first\-level learners and adding the true labels. (Reprinted from *Information Fusion* Vol. 40, Enrique Garcia\-Ceja, Carlos E. Galván\-Tejada, and Ramon Brena, “Multi\-view stacking for activity recognition with sound and accelerometer data” pp. 45\-56, Copyright 2018, with permission from Elsevier, doi: [https://doi.org/10\.1016/j.inffus.2017\.06\.004](https://doi.org/10.1016/j.inffus.2017.06.004)).
Note that steps \\(2\\) and \\(3\\) can lead to overfitting because the predictions are made with the same data used to train the models. To avoid this, steps \\(2\\) and \\(3\\) are usually performed using \\(k\\)\-fold cross\-validation. After \\(\\textbf{D}'\\) has been generated, the learners in \\(\\mathscr{L}\\) can be retrained using all data in \\(\\textbf{D}\\).
Ting and Witten ([1999](#ref-ting1999)) showed that the performance can increase by adding confidence information about the predictions. For example, the probabilities produced by the first\-level learners. Most classifiers can output probabilities.
At prediction time, each first\-level learner predicts the class, and optionally, the class probabilities of a given instance. These predictions are used to form a feature vector (*meta\-features*) that is fed to the *meta\-learner* to obtain the final prediction. Usually, first\-level learners are high performing classifiers such as Random Forests, Support Vector Machines, Neural Networks, etc. The *meta\-learner* should also be a powerful classifier.
In the next section, I will introduce *Multi\-view Stacking* which is similar to Generalized Stacking except that each first\-level learner is trained with features from a different *view*.
3\.4 Multi\-view Stacking for Home Tasks Recognition
----------------------------------------------------
`stacking_algorithms.R` `stacking_activities.R`
**Multi\-view learning** refers to the case when an instance can be characterized by two or more independent ‘views’. For example, one can extract features for webpage classification from a webpage’s text but also from the links pointing to it. Usually, there is the assumption that the views are independent and each is sufficient to solve the problem. Then, why combine them? In many cases, each different view provides additional and complementary information, thus, allowing to train better models.
The simplest thing one can do is to extract features from each view, aggregate them, and train a single model. This approach usually works well but has some limitations. Each view may have different statistical properties, thus, different types of models may be needed for each view. When aggregating features from all views, new variable correlations may be introduced which could impact the performance. Another limitation is that features need to be in the same format (feature vectors, images, etc.), so they can be aggregated.
For video classification, we could have two views. One represented by sequences of images, and the other by the corresponding audio. For the video part, we could encode the features as the images themselves, i.e., matrices. Then, a Convolutional Neural Network (covered in chapter [8](deeplearning.html#deeplearning)) could be trained directly from those images. For the audio part, statistical features can be extracted and stored as normal feature vectors. In this case, the two representations (views) are different. One is a matrix and the other a one\-dimensional feature vector. Combining them to train a single classifier could be problematic given the nature of the views and their different encoding formats. Instead, we can train two models, one for each view and then combine the results. This is precisely the idea of *Multi\-view Stacking* ([Garcia\-Ceja, Galván\-Tejada, and Brena 2018](#ref-garcia2018multiview)). Train a different model for each view and combine the outputs like in *Stacking*.
Here, *Multi\-view Stacking* will be demonstrated using the *HOME TASKS* dataset. This dataset was collected from two sources. Acceleration and audio. The acceleration was recorded with a wrist\-band watch and the audio using a cellphone. This dataset consists of \\(7\\) common home tasks: *‘mop floor’*, *‘sweep floor’*, *‘type on computer keyboard’*, *‘brush teeth’*, *‘wash hands’*, *‘eat chips’*, and *‘watch t.v.’*. Three volunteers performed each activity for approximately \\(3\\) minutes.
The acceleration and audio signals were segmented into \\(3\\)\-second windows. From each window, different features were extracted. From the acceleration, \\(16\\) features were extracted from the \\(3\\) axes (\\(x\\),\\(y\\),\\(z\\)) such as mean, standard deviation, maximum values, mean magnitude, area under the curve, etc. From the audio signals, \\(12\\) features were extracted, namely, Mel Frequency Cepstral Coefficients (MFCCs). To preserve volunteers’ privacy, the original audio was not released. The dataset already contains the extracted features from acceleration and audio. The first column is the label.
In order to implement *Multi\-view Stacking*, two Random Forests will be trained, one for each view (acceleration and audio). The predicted outputs will be stacked to form the new training set \\(D'\\) and a Random Forest trained with \\(D'\\) will act as the *meta\-learner*.
The next code snippet taken from `stacking_algorithms.R` shows the multi\-view stacking function implemented in R.
```
mvstacking <- function(D, v1cols, v2cols, k = 10){
# Generate folds for internal cross-validation.
folds <- sample(1:k, size = nrow(D), replace = T)
trueLabels <- NULL
predicted.v1 <- NULL # predicted labels with view 1
predicted.v2 <- NULL # predicted labels with view 2
probs.v1 <- NULL # predicted probabilities with view 1
probs.v2 <- NULL # predicted probabilities with view 2
# Perform internal cross-validation.
for(i in 1:k){
train <- D[folds != i, ]
test <- D[folds == i, ]
trueLabels <- c(trueLabels, as.character(test$label))
# Train learner with view 1 and make predictions.
m.v1 <- randomForest(label ~.,
train[,c("label",v1cols)], nt = 100)
raw.v1 <- predict(m.v1, newdata = test[,v1cols], type = "prob")
probs.v1 <- rbind(probs.v1, raw.v1)
pred.v1 <- as.character(predict(m.v1,
newdata = test[,v1cols],
type = "class"))
predicted.v1 <- c(predicted.v1, pred.v1)
# Train learner with view 2 and make predictions.
m.v2 <- randomForest(label ~.,
train[,c("label",v2cols)], nt = 100)
raw.v2 <- predict(m.v2, newdata = test[,v2cols], type = "prob")
probs.v2 <- rbind(probs.v2, raw.v2)
pred.v2 <- as.character(predict(m.v2,
newdata = test[,v2cols],
type = "class"))
predicted.v2 <- c(predicted.v2, pred.v2)
}
# Build first-order learners with all data.
learnerV1 <- randomForest(label ~.,
D[,c("label",v1cols)], nt = 100)
learnerV2 <- randomForest(label ~.,
D[,c("label",v2cols)], nt = 100)
# Construct meta-features.
metaFeatures <- data.frame(label = trueLabels,
((probs.v1 + probs.v2) / 2),
pred1 = predicted.v1,
pred2 = predicted.v2)
#train meta-learner
metalearner <- randomForest(label ~.,
metaFeatures, nt = 100)
res <- structure(list(metalearner=metalearner,
learnerV1=learnerV1,
learnerV2=learnerV2,
v1cols = v1cols,
v2cols = v2cols),
class = "mvstacking")
return(res)
}
```
The first argument `D` is a data frame containing the training data. `v1cols` and `v2cols` are the column names of the two views. Finally, argument `k` specifies the number of folds for the internal cross\-validation to avoid overfitting (Steps \\(2\\) and \\(3\\) as described in the generalized stacking procedure).
The function iterates through each fold and trains a Random Forest with the train data for each of the two views. Within each iteration, the trained models are used to predict the labels and probabilities on the internal test set. Predicted labels and probabilities on the internal test sets are concatenated across all folds (`predicted.v1`, `predicted.v2`).
After cross\-validation, the meta\-features are generated by creating a data frame with the predictions of each view. Additionally, the average of class probabilities is added as a meta\-feature. The true labels are also added. The purpose of cross\-validation is to avoid overfitting but at the end, we do not want to waste data so both learners are re\-trained with all data `D`.
Finally, the *meta\-learner* which is also a Random Forest is trained with the *meta\-features* data frame. A list with all the required information to make predictions is created. This includes first\-level learners, the meta\-learner, and the column names for each view so we know how to divide the data frame into two views at prediction time.
The following code snippet shows the implementation for making predictions using a trained stacking model.
```
predict.mvstacking <- function(object, newdata){
# Predict probabilities with view 1.
raw.v1 <- predict(object$learnerV1,
newdata = newdata[,object$v1cols],
type = "prob")
# Predict classes with view 1.
pred.v1 <- as.character(predict(object$learnerV1,
newdata = newdata[,object$v1cols],
type = "class"))
# Predict probabilities with view 2.
raw.v2 <- predict(object$learnerV2,
newdata = newdata[,object$v2cols],
type = "prob")
# Predict classes with view 2.
pred.v2 <- as.character(predict(object$learnerV2,
newdata = newdata[,object$v2cols],
type = "class"))
# Build meta-features
metaFeatures <- data.frame(((raw.v1 + raw.v2) / 2),
pred1 = pred.v1,
pred2 = pred.v2)
# Set levels on factors to avoid errors in randomForest predict.
levels(metaFeatures$pred1) <- object$metalearner$classes
levels(metaFeatures$pred2) <- object$metalearner$classes
predictions <- as.character(predict(object$metalearner,
newdata = metaFeatures),
type="class")
return(predictions)
}
```
The `object` parameter is the trained model and `newdata` is a data frame from which we want to make the predictions. First, labels and probabilities are predicted using the two views. Then, a data frame with the *meta\-features* is assembled with the predicted label and the averaged probabilities. Finally, the *meta\-learner* is used to predict the final classes using the *meta\-features*.
The script `stacking_activities.R` shows how to use our `mvstacking()` function. With the following two lines we can train and make predictions.
```
m.stacking <- mvstacking(trainset, v1cols, v2cols, k = 10)
pred.stacking <- predict(m.stacking, newdata = testset[,-1])
```
The script performs \\(10\\)\-fold cross\-validation and for the sake of comparison, it builds three models. One with only audio features, one with only acceleration features, and the Multi\-view Stacking one combining both types of features.
Table [3\.1](ensemble.html#tab:stackingResults) shows the results for each view and with Multi\-view Stacking. Clearly, combining both views with Multi\-view Stacking achieved the best results compared to using a single view.
TABLE 3\.1: Stacking results.
| | Accuracy | Recall | Specificity | Precision | F1 |
| --- | --- | --- | --- | --- | --- |
| Audio | 0\.8535 | 0\.8497 | 0\.9753 | 0\.8564 | 0\.8521 |
| Accelerometer | 0\.8557 | 0\.8470 | 0\.9760 | 0\.8523 | 0\.8487 |
| Multi\-view Stacking | 0\.9365 | 0\.9318 | 0\.9895 | 0\.9333 | 0\.9325 |
FIGURE 3\.6: Confusion matrices.
Figure [3\.6](ensemble.html#fig:stackingCMs) shows the resulting confusion matrices for the three cases. By looking at the recall (anti\-diagonal) of the individual classes, it seems that audio features are better at recognizing some activities like *‘sweep’* and *‘mop floor’* whereas the accelerometer features are better for classifying *‘eat chips’*, *‘wash hands’*, *‘type on keyboard’*, etc. thus, those two views are somehow complementary. All recall values when using Multi\-view Stacking are higher than for any of the other views.
3\.5 Summary
------------
In this chapter, several ensemble learning methods were introduced. In general, ensemble models perform better than single models.
* The main idea of ensemble learning is to train several models and combine their results.
* **Bagging** is an ensemble method consisting of \\(n\\) *base\-learners*, each, trained with bootstrapped training samples.
* **Random Forest** is an ensemble of trees. It introduces randomness to the trees by selecting random features in each split.
* Another ensemble method is called **stacked generalization**. It consists of a set of *base\-learners* and a *meta\-learner*. The latter is trained using the outputs of the *base\-learners*.
* **Multi\-view learning** can be used when an instance can be represented by two or more *views* (for example, different sensors).
Chapter 4 Exploring and Visualizing Behavioral Data
===================================================
`EDA.R`
Exploratory data analysis (EDA) refers to the process of understanding your data. There are several available methods and tools for doing so, including summary statistics and visualizations. In this chapter, I will cover some of them. As mentioned in section [1\.5](intro.html#pipeline), data exploration is one of the first steps of the data analysis pipeline. It provides valuable input to the decision process during the next data analysis phases, for example, the selection of preprocessing tasks and predictive methods. Even though there already exist several EDA techniques, you are not constrained by them. You can always apply any means that you think will allow you to better understand your data and gain new insights.
4\.1 Talking with Field Experts
-------------------------------
Sometimes you will be involved in the whole data analysis process, starting with the idea, defining the research questions and hypotheses, conducting the data collection, and so on. In those cases, it is easier to understand the initial structure of the data since you might have been the one responsible for designing the data collection protocol.
Unfortunately (or fortunately for some), it is often the case that you are given an existing dataset, which may or may not come with documentation. In those cases, it becomes important to talk with the field experts who designed the study and the data collection protocol to understand the purpose and motivation behind each piece of data. Again, it is often not easy to get direct access to those who conducted the initial study, for example, because you found the dataset online and the project is already over. In those cases, you can try to contact the authors. I have done that several times and they were very responsive. It is also a good idea to try to find experts in the field even if they were not involved in the project. This will allow you to understand things from their perspective and possibly to explain patterns/values that you may find later in the process.
4\.2 Summary Statistics
-----------------------
After having a better understanding of how the data was collected and the meaning of each variable, the next step is to find out what the actual data looks like. It is always a good idea to start by looking at some summary statistics. This provides general insights about the data and will help you in selecting the next preprocessing steps. In R, an easy way to do this is with the `summary()` function. The following code reads the *SMARTPHONE ACTIVITIES* dataset and, due to limited space, only prints a summary of the first \\(5\\) columns, columns \\(33\\) and \\(35\\), and the last one (the class).
```
# Read activities dataset.
dataset <- read.csv(file.path(datasets_path,
"smartphone_activities",
"WISDM_ar_v1.1_raw.txt"),
stringsAsFactors = T)
# Print first 5 columns,
# column 33, 35 and the last one (the class).
summary(dataset[,c(1:5,33,35,ncol(dataset))])
#> UNIQUE_ID user X0 X1
#> Min. : 1.0 Min. : 1.00 Min. :0.00000 Min. :0.00000
#> 1st Qu.:136.0 1st Qu.:10.00 1st Qu.:0.06000 1st Qu.:0.07000
#> Median :271.0 Median :19.00 Median :0.09000 Median :0.10000
#> Mean :284.4 Mean :18.87 Mean :0.09414 Mean :0.09895
#> 3rd Qu.:412.0 3rd Qu.:28.00 3rd Qu.:0.12000 3rd Qu.:0.12000
#> Max. :728.0 Max. :36.00 Max. :1.00000 Max. :0.81000
#>
#> X2 XAVG ZAVG class
#> Min. :0.00000 Min. :0 ?0.22 : 29 Downstairs: 528
#> 1st Qu.:0.08000 1st Qu.:0 ?0.21 : 27 Jogging :1625
#> Median :0.10000 Median :0 ?0.11 : 26 Sitting : 306
#> Mean :0.09837 Mean :0 ?0.13 : 26 Standing : 246
#> 3rd Qu.:0.12000 3rd Qu.:0 ?0.16 : 26 Upstairs : 632
#> Max. :0.95000 Max. :0 ?0.23 : 26 Walking :2081
#> (Other):5258
```
For numerical variables, the output includes summary statistics like the *min*, *max*, *mean*, etc. For factor variables, the output is different: it displays the unique values with their respective counts. If there are more than six unique values, the rest are omitted. For example, the **class** variable (the last one) has \\(528\\) instances with the value *‘Downstairs’*. By looking at the *min* and *max* values of the numerical variables, we see that they are not the same for all variables. For some variables, the maximum value is \\(1\\); for others, it is less than \\(1\\); and for some others, it is greater than \\(1\\). It seems that the variables are not on the same scale. This is important because some algorithms are sensitive to different scales. In chapters [2](classification.html#classification) and [3](ensemble.html#ensemble), we mainly used decision\-tree\-based algorithms, which are not sensitive to different scales, but some others, like neural networks, are. In chapter [5](preprocessing.html#preprocessing), a method to transform variables into the same scale will be introduced.
It is good practice to check the *min* and *max* values of all variables to see if they have different ranges since some algorithms are sensitive to different scales.
The output of the `summary()` function also reveals some strange values. The statistics of the variable *XAVG* are all \\(0\\)s. Some other variables, like *ZAVG*, were encoded as characters and it seems that a *‘?’* symbol was prepended to the numbers. In summary, the `summary()` function (I know, too many summaries in this sentence) allowed us to spot some errors in the dataset. What we do with that information will depend on the domain and application.
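Whether the *‘?’* stands for a lost minus sign or is just encoding noise is exactly the kind of question to ask the field experts. Assuming it is a harmless artifact, a minimal sketch of the cleanup could look like this:
```
# Strip the leading '?' from ZAVG and convert it back to numeric.
# (A sketch; inspect the affected values before doing this for real.)
zavg.clean <- as.numeric(gsub("?", "",
                              as.character(dataset$ZAVG),
                              fixed = TRUE))
summary(zavg.clean)
```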
4\.3 Class Distributions
------------------------
When it comes to behavior sensing, many of the problems can be modeled as classification tasks. This means that there are different possible categories to choose from. It is often a good idea to plot the class counts (class distribution). The following code shows how to do that for the *SMARTPHONE ACTIVITIES* dataset. First, the `table()` method is used to get the actual class counts. Then, the plot is generated with `ggplot` (see Figure [4\.1](edavis.html#fig:activitiesDistribution)).
```
t <- table(dataset$class)
t <- as.data.frame(t)
colnames(t) <- c("class","count")
p <- ggplot(t, aes(x=class, y=count, fill=class)) +
geom_bar(stat="identity", color="black") +
theme_minimal() +
geom_text(aes(label=count), vjust=-0.3, size=3.5) +
scale_fill_brewer(palette="Set1")
print(p)
```
FIGURE 4\.1: Distribution of classes.
The most common activity turned out to be *‘Walking’* with \\(2081\\) instances. It seems that the volunteers were a bit sporty since *‘Jogging’* is the second most frequent activity. One thing to note is that there are some big differences here, for example, between *‘Walking’* and *‘Standing’*. Those differences in class counts can have an impact when training classification models. This is because classifiers try to minimize the overall error regardless of the performance of individual classes, thus, they tend to prioritize the majority classes. This is called the **class imbalance problem**. It occurs when there are many instances of some classes but fewer of others. For some applications this can be a problem. For example, in fraud detection, datasets have many legitimate transactions but just a few illegal ones. This will bias a classifier to be good at detecting legitimate transactions, but what we are really interested in is detecting the illegal ones. This situation is very common in behavior sensing datasets. For example, in the medical domain, it is much easier to collect data from healthy controls than from patients with a given condition. In chapter [5](preprocessing.html#preprocessing), some of the oversampling techniques that can be used to deal with the class imbalance problem will be presented.
When the classes are imbalanced, it is also recommended to validate the generalization performance using *stratified subsets*. This means that when dividing the dataset into train and test sets, the distribution of classes should be preserved. For example, if the dataset has classes *‘A’* and *‘B’* and \\(80\\%\\) of the instances are of type *‘A’*, then both the train set and the test set should have \\(80\\%\\) of their instances of type *‘A’*. In cross\-validation, this is known as **stratified cross\-validation**.
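As a minimal sketch of how such a stratified split can be obtained, the `createDataPartition()` function from the `caret` package samples indices within each class (the \\(80/20\\) split here is just an example):
```
library(caret)
set.seed(1234)
# Sample 80% of the rows, stratified by the class column.
idx <- createDataPartition(dataset$class, p = 0.8, list = FALSE)
trainset <- dataset[idx, ]
testset <- dataset[-idx, ]
# The class proportions should be (almost) identical in both subsets.
prop.table(table(trainset$class))
prop.table(table(testset$class))
```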
4\.4 User\-class Sparsity Matrix
--------------------------------
In behavior sensing, usually two things are involved: *individuals* and *behaviors*. Individuals will express different behaviors to different extents. For the activity recognition example, some persons may go jogging frequently while others may never go jogging at all. Some behaviors will be present or absent depending on each individual. We can plot this information with what I call a **user\-class sparsity matrix**. Figure [4\.2](edavis.html#fig:sparsityMatrix) shows this matrix for the activities dataset. The code to generate this plot is included in the script `EDA.R`.
FIGURE 4\.2: User\-class sparsity matrix.
The *x*\-axis shows the user ids and the *y*\-axis the classes. A colored entry (gray in this case) means that the corresponding user has at least one instance of the corresponding class. For example, user \\(3\\) performed all activities and thus, the dataset contains at least one instance for each of the six activities. On the other hand, user \\(25\\) only has instances for two activities. Users are sorted in descending order (users that have more classes are at the left). At the bottom of the plot, the sparsity is shown (\\(0\.18\\)). This is simply the percentage of empty cells in the matrix. When all users have at least one instance of every class, the sparsity is \\(0\\). When the sparsity is different from \\(0\\), one needs to decide what to do depending on the application. The following cases are possible (a sketch for computing this matrix and its sparsity is shown after the list):
* Some users did not perform all activities. If the classifier was trained with, for example, \\(6\\) classes and a user never goes *‘jogging’*, the classifier may still sometimes predict *‘jogging’* even if a particular user never does that. This can degrade the predictions’ performance for that particular user and can be worse if that user never performs other activities. A possible solution is to train different classifiers with different class subsets. If you know that some users never go *‘jogging’* then you train a classifier that excludes *‘jogging’* and use that one for that set of users. The disadvantage of this is that there are many possible combinations so you need to train many models. Since several classifiers can generate prediction scores and/or probabilities per class, another solution would be to train a single model with all classes and predict the most probable class excluding those that are not part of a particular user.
* Some users can have unique classes. For example, suppose there is a new user that has an activity labeled as *‘Eating’* which no one else has, and thus, it was not included during training. In this situation, the classifier will never predict *‘Eating’* since it was not trained for that activity. One solution could be to add the new user’s data with the new labels and retrain the model. But if not too many users have the activity *‘Eating’* then, in the worst case, they will die from starvation. In a less severe case, the overall system performance can degrade because as the number of classes increases, it becomes more difficult to find separation boundaries between categories, thus, the models become less accurate. Another possible solution is to build **user\-dependent** models for each user. These, and other types of models in **multi\-user settings** will be covered in chapter [9](multiuser.html#multiuser).
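The plotting code is in `EDA.R`, but the underlying matrix and its sparsity can be computed in a few lines. This is a minimal sketch using the `user` and `class` columns of the activities dataset:
```
# TRUE if a user has at least one instance of a class.
M <- table(dataset$user, dataset$class) > 0
# Sparsity: the fraction of empty (user, class) cells.
# Should match the value reported in Figure 4.2 (0.18).
1 - sum(M) / length(M)
```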
4\.5 Boxplots
-------------
Boxplots are a good way to visualize the relationship between variables and classes. R already has the `boxplot()` function. In the *SMARTPHONE ACTIVITIES* dataset, the *RESULTANT* variable represents the ‘total amount of movement’ considering the three axes ([Kwapisz, Weiss, and Moore 2010](#ref-kwapisz2010)). The following code displays a set of boxplots (one for each class) with respect to the *RESULTANT* variable (Figure [4\.3](edavis.html#fig:boxplotres)).
```
boxplot(RESULTANT ~ class, dataset)
```
FIGURE 4\.3: Boxplot of RESULTANT variable across classes.
The solid black line in the middle of each box marks the *median*[7](#fn7). Overall, we can see that this variable can be good at separating high\-intensity activities like jogging and walking from low\-intensity ones like sitting or standing. With boxplots, we can inspect one feature at a time. If you want to visualize the relationships between predictors, correlation plots can be used instead; they are presented in the next section.
4\.6 Correlation Plots
----------------------
Correlation plots are useful for visualizing the relationships between pairs of variables. The most common type of relationship is the **Pearson correlation**. The Pearson correlation measures the degree of **linear** association between two variables. It takes values between \\(\-1\\) and \\(1\\). A correlation of \\(1\\) means that as one of the variables increases, the other one does too. A value of \\(\-1\\) means that as one of the variables increases, the other decreases. A value of \\(0\\) means that there is no association between the variables. Figure [4\.4](edavis.html#fig:pearsonExamples) shows several examples of correlation values. Note that the correlations of the examples at the bottom are all \\(0\\)s. Even though there are some noticeable patterns in some of the examples, their correlation is \\(0\\) because those relationships are not linear.
FIGURE 4\.4: Pearson correlation examples. (Author: Denis Boigelot. Source: Wikipedia (CC0 1\.0\)).
The Pearson correlation (denoted by \\(r\\)) between two variables \\(x\\) and \\(y\\) can be calculated as follows:
\\\[\\begin{equation}
r \= \\frac{ \\sum\_{i\=1}^{n}(x\_i\-\\bar{x})(y\_i\-\\bar{y}) }{ \\sqrt{\\sum\_{i\=1}^{n}(x\_i\-\\bar{x})^2}\\sqrt{\\sum\_{i\=1}^{n}(y\_i\-\\bar{y})^2}}
\\tag{4\.1}
\\end{equation}\\]
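To see equation (4\.1) in action, here is a quick illustrative check that computing \\(r\\) by hand matches R's built\-in `cor()` function:
```
x <- c(1, 2, 3, 4, 5)
y <- c(2, 4, 5, 4, 5)
# Pearson correlation following equation (4.1).
r <- sum((x - mean(x)) * (y - mean(y))) /
  (sqrt(sum((x - mean(x))^2)) * sqrt(sum((y - mean(y))^2)))
r
#> [1] 0.7745967
cor(x, y) # Same result.
#> [1] 0.7745967
```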
The following code snippet uses the `corrplot` library to generate a correlation plot (Figure [4\.5](edavis.html#fig:corrhome)) for the *HOME TASKS* dataset. Remember that this dataset contains two sets of features: one extracted from audio and the other one extracted from the accelerometer sensor. First, the Pearson correlation between each pair of variables is computed with the `cor()` function and then the `corrplot()` function is used to generate the actual plot. Here, we specify that we only want to display the upper triangle with `type = "upper"`. The `tl.pos` argument controls where to print the labels; in this example, at the top and in the diagonal. Setting `diag = FALSE` instructs the function not to print the principal diagonal, which is all ones since it is the correlation between each variable and itself.
```
library(corrplot)
# Load home activities dataset.
dataset <- read.csv(file.path(datasets_path,
"home_tasks",
"sound_acc.csv"))
CORRS <- cor(dataset[,-1])
corrplot(CORRS, diag = FALSE, tl.pos = "td", tl.cex = 0.5,
method = "color", type = "upper")
```
FIGURE 4\.5: Correlation plot of the HOME TASKS dataset.
It looks like the correlations between sound features (v1\_) and acceleration features (v2\_) are not too high. In this case, this is good since we want both sources of information to be as independent as possible so that they capture different characteristics and complement each other, as explained in section [3\.4](ensemble.html#multiviewhometasks). On the other hand, there are high correlations between some acceleration features, for example, between *v2\_maxY* and *v2\_sdMagnitude*.
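When the plot becomes crowded, the strongly correlated pairs can also be listed programmatically. A minimal sketch (the \\(0\.9\\) threshold is arbitrary):
```
# List the pairs of variables whose absolute correlation exceeds 0.9.
idx <- which(abs(CORRS) > 0.9 & upper.tri(CORRS), arr.ind = TRUE)
data.frame(var1 = rownames(CORRS)[idx[, 1]],
           var2 = colnames(CORRS)[idx[, 2]],
           r = CORRS[idx])
```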
Please, be aware that the Pearson correlation only captures linear relationships.
### 4\.6\.1 Interactive Correlation Plots
When plotting correlation plots, it is also useful to visualize the actual correlation values. When there are many variables, this becomes difficult. One way to overcome this limitation is by using interactive plots. The following code snippet uses the function `iplotCorr()` from the `qtlcharts` package to generate an interactive correlation plot. The nice thing about it is that you can inspect the cell values by hovering the mouse. If you click on a cell, the corresponding scatter plot is also rendered. This makes these types of plots very convenient tools to explore variable relationships.
```
library(qtlcharts) # Library for interactive plots.
# Load home activities dataset.
dataset <- read.csv(file.path(datasets_path,
"home_tasks",
"sound_acc.csv"))
iplotCorr(dataset[,-1], reorder=F,
chartOpts=list(cortitle="Correlation matrix",
scattitle="Scatterplot"))
```
Please note that at the time this book was written, printed paper does not support interactive plots. Check the online html version instead to see the actual result or run the code on a computer.
4\.7 Timeseries
---------------
Behavior is something that usually depends on time. Thus, being able to visualize timeseries data is essential. To illustrate how timeseries data can be plotted, I will use the `ggplot` package and the *HAND GESTURES* dataset. Recall that the data was collected with a tri\-axial accelerometer, thus, for each hand gesture we have \\(3\\)\-dimensional timeseries. Each dimension represents one of the *x*, *y*, and *z* axes. First, we read one of the text files that stores a hand gesture from user \\(1\\). Each column represents an axis. Then, we need to do some formatting. We will create a data frame with three columns. The first one is a timestep represented as integers from \\(1\\) to the number of points per axis. The second column is a factor that represents the axis *x*, *y*, or *z*. The last column contains the actual values.
```
dataset <- read.csv(file.path(datasets_path,
"hand_gestures/1/1_20130703-120056.txt"),
header = F)
# Do some preprocessing.
type <- c(rep("x", nrow(dataset)),
rep("y", nrow(dataset)),
rep("z", nrow(dataset)))
type <- as.factor(type)
values <- c(dataset$V1, dataset$V2, dataset$V3)
t <- rep(1:nrow(dataset), 3)
df <- data.frame(timestep = t, type = type, values = values)
# Print first rows.
head(df)
#> timestep type values
#> 1 1 x 0.6864655
#> 2 2 x 0.9512450
#> 3 3 x 1.3140911
#> 4 4 x 1.4317709
#> 5 5 x 1.5102241
#> 6 6 x 1.5298374
```
Note that the last column (*values*) contains the values of all axes instead of having one column per axis. Now we can use the `ggplot()` function. The lines are colored by type of axis and this is specified with `colour = type`. The `type` column should be a factor. The line type is also dependent on the type of axis and is specified with `linetype = type`. The resulting plot is shown in Figure [4\.6](edavis.html#fig:timeseriesGesture).
```
tsPlot <- ggplot(data = df,
aes(x = timestep,
y = values,
colour = type,
linetype = type)) +
ggtitle("Hand gesture '1', user 1") +
xlab("Timestep") +
ylab("Acceleration") +
geom_line(aes(color=type)) +
theme_minimal() +
theme(plot.title = element_text(hjust = 0.5),
legend.position="right",
legend.key.width = unit(1.0,"cm"),
legend.key.size = unit(0.5,"cm"))
print(tsPlot)
```
FIGURE 4\.6: Timeseries plot for hand gesture ‘1’ user 1\.
### 4\.7\.1 Interactive Timeseries
Sometimes it is useful to interactively zoom, highlight, select, etc. parts of the plot. In R, there is a package called `dygraphs` ([Vanderkam et al. 2018](#ref-dygraphs)) that generates fancy interactive plots for timeseries data[8](#fn8). The following code snippet reads a hand gesture file and adds a column at the beginning called `timestep`.
```
library(dygraphs)
# Read the hand gesture '1' for user 1.
dataset <- read.csv(file.path(datasets_path,
"hand_gestures/1/1_20130703-120056.txt"),
header = F,
col.names = c("x","y","z"))
dataset <- cbind(timestep = 1:nrow(dataset), dataset)
```
Then we can generate a minimal plot with a single line of code:
```
dygraph(dataset)
```
If you run the code, you will be able to zoom in by clicking and dragging over a region. A double click will restore the zoom. It is possible to add a lot of customization to the plots. For example, the following code adds a text title, fills the area under the lines, adds a point of interest line, and shades the region between \\(30\\) and \\(40\\).
```
dygraph(dataset, main = "Hand Gesture '1'") %>%
dyOptions(fillGraph = TRUE, fillAlpha = 0.25) %>%
dyEvent("10", "Point of interest", labelLoc = "top") %>%
dyShading(from = "30", to = "40", color = "#CCCCCC")
```
4\.8 Multidimensional Scaling (MDS)
-----------------------------------
`iterative_mds.R`
In many situations, our data comprises several variables. If the number of variables is greater than \\(3\\), it becomes difficult to plot the relationships between data points. Take, for example, the *HOME TASKS* dataset, which has \\(27\\) predictor variables from accelerometer and sound. One thing that we may want to do is visually inspect the data points and check whether points from the same class are closer to each other than to points from different classes. This can give you an idea of the difficulty of the problem at hand. If points of the same class are very close and grouped together, it is likely that a classification model will not have trouble separating them. But how do we plot such relationships with high\-dimensional data? One method is multidimensional scaling (MDS), which consists of a set of techniques aimed at reducing the dimensionality of the data so it can be visualized in \\(2\\)D or \\(3\\)D. The objective is to plot the data such that the original distances between pairs of points are preserved in a given lower dimension \\(d\\).
There exist several MDS methods but most of them take a distance matrix as input (for example, Euclidean distance). In R, generating a distance matrix from a set of points is easy. As an example, let’s generate some sample data points.
```
# Generate 3 2D random points.
x <- runif(3)
y <- runif(3)
df <- data.frame(x,y)
labels <- c("a","b","c")
print(df)
#> x y
#> 1 0.4457900 0.5978606
#> 2 0.4740106 0.5019398
#> 3 0.8890085 0.4109234
```
The `dist()` function can be used to compute the distance matrix. By default, this function computes the Euclidean distance between rows:
```
dist(df)
#> 1 2
#> 2 0.09998603
#> 3 0.48102824 0.42486143
```
The output is the Euclidean distance between the pairs of rows \\((1,2\)\\), \\((1,3\)\\) and \\((2,3\)\\).
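As a quick sanity check, the first entry can be reproduced by applying the Euclidean distance formula to rows \\(1\\) and \\(2\\) directly:
```
# Euclidean distance between rows 1 and 2 of df.
sqrt((df$x[1] - df$x[2])^2 + (df$y[1] - df$y[2])^2)
#> [1] 0.09998603
```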
One way to obtain Cartesian coordinates in a \\(d\\)\-dimensional space for \\(n\\) points from their distance matrix \\(D\\) is to use an iterative algorithm ([Borg, Groenen, and Mair 2012](#ref-borg2012)). Such an algorithm consists of the following general steps (a minimal sketch in R follows the list):
1. Initialize \\(n\\) data points with random coordinates \\(C\\) of dimension \\(d\\).
2. Compute a distance matrix \\(D'\\) from \\(C\\).
3. Move the coordinates \\(C\\) such that the distances of \\(D'\\) get closer to the original ones in \\(D\\).
4. Repeat from step \\(2\\) until the error between \\(D'\\) and \\(D\\) cannot be reduced any further or until some predefined max number of iterations.
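The following is a minimal sketch of these steps. It is not the book's `iterativeMDS()` function (that one lives in `iterative_mds.R`); the function name, the gradient form, and the default values here are illustrative:
```
simpleIterMDS <- function(D, d = 2, maxit = 100, lr = 0.05) {
  D <- as.matrix(D)
  D <- D / max(D)                      # Scale distances; shape is preserved.
  n <- nrow(D)
  C <- matrix(runif(n * d), nrow = n)  # Step 1: random coordinates.
  for (it in 1:maxit) {
    Dp <- as.matrix(dist(C))           # Step 2: distances of current layout.
    move <- matrix(0, n, d)
    for (i in 1:n) {
      for (j in 1:n) {
        if (i == j || D[i, j] == 0 || Dp[i, j] < 1e-9) next
        # Step 3: positive error = points too far apart, pull i towards j;
        # negative error = too close, push i away from j.
        err <- (Dp[i, j] - D[i, j]) / D[i, j]
        move[i, ] <- move[i, ] + err * (C[i, ] - C[j, ]) / Dp[i, j]
      }
    }
    C <- C - lr * move                 # Apply the accumulated moves.
  }                                    # Step 4: repeat until maxit.
  C
}

# Example with the eurodist dataset (distances between European cities).
coords <- simpleIterMDS(eurodist, maxit = 300)
plot(coords, type = "n")
text(coords, labels = attr(eurodist, "Labels"), cex = 0.6)
```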
The script `iterative_mds.R` implements this algorithm (the `iterativeMDS()` function), which is based on the implementation from ([Segaran 2007](#ref-segaran2007)). Its first argument `D` is a distance matrix, the second argument `maxit` is the total number of iterations, and the last argument `lr` controls how fast the points are moved in each iteration. The script also shows how to apply the method to the *eurodist* dataset, which consists of distances between several European cities. Figure [4\.7](edavis.html#fig:mds0) shows the initial random coordinates of the cities. Then, Figure [4\.8](edavis.html#fig:mds30) shows the result after \\(30\\) iterations. Finally, Figure [4\.9](edavis.html#fig:mdsFinal) shows the final result. By only knowing the distance matrix, the algorithm was able to find a visual mapping that closely resembles the real positions.
FIGURE 4\.7: MDS initial coordinates.
FIGURE 4\.8: MDS coordinates after iteration 30\.
FIGURE 4\.9: MDS final coordinates.
R already has efficient implementations of MDS and one of them is the function `cmdscale()`. Its first argument is a distance matrix and the second argument \\(k\\) is the target dimension. It also has some other parameters that can be tuned. This function implements classical MDS based on Gower ([1966](#ref-gower1966)). The following code snippet uses the *HOME TASKS* dataset. It selects the accelerometer\-based features (v2\_\*), uses the `cmdscale()` function to reduce them to \\(2\\) dimensions, and plots the result.
```
dataset <- read.csv(file.path(datasets_path, "home_tasks/sound_acc.csv"))
colNames <- names(dataset)
v2cols <- colNames[grep(colNames, pattern = "v2_")]
cols <- as.integer(dataset$label)
labels <- unique(dataset$label)
d <- dist(dataset[,v2cols])
fit <- cmdscale(d, k = 2) # k is the number of dim
x <- fit[,1]; y <- fit[,2]
plot(x, y, xlab="Coordinate 1",
ylab="Coordinate 2",
main="Accelerometer features in 2D",
pch=19,
col=cols,
cex=0.7)
legend("topleft",
legend = labels,
pch=19,
col=unique(cols),
cex=0.7,
horiz = F)
```
We can also reduce the data into \\(3\\) dimensions and use the `scatterplot3d` package to generate a \\(3\\)D scatter plot:
```
library(scatterplot3d)
fit <- cmdscale(d,k = 3)
x <- fit[,1]; y <- fit[,2]; z <- fit[,3]
scatterplot3d(x, y, z,
xlab = "",
ylab = "",
zlab = "",
main="Accelerometer features in 3D",
pch=19,
color=cols,
tick.marks = F,
cex.symbols = 0.5,
cex.lab = 0.7,
mar = c(1,0,1,0))
legend("topleft",legend = labels,
pch=19,
col=unique(cols),
cex=0.7,
horiz = F)
```
From those plots, it can be seen that the points are more or less grouped together based on the type of activity. Still, there are several points with no clear grouping, which would make them difficult to classify. In section [3\.4](ensemble.html#multiviewhometasks) of chapter [3](ensemble.html#ensemble), we achieved a classification accuracy of \\(85\\%\\) when using only the accelerometer data.
4\.9 Heatmaps
-------------
Heatmaps are a good way to visualize the ‘intensity’ of events. For example, a heatmap can be used to depict website interactions by overlapping colored pixels relative to the number of clicks. This visualization eases the process of identifying the most relevant sections of the given website, for example. In this section, we will generate a heatmap of weekly motor activity behaviors of individuals with and without diagnosed depression. The *DEPRESJON* dataset will be used for this task. It contains motor activity recordings captured with an actigraphy device which is like a watch but has several sensors including accelerometers. The device registers the amount of movement every minute. The data contains recordings of \\(23\\) patients and \\(32\\) controls (those without depression). The participants wore the device for \\(13\\) days on average.
The accompanying script `auxiliary_eda.R` has the function `computeActivityHour()` that returns a matrix with the average activity level of the depressed patients or the controls (those without depression). The matrix dimension is \\(24\\times7\\) and it stores the average activity level at each day and hour. The `type` argument is used to specify if we want to compute this matrix for the depressed or control participants.
```
source("auxiliary_eda.R")
# Generate matrix with mean activity levels
# per hour for the control and condition group.
map.control <- computeActivityHour(datapath, type = "control")
map.condition <- computeActivityHour(datapath, type = "condition")
```
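In case you are curious how such a matrix can be built, the following is a rough sketch assuming a data frame with a POSIXct `timestamp` column and a numeric `activity` column (the book's `computeActivityHour()` in `auxiliary_eda.R` additionally takes care of reading and aggregating the per\-participant files):
```
# Average activity per (hour, weekday) cell; returns a 24 x 7 matrix.
# Note: weekdays() is locale-dependent; English names are assumed here.
activityByHour <- function(df) {
  hour <- factor(as.integer(format(df$timestamp, "%H")), levels = 0:23)
  wday <- factor(weekdays(df$timestamp),
                 levels = c("Monday", "Tuesday", "Wednesday", "Thursday",
                            "Friday", "Saturday", "Sunday"))
  tapply(df$activity, list(hour, wday), mean)
}
```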
Since we want to compare the heatmaps of both groups we will normalize the matrices such that the values are between \\(0\\) and \\(1\\) in both cases. The script also contains a method `normalizeMatrices()` to do the normalization.
```
# Normalize matrices.
res <- normalizeMatrices(map.control, map.condition)
```
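A minimal sketch of such a joint min\-max normalization (an assumption about what `normalizeMatrices()` does internally; the actual code is in `auxiliary_eda.R`):
```
# Scale both matrices to [0, 1] using their joint minimum and maximum
# so that color intensities remain comparable across the two heatmaps.
normalizeJointly <- function(M1, M2) {
  lo <- min(M1, M2); hi <- max(M1, M2)
  list(M1 = (M1 - lo) / (hi - lo),
       M2 = (M2 - lo) / (hi - lo))
}
```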
Then, the `pheatmap` package ([Kolde 2019](#ref-pheatmap)) can be used to create the actual heatmap from the matrices.
```
library(pheatmap)
library(gridExtra)
# Generate heatmap of the control group.
a <- pheatmap(res$M1, main="control group",
cluster_row = FALSE,
cluster_col = FALSE,
show_rownames = T,
show_colnames = T,
legend = T,
color = colorRampPalette(c("white",
"blue"))(50))
# Generate heatmap of the condition group.
b <- pheatmap(res$M2, main="condition group",
cluster_row = FALSE,
cluster_col = FALSE,
show_rownames = T,
show_colnames = T,
legend = T, color = colorRampPalette(c("white",
"blue"))(50))
# Plot both heatmaps together.
grid.arrange(a$gtable, b$gtable, nrow=2)
```
Figure [4\.10](edavis.html#fig:depheatmaps) shows the two heatmaps. Here, we can see that, overall, the condition group has lower activity levels. It can also be observed that people in the control group wake up at around 6:00, whereas in the condition group, activity does not start to increase until around 7:00 in the morning. Activity levels around midnight look higher during weekends compared to weekdays.
FIGURE 4\.10: Activity level heatmaps for the control and condition group.
All in all, heatmaps are a good way to look at the overall patterns of a dataset and can provide insights for exploring some of its aspects further.
4\.10 Automated EDA
-------------------
Most of the time, doing an EDA involves more or less the same steps: print summary statistics, generate boxplots, visualize variable distributions, look for missing values, etc. If your data is stored as a data frame, all those tasks require almost the same code. To speed up this process, some packages have been developed. They provide convenient functions to explore the data and generate automatic reports.
The `DataExplorer` package ([Cui 2020](#ref-dataexplorer)) has several interesting functions to explore a dataset. The following code uses the `plot_str()` function to plot the structure of `dataset` which is a data frame read from the *HOME TASKS* dataset. The complete code is available in script `EDA.R`. The output is shown in Figure [4\.11](edavis.html#fig:dfStructure). This plot shows the number of observations, the number of variables, the variable names, and their types.
```
library(DataExplorer)
dataset <- read.csv(file.path(datasets_path, "home_tasks/sound_acc.csv"))
plot_str(dataset)
```
FIGURE 4\.11: Output of function plot\_str().
Another useful function is `introduce()`. This one prints some statistics like the number of rows, columns, missing values, etc. Table [4\.1](edavis.html#tab:introduceCmd) shows the output result.
```
introduce(dataset)
```
TABLE 4\.1: Output of the introduce() function.
| Statistic | Value |
| --- | --- |
| rows | 1386 |
| columns | 29 |
| discrete\_columns | 1 |
| continuous\_columns | 28 |
| all\_missing\_columns | 0 |
| total\_missing\_values | 0 |
| complete\_rows | 1386 |
| total\_observations | 40194 |
| memory\_usage | 328680 |
The package provides more functions to explore your data. The `create_report()` function can be used to automatically call several of those functions and generate a report in html. The package also offers functions to do feature engineering such as replacing missing values, creating dummy variables (covered in chapter [5](preprocessing.html#preprocessing)), etc. For a more detailed presentation of the package’s capabilities, please check its vignette[9](#fn9).
There is another package called `inspectdf` ([Rushworth 2019](#ref-inspectdf)) with similar functionality. It also offers some functions to check whether the categorical variables are imbalanced. This is handy if one of the categorical variables is the response variable (the one we want to predict) since having imbalanced classes may pose some problems (more on this in chapter [5](preprocessing.html#preprocessing)). The following code generates a plot (Figure [4\.12](edavis.html#fig:heatHomeTasks)) that represents the counts of categorical variables. This dataset only has one categorical variable: *label*.
```
library(inspectdf)
show_plot(inspect_cat(dataset))
```
FIGURE 4\.12: Heatmap of counts of categorical variables.
Here, we can see that the most frequent class is *‘eat\_chips’* and the least frequent one is *‘sweep’*. We can confirm this by printing the actual counts:
```
table(dataset$label)
#> brush_teeth eat_chips mop_floor sweep type_on_keyboard
#> 180 282 181 178 179
#> wash_hands watch_tv
#> 180 206
```
This chapter provided a brief introduction to some exploratory data analysis tools and methods; however, this is only a tiny subset of what is available. There is already an entire book about EDA with R which I recommend you check ([Peng 2016](#ref-peng2016)).
4\.11 Summary
-------------
One of the first tasks in a data analysis pipeline is to familiarize yourself with the data. There are several techniques and tools that can provide support during this process.
* Talking with field experts can help you to better understand the data.
* Generating summary statistics is a good way to gain general insights into a dataset. In R, the `summary()` function will compute such statistics.
* For classification problems, one of the first steps is to check the distribution of classes.
* In multi\-user settings, generating a **user\-class sparsity matrix** can be useful to detect missing classes per user.
* **Boxplots** and **correlation plots** are used to understand the behavior of the variables.
* R has several packages for creating interactive plots, such as `dygraphs` for timeseries and `qtlcharts` for correlation plots.
* **Multidimensional scaling (MDS)** can be used to project high\-dimensional data into \\(2\\) or \\(3\\) dimensions so they can be plotted.
* R has some packages like `DataExplorer` that provide some degree of automation for exploring a dataset.
4\.1 Talking with Field Experts
-------------------------------
Sometimes you will be involved in the whole data analysis process starting with the idea, defining the research questions, hypotheses, conducting the data collection, and so on. In those cases, it is easier to understand the initial structure of the data since you might had been the one responsible for designing the data collection protocol.
Unfortunately (or fortunately for some), it is often the case that you are already given a dataset. It may have some documentation or not. In those cases, it becomes important to talk with the field experts that designed the study and the data collection protocol to understand what was the purpose and motivation of each piece of data. Again, it is often not easy to directly have access to those who conducted the initial study. One of the reasons may be that you found the dataset online and maybe the project is already over. In those cases, you can try to contact the authors. I have done that several times and they were very responsive. It is also a good idea to try to find experts in the field even if they were not involved in the project. This will allow you to understand things from their perspective and possibly to explain patterns/values that you may find later in the process.
4\.2 Summary Statistics
-----------------------
After having a better understanding of how the data was collected and the meaning of each variable, the next step is to find out how the actual data looks like. It is always a good idea to start looking at some summary statistics. This provides general insights about the data and will help you in selecting the next preprocessing steps. In R, an easy way to do this is with the `summary()` function. The following code reads the *SMARTPHONE ACTIVITIES* dataset and due to limited space, only prints a summary of the first \\(5\\) columns, column \\(33\\), \\(35\\), and the last one (the class).
```
# Read activities dataset.
dataset <- read.csv(file.path(datasets_path,
"smartphone_activities",
"WISDM_ar_v1.1_raw.txt"),
stringsAsFactors = T)
# Print first 5 columns,
# column 33, 35 and the last one (the class).
summary(dataset[,c(1:5,33,35,ncol(dataset))])
#> UNIQUE_ID user X0 X1
#> Min. : 1.0 Min. : 1.00 Min. :0.00000 Min. :0.00000
#> 1st Qu.:136.0 1st Qu.:10.00 1st Qu.:0.06000 1st Qu.:0.07000
#> Median :271.0 Median :19.00 Median :0.09000 Median :0.10000
#> Mean :284.4 Mean :18.87 Mean :0.09414 Mean :0.09895
#> 3rd Qu.:412.0 3rd Qu.:28.00 3rd Qu.:0.12000 3rd Qu.:0.12000
#> Max. :728.0 Max. :36.00 Max. :1.00000 Max. :0.81000
#>
#> X2 XAVG ZAVG class
#> Min. :0.00000 Min. :0 ?0.22 : 29 Downstairs: 528
#> 1st Qu.:0.08000 1st Qu.:0 ?0.21 : 27 Jogging :1625
#> Median :0.10000 Median :0 ?0.11 : 26 Sitting : 306
#> Mean :0.09837 Mean :0 ?0.13 : 26 Standing : 246
#> 3rd Qu.:0.12000 3rd Qu.:0 ?0.16 : 26 Upstairs : 632
#> Max. :0.95000 Max. :0 ?0.23 : 26 Walking :2081
#> (Other):5258
```
For numerical variables, the output includes some summary statistics like the *min*, *max*, *mean*, etc. For factor variables, the output is different. It displays the unique values with their respective counts. If there are more than six unique values, the rest is omitted. For example, the **class** variable (the last one) has \\(528\\) instances with the value *‘Downstairs’*. By looking at the *min* and *max* values of the numerical variables, we see that those are not the same for all variables. For some variables, their maximum value is \\(1\\), for others, it is less than \\(1\\) and for some others, it is greater than \\(1\\). It seems that the variables are not in the same scale. This is important because some algorithms are sensitive to different scales. In chapters [2](classification.html#classification) and [3](ensemble.html#ensemble), we mainly used decision\-tree\-based algorithms which are not sensitive to different scales, but some others like neural networks are. In chapter [5](preprocessing.html#preprocessing), a method to transform variables into the same scale will be introduced.
It is good practice to check the *min* and *max* values of all variables to see if they have different ranges since some algorithms are sensitive to different scales.
The output of the `summary()` function also shows some strange values. The statistics of the variable *XAVG* are all \\(0s\\). Some other variables like *ZAVG* were encoded as characters and it seems that the *‘?’* symbol is appended to the numbers. In summary, the `summary()` function (I know, too many summaries in this sentence), allowed us to spot some errors in the dataset. What we do with that information will depend on the domain and application.
4\.3 Class Distributions
------------------------
When it comes to behavior sensing, many of the problems can be modeled as classification tasks. This means that there are different possible categories to choose from. It is often a good idea to plot the class counts (class distribution). The following code shows how to do that for the *SMARTPHONE ACTIVITIES* dataset. First, the `table()` method is used to get the actual class counts. Then, the plot is generated with `ggplot` (see Figure [4\.1](edavis.html#fig:activitiesDistribution)).
```
t <- table(dataset$class)
t <- as.data.frame(t)
colnames(t) <- c("class","count")
p <- ggplot(t, aes(x=class, y=count, fill=class)) +
geom_bar(stat="identity", color="black") +
theme_minimal() +
geom_text(aes(label=count), vjust=-0.3, size=3.5) +
scale_fill_brewer(palette="Set1")
print(p)
```
FIGURE 4\.1: Distribution of classes.
The most common activity turned out to be *‘Walking’* with \\(2081\\) instances. It seems that the volunteers were a bit sporty since *‘Jogging’* is the second most frequent activity. One thing to note is that there are some big differences here. For example, *‘Walking’* vs. *‘Standing’*. Those differences in class counts can have an impact when training classification models. This is because classifiers try to minimize the overall error regardless of the performance of individual classes, thus, they tend to prioritize the majority classes. This is called the **class imbalance problem**. This occurs when there are many instances of some classes but fewer of some other classes. For some applications this can be a problem. For example, in fraud detection, datasets have many legitimate transactions but just a few of illegal ones. This will bias a classifier to be good at detecting legitimate transactions but what we are really interested in is in detecting the illegal transactions. This is something very common to find in behavior sensing datasets. For example in the medical domain, it is much easier to collect data from healthy controls than from patients with a given condition. In chapter [5](preprocessing.html#preprocessing), some of the oversampling techniques that can be used to deal with the class imbalance problem will be presented.
When the classes are imbalanced, it is also recommended to validate the generalization performance using *stratified subsets*. This means that when dividing the dataset into train and test sets, the distribution of classes should be preserved. For example, if the dataset has class *‘A’* and *‘B’* and \\(80\\%\\) of the instances are of type *‘A’* then both, the train set and the test set should have \\(80\\%\\) of their instances of type *‘A’*. In cross\-validation, this is known as **stratified cross\-validation**.
4\.4 User\-class Sparsity Matrix
--------------------------------
In behavior sensing, usually two things are involved: *individuals* and *behaviors*. Individuals will express different behaviors to different extents. For the activity recognition example, some persons may go jogging frequently while others may never go jogging at all. Some behaviors will be present or absent depending on each individual. We can plot this information with what I call a **user\-class sparsity matrix**. Figure [4\.2](edavis.html#fig:sparsityMatrix) shows this matrix for the activities dataset. The code to generate this plot is included in the script `EDA.R`.
FIGURE 4\.2: User\-class sparsity matrix.
The *x*\-axis shows the user ids and the *y*\-axis the classes. A colored entry (gray in this case) means that the corresponding user has at least one associated instance of the corresponding class. For example, user \\(3\\) performed all activities and thus, the dataset contains at least one instance for each of the six activities. On the other hand, user \\(25\\) only has instances for two activities. Users are sorted in descending order (users that have more classes are at the left). At the bottom of the plot, the sparsity is shown (\\(0\.18\\)). This is just the percentage of empty cells in the matrix. When all users have at least one instance of every class the sparsity is \\(0\\). When the sparsity is different from \\(0\\), one needs to decide what to do depending on the application. The following cases are possible:
* Some users did not perform all activities. If the classifier was trained with, for example, \\(6\\) classes and a user never goes *‘jogging’*, the classifier may still sometimes predict *‘jogging’* even if a particular user never does that. This can degrade the predictions’ performance for that particular user and can be worse if that user never performs other activities. A possible solution is to train different classifiers with different class subsets. If you know that some users never go *‘jogging’* then you train a classifier that excludes *‘jogging’* and use that one for that set of users. The disadvantage of this is that there are many possible combinations so you need to train many models. Since several classifiers can generate prediction scores and/or probabilities per class, another solution would be to train a single model with all classes and predict the most probable class excluding those that are not part of a particular user.
* Some users can have unique classes. For example, suppose there is a new user that has an activity labeled as *‘Eating’* which no one else has, and thus, it was not included during training. In this situation, the classifier will never predict *‘Eating’* since it was not trained for that activity. One solution could be to add the new user’s data with the new labels and retrain the model. But if not too many users have the activity *‘Eating’* then, in the worst case, they will die from starvation. In a less severe case, the overall system performance can degrade because as the number of classes increases, it becomes more difficult to find separation boundaries between categories, thus, the models become less accurate. Another possible solution is to build **user\-dependent** models for each user. These, and other types of models in **multi\-user settings** will be covered in chapter [9](multiuser.html#multiuser).
4\.5 Boxplots
-------------
Boxplots are a good way to visualize the relationship between variables and classes. R already has the `boxplot()` function. In the *SMARTPHONE ACTIVITIES* dataset, the *RESULTANT* variable represents the ‘total amount of movement’ considering the three axes ([Kwapisz, Weiss, and Moore 2010](#ref-kwapisz2010)). The following code displays a set of boxplots (one for each class) with respect to the *RESULTANT* variable (Figure [4\.3](edavis.html#fig:boxplotres)).
```
boxplot(RESULTANT ~ class, dataset)
```
FIGURE 4\.3: Boxplot of RESULTANT variable across classes.
The solid black line in the middle of each box marks the *median*[7](#fn7). Overall, we can see that this variable can be good at separating high\-intensity activities like jogging, walking, etc. from low\-intensity ones like sitting or standing. With boxplots we can inspect one feature at a time. If you want to visualize the relationship between predictors, correlation plots can be used instead. Correlation plots will be presented in the next subsection.
4\.6 Correlation Plots
----------------------
Correlation plots are useful for visualizing the relationships between pairs of variables. The most common type of relationship is the **Pearson correlation**. The Pearson correlation measures the degree of **linear** association between two variables. It takes values between \\(\-1\\) and \\(1\\). A correlation of \\(1\\) means that as one of the variables increases, the other one does too. A value of \\(\-1\\) means that as one of the variables increases, the other decreases. A value of \\(0\\) means that there is no association between the variables. Figure [4\.4](edavis.html#fig:pearsonExamples) shows several examples of correlation values. Note that the correlations of the examples at the bottom are all \\(0\\)s. Even though there are some noticeable patterns in some of the examples, their correlation is \\(0\\) because those relationships are not linear.
FIGURE 4\.4: Pearson correlation examples. (Author: Denis Boigelot. Source: Wikipedia (CC0 1\.0\)).
The Pearson correlation (denoted by \\(r\\)) between two variables \\(x\\) and \\(y\\) can be calculated as follows:
\\\[\\begin{equation}
r \= \\frac{ \\sum\_{i\=1}^{n}(x\_i\-\\bar{x})(y\_i\-\\bar{y}) }{ \\sqrt{\\sum\_{i\=1}^{n}(x\_i\-\\bar{x})^2}\\sqrt{\\sum\_{i\=1}^{n}(y\_i\-\\bar{y})^2}}
\\tag{4\.1}
\\end{equation}\\]
The following code snippet uses the `corrplot` library to generate a correlation plot (Figure [4\.5](edavis.html#fig:corrhome)) for the *HOME TASKS* dataset. Remember that this dataset contains two sets of features. One set extracted from audio and the other one extracted from the accelerometer sensor. First, the Pearson correlation between each pair of variables is computed with the `cor()` function and then the `corrplot()` function is used to generate the actual plot. Here, we specify that we only want to display the upper diagonal with `type = "upper"`. The `tl.pos` argument controls where to print the labels. In this example, at the top and in the diagonal. Setting `diag = FALSE` instructs the function not to print the principal diagonal which is all ones since it is the correlation between each variable and itself.
```
library(corrplot)
# Load home activities dataset.
dataset <- read.csv(file.path(datasets_path,
"home_tasks",
"sound_acc.csv"))
CORRS <- cor(dataset[,-1])
corrplot(CORRS, diag = FALSE, tl.pos = "td", tl.cex = 0.5,
method = "color", type = "upper")
```
FIGURE 4\.5: Correlation plot of the HOME TASKS dataset.
It looks like the correlations between sound features (v1\_) and acceleration features (v2\_) are not too high. In this case, this is good since we want both sources of information to be as independent as possible so that they capture different characteristics and complement each other as explained in section [3\.4](ensemble.html#multiviewhometasks). On the other hand, there are high correlations between some acceleration features. For example *v2\_maxY* with *v2\_sdMagnitude*.
Please, be aware that the Pearson correlation only captures linear relationships.
### 4\.6\.1 Interactive Correlation Plots
When plotting correlation plots, it is useful to also visualize the actual correlation values. When there are many variables, it becomes difficult to do that. One way to overcome this limitation is by using interactive plots. The following code snippet uses the function `iplotCorr()` from the `qtlcharts` package to generate an interactive correlation plot. The nice thing about it, is that you can actually inspect the cell values by hovering the mouse. If you click on a cell, the corresponding scatter plot is also rendered. This makes these types of plots very convenient tools to explore variable relationships.
```
library(qtlcharts) # Library for interactive plots.
# Load home activities dataset.
dataset <- read.csv(file.path(datasets_path,
"home_tasks",
"sound_acc.csv"))
iplotCorr(dataset[,-1], reorder=F,
chartOpts=list(cortitle="Correlation matrix",
scattitle="Scatterplot"))
```
Please note that at the time this book was written, printed paper does not support interactive plots. Check the online html version instead to see the actual result or run the code on a computer.
### 4\.6\.1 Interactive Correlation Plots
When plotting correlation plots, it is useful to also visualize the actual correlation values. When there are many variables, it becomes difficult to do that. One way to overcome this limitation is by using interactive plots. The following code snippet uses the function `iplotCorr()` from the `qtlcharts` package to generate an interactive correlation plot. The nice thing about it, is that you can actually inspect the cell values by hovering the mouse. If you click on a cell, the corresponding scatter plot is also rendered. This makes these types of plots very convenient tools to explore variable relationships.
```
library(qtlcharts) # Library for interactive plots.
# Load home activities dataset.
dataset <- read.csv(file.path(datasets_path,
"home_tasks",
"sound_acc.csv"))
iplotCorr(dataset[,-1], reorder=F,
chartOpts=list(cortitle="Correlation matrix",
scattitle="Scatterplot"))
```
Please note that at the time this book was written, printed paper does not support interactive plots. Check the online html version instead to see the actual result or run the code on a computer.
4\.7 Timeseries
---------------
Behavior is something that usually depends on time. Thus, being able to visualize timeseries data is essential. To illustrate how timeseries data can be plotted, I will use the `ggplot` package and the *HAND GESTURES* dataset. Recall that the data was collected with a tri\-axial accelerometer, thus, for each hand gesture we have \\(3\\)\-dimensional timeseries. Each dimension represents one of the *x*, *y*, and *z* axes. First, we read one of the text files that stores a hand gesture from user \\(1\\). Each column represents an axis. Then, we need to do some formatting. We will create a data frame with three columns. The first one is a timestep represented as integers from \\(1\\) to the number of points per axis. The second column is a factor that represents the axis *x*, *y*, or *z*. The last column contains the actual values.
```
dataset <- read.csv(file.path(datasets_path,
"hand_gestures/1/1_20130703-120056.txt"),
header = F)
# Do some preprocessing.
type <- c(rep("x", nrow(dataset)),
rep("y", nrow(dataset)),
rep("z", nrow(dataset)))
type <- as.factor(type)
values <- c(dataset$V1, dataset$V2, dataset$V3)
t <- rep(1:nrow(dataset), 3)
df <- data.frame(timestep = t, type = type, values = values)
# Print first rows.
head(df)
#> timestep type values
#> 1 1 x 0.6864655
#> 2 2 x 0.9512450
#> 3 3 x 1.3140911
#> 4 4 x 1.4317709
#> 5 5 x 1.5102241
#> 6 6 x 1.5298374
```
Note that the last column (*values*) contains the values of all axes instead of having one column per axis. Now we can use the `ggplot()` function. The lines are colored by type of axis and this is specified with `colour = type`. The `type` column should be a factor. The line type is also dependent on the type of axis and is specified with `linetype = type`. The resulting plot is shown in Figure [4\.6](edavis.html#fig:timeseriesGesture).
```
tsPlot <- ggplot(data = df,
aes(x = timestep,
y = values,
colour = type,
linetype = type)) +
ggtitle("Hand gesture '1', user 1") +
xlab("Timestep") +
ylab("Acceleration") +
geom_line(aes(color=type)) +
theme_minimal() +
theme(plot.title = element_text(hjust = 0.5),
legend.position="right",
legend.key.width = unit(1.0,"cm"),
legend.key.size = unit(0.5,"cm"))
print(tsPlot)
```
FIGURE 4\.6: Timeseries plot for hand gesture ‘1’ user 1\.
### 4\.7\.1 Interactive Timeseries
Sometimes it is useful to interactively zoom, highlight, select, etc. parts of the plot. In R, there is a package called `dygraphs` ([Vanderkam et al. 2018](#ref-dygraphs)) that generates fancy interactive plots for timeseries data[8](#fn8). The following code snippet reads a hand gesture file and adds a column at the beginning called `timestep`.
```
library(dygraphs)
# Read the hand gesture '1' for user 1.
dataset <- read.csv(file.path(datasets_path,
"hand_gestures/1/1_20130703-120056.txt"),
header = F,
col.names = c("x","y","z"))
dataset <- cbind(timestep = 1:nrow(dataset), dataset)
```
Then we can generate a minimal plot with one line of code with:
```
dygraph(dataset)
```
If you run the code, you will be able to zoom in by clicking and dragging over a region. A double click will restore the zoom. It is possible to add a lot of customization to the plots. For example, the following code adds a text title, fills the area under the lines, adds a point of interest line, and shades the region between \\(30\\) and \\(40\\).
```
dygraph(dataset, main = "Hand Gesture '1'") %>%
dyOptions(fillGraph = TRUE, fillAlpha = 0.25) %>%
dyEvent("10", "Point of interest", labelLoc = "top") %>%
dyShading(from = "30", to = "40", color = "#CCCCCC")
```
### 4\.7\.1 Interactive Timeseries
Sometimes it is useful to interactively zoom, highlight, select, etc. parts of the plot. In R, there is a package called `dygraphs` ([Vanderkam et al. 2018](#ref-dygraphs)) that generates fancy interactive plots for timeseries data[8](#fn8). The following code snippet reads a hand gesture file and adds a column at the beginning called `timestep`.
```
library(dygraphs)
# Read the hand gesture '1' for user 1.
dataset <- read.csv(file.path(datasets_path,
"hand_gestures/1/1_20130703-120056.txt"),
header = F,
col.names = c("x","y","z"))
dataset <- cbind(timestep = 1:nrow(dataset), dataset)
```
Then we can generate a minimal plot with one line of code with:
```
dygraph(dataset)
```
If you run the code, you will be able to zoom in by clicking and dragging over a region. A double click will restore the zoom. It is possible to add a lot of customization to the plots. For example, the following code adds a text title, fills the area under the lines, adds a point of interest line, and shades the region between \\(30\\) and \\(40\\).
```
dygraph(dataset, main = "Hand Gesture '1'") %>%
dyOptions(fillGraph = TRUE, fillAlpha = 0.25) %>%
dyEvent("10", "Point of interest", labelLoc = "top") %>%
dyShading(from = "30", to = "40", color = "#CCCCCC")
```
4\.8 Multidimensional Scaling (MDS)
-----------------------------------
`iterative_mds.R`
In many situations, our data is comprised of several variables. If the number of variables is more than \\(3\\) (\\(3\\)\-dimensional data), it becomes difficult to plot the relationships between data points. Take, for example, the *HOME TASKS* dataset which has \\(27\\) predictor variables from accelerometer and sound. One thing that we may want to do is to visually inspect the data points and check whether or not points from the same class are closer compared to points from different classes. This can give you an idea of the difficulty of the problem at hand. If points of the same class are very close and grouped together then, it is likely that a classification model will not have trouble separating the data points. But how do we plot such relationships with high dimensional data? One method is by using multidimensional scaling (MDS) which consists of a set of techniques aimed at reducing the dimensionality of data so it can be visualized in \\(2\\)D or \\(3\\)D. The objective is to plot the data such that the original distances between pairs of points are preserved in a given lower dimension \\(d\\).
There exist several MDS methods but most of them take a distance matrix as input (for example, Euclidean distance). In R, generating a distance matrix from a set of points is easy. As an example, let’s generate some sample data points.
```
# Generate 3 2D random points.
x <- runif(3)
y <- runif(3)
df <- data.frame(x,y)
labels <- c("a","b","c")
print(df)
#> x y
#> 1 0.4457900 0.5978606
#> 2 0.4740106 0.5019398
#> 3 0.8890085 0.4109234
```
The `dist()` function can be used to compute the distance matrix. By default, this function computes the Euclidean distance between rows:
```
dist(df)
#> 1 2
#> 2 0.09998603
#> 3 0.48102824 0.42486143
```
The output is the Euclidean distance between the pairs of rows \\((1,2\)\\), \\((1,3\)\\) and \\((2,3\)\\).
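As a quick check, the first entry can be reproduced by applying the Euclidean distance formula directly to rows \\(1\\) and \\(2\\) of the data frame shown above:
```
# Euclidean distance between rows 1 and 2 computed by hand.
sqrt((df$x[1] - df$x[2])^2 + (df$y[1] - df$y[2])^2)
#> [1] 0.09998603
```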
One way to obtain cartesian coordinates in a \\(d\\) dimensional space for \\(n\\) points from their distance matrix \\(D\\) is to use an iterative algorithm ([Borg, Groenen, and Mair 2012](#ref-borg2012)). Such an algorithm consists of the following general steps:
1. Initialize \\(n\\) data points with random coordinates \\(C\\) of dimension \\(d\\).
2. Compute a distance matrix \\(D'\\) from \\(C\\).
3. Move the coordinates \\(C\\) such that the distances of \\(D'\\) get closer to the original ones in \\(D\\).
4. Repeat from step \\(2\\) until the error between \\(D'\\) and \\(D\\) cannot be reduced any further or until some predefined max number of iterations.
The script `iterative_mds.R` implements this algorithm (`iterativeMDS()` function) which is based on the implementation from ([Segaran 2007](#ref-segaran2007)). Its first argument `D` is a distance matrix, the second argument `maxit` is the total number of iterations and the last argument `lr` controls how fast the points are moved in each iteration. The script also shows how to apply the method to the *eurodist* dataset which consists of distances between several European cities. Figure [4\.7](edavis.html#fig:mds0) shows the initial random coordinates of the cities. Then, Figure [4\.8](edavis.html#fig:mds30) shows the result after \\(30\\) iterations. Finally, Figure [4\.9](edavis.html#fig:mdsFinal) shows the final result. By only knowing the distance matrix, the algorithm was able to find a visual mapping that closely resembles the real positions.
FIGURE 4\.7: MDS initial coordinates.
FIGURE 4\.8: MDS coordinates after iteration 30\.
FIGURE 4\.9: MDS final coordinates.
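Before moving to R's built-in implementation, here is a minimal sketch of the iterative procedure described above. This is not the book's `iterativeMDS()`; the distance scaling, learning rate, and iteration count are assumptions chosen for illustration.
```
# A minimal sketch of iterative MDS. Distances are scaled
# to [0, 1] for numerical stability.
simpleIterativeMDS <- function(D, d = 2, maxit = 1000, lr = 0.005){
  D <- as.matrix(D) / max(D)
  n <- nrow(D)
  C <- matrix(runif(n * d), nrow = n) # Step 1: random coordinates.
  for(it in 1:maxit){
    Dp <- as.matrix(dist(C))          # Step 2: current distances.
    grad <- matrix(0, n, d)
    for(i in 1:n){
      for(j in 1:n){
        if(i == j || Dp[i, j] == 0) next
        # Step 3: move point i so that Dp gets closer to D.
        delta <- (Dp[i, j] - D[i, j]) / Dp[i, j]
        grad[i, ] <- grad[i, ] + delta * (C[i, ] - C[j, ])
      }
    }
    C <- C - lr * grad                # Step 4: repeat until maxit.
  }
  return(C)
}

# Apply the sketch to the eurodist dataset and plot the cities.
set.seed(1234)
coords <- simpleIterativeMDS(eurodist)
plot(coords, type = "n", xlab = "", ylab = "")
text(coords, labels = attr(eurodist, "Labels"), cex = 0.7)
```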
R already has efficient implementations to perform MDS and one of them is via the function `cmdscale()`. Its first argument is a distance matrix and the second argument \\(k\\) is the target dimension. It also has some other additional parameters that can be tuned. This function implements classical MDS based on Gower ([1966](#ref-gower1966)). The following code snippet uses the *HOME TASKS* dataset. It selects the accelerometer\-based features (v2\_\*), uses the `cmdscale()` function to reduce them to \\(2\\) dimensions, and plots the result.
```
dataset <- read.csv(file.path(datasets_path, "home_tasks/sound_acc.csv"))
colNames <- names(dataset)
v2cols <- colNames[grep(colNames, pattern = "v2_")]
cols <- as.integer(dataset$label)
labels <- unique(dataset$label)
d <- dist(dataset[,v2cols])
fit <- cmdscale(d, k = 2) # k is the number of dim
x <- fit[,1]; y <- fit[,2]
plot(x, y, xlab="Coordinate 1",
ylab="Coordinate 2",
main="Accelerometer features in 2D",
pch=19,
col=cols,
cex=0.7)
legend("topleft",
legend = labels,
pch=19,
col=unique(cols),
cex=0.7,
horiz = F)
```
We can also reduce the data into \\(3\\) dimensions and use the `scatterplot3d` package to generate a \\(3\\)D scatter plot:
```
library(scatterplot3d)
fit <- cmdscale(d,k = 3)
x <- fit[,1]; y <- fit[,2]; z <- fit[,3]
scatterplot3d(x, y, z,
xlab = "",
ylab = "",
zlab = "",
main="Accelerometer features in 3D",
pch=19,
color=cols,
tick.marks = F,
cex.symbols = 0.5,
cex.lab = 0.7,
mar = c(1,0,1,0))
legend("topleft",legend = labels,
pch=19,
col=unique(cols),
cex=0.7,
horiz = F)
```
From those plots, it can be seen that the different points are more or less grouped together based on the type of activity. Still, there are several points with no clear grouping which would make them difficult to classify. In section [3\.4](ensemble.html#multiviewhometasks) of chapter [3](ensemble.html#ensemble), we achieved a classification accuracy of \\(85\\%\\) when using only the accelerometer data.
4\.9 Heatmaps
-------------
Heatmaps are a good way to visualize the ‘intensity’ of events. For example, a heatmap can be used to depict website interactions by overlapping colored pixels relative to the number of clicks. This visualization eases the process of identifying the most relevant sections of a website. In this section, we will generate a heatmap of weekly motor activity behaviors of individuals with and without diagnosed depression. The *DEPRESJON* dataset will be used for this task. It contains motor activity recordings captured with an actigraphy device, which is like a watch but has several sensors, including accelerometers. The device registers the amount of movement every minute. The data contains recordings of \\(23\\) patients and \\(32\\) controls (those without depression). The participants wore the device for \\(13\\) days on average.
The accompanying script `auxiliary_eda.R` has the function `computeActivityHour()` that returns a matrix with the average activity level of the depressed patients or the controls (those without depression). The matrix dimension is \\(24\\times7\\) and it stores the average activity level at each day and hour. The `type` argument is used to specify if we want to compute this matrix for the depressed or control participants.
```
source("auxiliary_eda.R")
# Generate matrix with mean activity levels
# per hour for the control and condition group.
map.control <- computeActivityHour(datapath, type = "control")
map.condition <- computeActivityHour(datapath, type = "condition")
```
Since we want to compare the heatmaps of both groups, we will normalize the matrices such that the values are between \\(0\\) and \\(1\\) in both cases. The script also contains a function `normalizeMatrices()` to do the normalization.
```
# Normalize matrices.
res <- normalizeMatrices(map.control, map.condition)
```
Then, the `pheatmap` package ([Kolde 2019](#ref-pheatmap)) can be used to create the actual heatmap from the matrices.
```
library(pheatmap)
library(gridExtra)
# Generate heatmap of the control group.
a <- pheatmap(res$M1, main="control group",
cluster_row = FALSE,
cluster_col = FALSE,
show_rownames = T,
show_colnames = T,
legend = T,
color = colorRampPalette(c("white",
"blue"))(50))
# Generate heatmap of the condition group.
b <- pheatmap(res$M2, main="condition group",
cluster_row = FALSE,
cluster_col = FALSE,
show_rownames = T,
show_colnames = T,
legend = T, color = colorRampPalette(c("white",
"blue"))(50))
# Plot both heatmaps together.
grid.arrange(a$gtable, b$gtable, nrow=2)
```
Figure [4\.10](edavis.html#fig:depheatmaps) shows the two heatmaps. Here, we can see that overall, the condition group has lower activity levels. It can also be observed that people in the control group wake up at around 6:00, whereas in the condition group activity does not start to increase until around 7:00 in the morning. Activity levels around midnight look higher during weekends compared to weekdays.
FIGURE 4\.10: Activity level heatmaps for the control and condition group.
All in all, heatmaps provide a good way to look at the overall patterns of a dataset and can provide some insights to further explore some aspects of the data.
4\.10 Automated EDA
-------------------
Most of the time, doing an EDA involves more or less the same steps: print summary statistics, generate boxplots, visualize variable distributions, look for missing values, etc. If your data is stored as a data frame, all those tasks require almost the same code. To speed up this process, some packages have been developed. They provide convenient functions to explore the data and generate automatic reports.
The `DataExplorer` package ([Cui 2020](#ref-dataexplorer)) has several interesting functions to explore a dataset. The following code uses the `plot_str()` function to plot the structure of `dataset` which is a data frame read from the *HOME TASKS* dataset. The complete code is available in script `EDA.R`. The output is shown in Figure [4\.11](edavis.html#fig:dfStructure). This plot shows the number of observations, the number of variables, the variable names, and their types.
```
library(DataExplorer)
dataset <- read.csv(file.path(datasets_path, "home_tasks/sound_acc.csv"))
plot_str(dataset)
```
FIGURE 4\.11: Output of function plot\_str().
Another useful function is `introduce()`. This one prints some statistics like the number of rows, columns, missing values, etc. Table [4\.1](edavis.html#tab:introduceCmd) shows the output result.
```
introduce(dataset)
```
TABLE 4\.1: Output of the introduce() function.
| statistic | value |
| --- | --- |
| rows | 1386 |
| columns | 29 |
| discrete\_columns | 1 |
| continuous\_columns | 28 |
| all\_missing\_columns | 0 |
| total\_missing\_values | 0 |
| complete\_rows | 1386 |
| total\_observations | 40194 |
| memory\_usage | 328680 |
The package provides more functions to explore your data. The `create_report()` function can be used to automatically call several of those functions and generate a report in html. The package also offers functions to do feature engineering such as replacing missing values, creating dummy variables (covered in chapter [5](preprocessing.html#preprocessing)), etc. For a more detailed presentation of the package’s capabilities please check its vignette[9](#fn9).
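As a sketch, generating the full report takes a single call; the output file name and location are the package defaults (an html file written to the working directory):
```
# Generate an automated EDA report for the data frame loaded above.
create_report(dataset)
```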
Another package, `inspectdf` ([Rushworth 2019](#ref-inspectdf)), offers similar functionality. It also has some functions to check if the categorical variables are imbalanced. This is handy if one of the categorical variables is the response variable (the one we want to predict) since having imbalanced classes may pose some problems (more on this in chapter [5](preprocessing.html#preprocessing)). The following code generates a plot (Figure [4\.12](edavis.html#fig:heatHomeTasks)) that represents the counts of categorical variables. This dataset only has one categorical variable: *label*.
```
library(inspectdf)
show_plot(inspect_cat(dataset))
```
FIGURE 4\.12: Heatmap of counts of categorical variables.
Here, we can see that the most frequent class is *‘eat\_chips’* and the least frequent one is *‘sweep’*. We can confirm this by printing the actual counts:
```
table(dataset$label)
#> brush_teeth eat_chips mop_floor sweep type_on_keyboard
#> 180 282 181 178 179
#> wash_hands watch_tv
#> 180 206
```
This chapter provided a brief introduction to some exploratory data analysis tools and methods; however, this is only a tiny subset of what is available. There is an entire book about EDA with R which I recommend ([Peng 2016](#ref-peng2016)).
4\.11 Summary
-------------
One of the first tasks in a data analysis pipeline is to familiarize yourself with the data. There are several techniques and tools that can provide support during this process.
* Talking with field experts can help you to better understand the data.
* Generating summary statistics is a good way to gain general insights of a dataset. In R, the `summary()` function will compute such statistics.
* For classification problems, one of the first steps is to check the distribution of classes.
* In multi\-user settings, generating a **user\-class sparsity matrix** can be useful to detect missing classes per user.
* **Boxplots** and **correlation plots** are used to understand the behavior of the variables.
* R has several packages for creating interactive plots such as `dygraphs` for timeseries and `qtlcharts` for correlation plots.
* **Multidimensional scaling (MDS)** can be used to project high\-dimensional data into \\(2\\) or \\(3\\) dimensions so they can be plotted.
* R has some packages like `DataExplorer` that provide some degree of automation for exploring a dataset.
Chapter 5 Preprocessing Behavioral Data
=======================================
`preprocessing.R`
Behavioral data comes in many flavors and forms, but when training predictive models, the data needs to be in a particular format. Some sources of variation when collecting data are:
* **Sensors’ format.** Each type of sensor and manufacturer stores data in a different format. For example, .csv files, binary files, images, proprietary formats, etc.
* **Sampling rate.** The sampling rate is how many measurements are taken per unit of time. For example, a heart rate sensor may return a single value every second, thus, the sampling rate is \\(1\\) Hz. An accelerometer that captures \\(50\\) values per second has a sampling rate of \\(50\\) Hz.
* **Scales and ranges.** Some sensors may return values in *degrees* (e.g., a temperature sensor) while others may return values in some other scale, for example, in *centimeters* for a proximity sensor. Furthermore, ranges can also vary. That is, a sensor may capture values in the range of \\(0\-1000\\), for example.
During the data exploration step (chapter [4](edavis.html#edavis)) we may also find that values are missing, inconsistent, noisy, and so on, thus, we also need to take care of that.
This chapter provides an overview of some common methods used to clean and preprocess the data before one can start training reliable models.
Several of the methods presented here can lead to *information injection* if not implemented correctly, that is, inadvertently transferring information from the train set to the test set, which can cause overfitting. This is undesirable because both sets need to be independent so that the generalization performance can be estimated accurately. You can find more details about information injection and how to avoid it in section [5\.5](preprocessing.html#infoinjection) of this chapter.
5\.1 Missing Values
-------------------
Many datasets will have missing values and we need ways to identify and deal with them. Missing data could be due to faulty sensors, processing errors, unavailable information, and so on. In this section, I present some tools that ease the identification of missing values. Later, some imputation methods used to fill in the missing values are presented.
To demonstrate some of these concepts, the *SHEEP GOATS* dataset ([Kamminga et al. 2017](#ref-kamminga2017)) will be used. Due to its large size, the files of this dataset are not included with the accompanying book files, but they can be downloaded from [https://easy.dans.knaw.nl/ui/datasets/id/easy\-dataset:76131](https://easy.dans.knaw.nl/ui/datasets/id/easy-dataset:76131). The data were released as part of a study about animal behaviors. The researchers placed inertial sensors on sheep and goats and tracked their behavior during one day. They also video\-recorded the session and annotated the data with different types of behaviors such as *grazing*, *fighting*, *scratch\-biting*, etc. The device was placed on the neck with a random orientation and it collected acceleration, orientation, magnetic field, temperature, and barometric pressure. Figure [5\.1](preprocessing.html#fig:sheepsensor) shows a schematic view of the setting.
FIGURE 5\.1: Device placed on the neck of the sheep. (Author: LadyofHats. Source: Wikipedia (CC0 1\.0\)).
We will start by loading a .csv file that corresponds to one of the sheep and check if there are missing values. The `naniar` package ([Tierney et al. 2019](#ref-naniar)) offers a set of different functions to explore and deal with missing values. The `gg_miss_var()` function allows you to quickly check which variables have missing values and how many. The following code loads the data and then plots the number of missing values in each variable.
```
library(naniar)
# Path to S1.csv file.
datapath <- file.path(datasets_path,
"sheep_goats","S1.csv")
# Can take some seconds to load since the file is big.
df <- read.csv(datapath, stringsAsFactors = TRUE)
# Plot missing values.
gg_miss_var(df)
```
FIGURE 5\.2: Missing values counts.
Figure [5\.2](preprocessing.html#fig:ggmissvar) shows the resulting output. The plot shows that there are missing values in four variables: *pressure*, *cz*, *cy*, and *cx*. The last three correspond to the compass (magnetometer). For *pressure*, the number of missing values is more than \\(2\\) million! For the rest, it is a bit less (more than \\(1\\) million).
To further explore this issue, we can plot each observation in a row with the function `vis_miss()`.
```
# Select first 1000 rows.
# It can take some time to plot bigger data frames.
vis_miss(df[1:1000,])
```
FIGURE 5\.3: Rows with missing values.
Figure [5\.3](preprocessing.html#fig:vismiss) shows every observation per row, with missing values colored black (if any). From this image, it seems that the missing values are systematic. There is a clear stripe pattern, especially for the compass variables. Based on these observations, it doesn’t look like random sensor failures or random noise.
If we explore the data frame’s values, for example with the RStudio viewer (Figure [5\.4](preprocessing.html#fig:missvaluesdf)), two things can be noted. First, for the compass values, there is a missing value for each present value. Thus, it looks like \\(50\\%\\) of compass values are missing. For *pressure*, it seems that there are \\(7\\) missing values for each available value.
FIGURE 5\.4: Displaying the data frame in RStudio. Source: Data from Kamminga, MSc J.W. (University of Twente) (2017\): Generic online animal activity recognition on collar tags. DANS. [https://doi.org/10\.17026/dans\-zp6\-fmna](https://doi.org/10.17026/dans-zp6-fmna)
So, what could be the root cause of those missing values? Remember that at the beginning of this chapter it was mentioned that **one of the sources of variation is sampling rate**. If we look at the data set documentation, all sensors have a sampling rate of \\(200\\) Hz except for the compass and the pressure sensor. The compass has a sampling rate of \\(100\\) Hz. That is half compared to the other sensors! This explains why \\(50\\%\\) of the rows are missing. Similarly, the pressure sensor has a sampling rate of \\(25\\) Hz. By visualizing and then inspecting the missing data, we have just found out that the missing values are not caused by random noise or sensor failures but because some sensors are not as fast as others!
Now that we know there are missing values we need to decide what to do with them. The following subsection lists some ways to deal with missing values.
### 5\.1\.1 Imputation
Imputation is the process of filling in missing values. One of the reasons for imputing missing values is that some predictive models cannot deal with missing data. Another reason is that it may help in increasing the predictions’ performance, for example, if we are trying to predict the sheep behavior from a discrete set of categories based on the inertial data. There are different ways to handle missing values:
* **Discard rows.** If the rows with missing values are not too many, they can simply be discarded.
* **Mean value.** Fill the missing values with the mean value of the corresponding variable. This method is simple and can be effective. One of the problems with this method is that it is sensitive to outliers (as it is the arithmetic mean).
* **Median value.** The median is robust against outliers, thus, it can be used instead of the arithmetic mean to fill the gaps.
* **Replace with the closest value.** For timeseries data, as is the case of the sheep readings, one could also replace missing values with the closest known value.
* **Predict the missing values.** Use the other variables to predict the missing one. This can be done by training a predictive model. A regressor if the variable is numeric or a classifier if the variable is categorical.
Another problem with the mean and median values is that they can be correlated with other variables, for example, with the class that we want to predict. One way to avoid this is to compute the mean (or median) for each class, but still, some hidden correlations may bias the estimates.
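Before moving to model\-based imputation, here is a minimal sketch of the two simplest strategies on a made\-up numeric vector with missing values (the vector `v` is purely for illustration):
```
# A hypothetical vector with missing values.
v <- c(1.2, NA, 1.5, NA, 2.0)
# Mean imputation (sensitive to outliers).
v.mean <- v
v.mean[is.na(v.mean)] <- mean(v, na.rm = TRUE)
v.mean
#> [1] 1.200000 1.566667 1.500000 1.566667 2.000000
# Closest-value imputation: carry the last known value forward.
v.locf <- v
for(i in which(is.na(v.locf))){
  if(i > 1) v.locf[i] <- v.locf[i - 1]
}
v.locf
#> [1] 1.2 1.2 1.5 1.5 2.0
```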
In R, the `simputation` package ([van der Loo 2019](#ref-simputation)) has implemented various imputation techniques including: group\-wise median imputation, model\-based with linear regression, random forests, etc. The following code snippet (complete code is in `preprocessing.R`) uses the `impute_lm()` method to impute the missing values in the sheep data using linear regression.
```
library(simputation)
# Replace NaN with NAs.
# Since missing values are represented as NaN,
# first we need to replace them with NAs.
# Code to replace NaN with NA was taken from Hong Ooi:
# https://stackoverflow.com/questions/18142117/#
# how-to-replace-nan-value-with-zero-in-a-huge-data-frame/18143097
is.nan.data.frame <- function(x)do.call(cbind, lapply(x, is.nan))
df[is.nan(df)] <- NA
# Use simputation package to impute values.
# The first 4 columns are removed since we
# do not want to use them as predictor variables.
imp_df <- impute_lm(df[,-c(1:4)],
cx + cy + cz + pressure ~ . - cx - cy - cz - pressure)
# Print summary.
summary(imp_df)
```
Originally, the missing values are encoded as `NaN` but in order to use the `simputation` package functions, we need them as `NA`. First, `NaNs` are replaced with `NA`. The first argument of `impute_lm()` is a data frame and the second argument is a formula. We discard the first \\(4\\) variables of the data frame since we do not want to use them as predictors. The left\-hand side of the formula (everything before the \~ symbol) specifies the variables we want to impute. The right\-hand side specifies the variables used to build the linear models. The ‘.’ indicates that we want to use all variables while the ‘\-’ is used to specify variables that we do not want to include. The vignettes[10](#fn10) of the package contain more detailed examples.
The mean, median, and any predictive models used to infer missing values should be computed using data only from the train set to avoid information injection.
5\.2 Smoothing
--------------
Smoothing comprises a set of algorithms with the aim of highlighting patterns in the data or as a preprocessing step to clean the data and remove noise. These methods are widely used on timeseries data but also with spatio\-temporal data such as images. With timeseries data, they are often used to emphasize long\-term patterns and reduce short\-term signal artifacts. For example, in Figure [5\.5](preprocessing.html#fig:smoothingStock)[11](#fn11) a stock chart was smoothed using two methods: moving average and exponential moving average. The smoothed versions make it easier to spot the overall trend rather than focusing on short\-term variations.
FIGURE 5\.5: Stock chart with two smoothed versions. One with moving average and the other one with an exponential moving average. (Author: Alex Kofman. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
The most common smoothing method for timeseries is the **simple moving average**. With this method, the first element of the resulting smoothed series is computed by taking the average of the elements within a window of predefined size. The window’s position starts at the first element of the original series. The second element is computed in the same way but after moving the window one position to the right. Figure [5\.6](preprocessing.html#fig:movavgsteps) shows this procedure on a series with \\(5\\) elements and a window size of size \\(3\\). After the third iteration, it is not possible to move the window one more step to the right while covering \\(3\\) elements since the end of the timeseries has been reached. Because of this, the smoothed series will have some missing values at the end. Specifically, it will have \\(w\-1\\) fewer elements where \\(w\\) is the window size. A simple solution is to compute the average of the elements covered by the window even if they are less than the window size.
FIGURE 5\.6: Simple moving average step by step with window size \= 3\. Top: original array; bottom: smoothed array.
In the previous example the average is taken from the elements to the right of the pointer. There is a variation called *centered moving average* in which the center point of the window has the same elements to the left and right (Figure [5\.7](preprocessing.html#fig:centeredmovavg)). Note that with this version of moving average some values at the beginning and at the end will be empty. Also note that the window size should be odd. In practice, both versions produce very similar results.
FIGURE 5\.7: Centered moving average step by step with window size \= 3\.
In the `preprocessing.R` script, the function `movingAvg()` implements the simple moving average procedure. In the following code, note that the output vector will have the same size as the original one, but the last elements will contain `NA` values when the window cannot be moved any longer to the right.
```
movingAvg <- function(x, w = 5){
# Applies moving average to x with a window of size w.
n <- length(x) # Total number of points.
smoothedX <- rep(NA, n)
for(i in 1:(n-w+1)){
smoothedX[i] <- mean(x[i:(i-1+w)])
}
return(smoothedX)
}
```
We can apply this function to a segment of accelerometer data from the *SHEEP GOATS* dataset.
```
datapath <- file.path(datasets_path, "sheep_goats", "S1.csv")
df <- read.csv(datapath)
# Only select a subset of the whole series.
dfsegment <- df[df$timestamp_ms < 6000,]
x <- dfsegment$ax
# Compute simple moving average with a window of size 21.
smoothed <- movingAvg(x, w = 21)
```
Figure [5\.8](preprocessing.html#fig:smoothingExample) shows the result after plotting both the original vector and the smoothed one. It can be observed that many of the small peaks are no longer present in the smoothed version. The window size is a parameter that needs to be defined by the user. If it is set too large some important information may be lost from the signal.
FIGURE 5\.8: Original time series and smoothed version using a moving average window of size 21\.
One of the disadvantages of this method is that the arithmetic mean is sensitive to noise. Instead of computing the mean, one can use the median, which is more robust against outlier values. There also exist other derived methods (not covered here) such as the weighted moving average and the exponential moving average[12](#fn12), which assign unequal weights to the data points within the window. Smoothing a signal before feature extraction is a common practice and is used to remove some of the unwanted noise.
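As a sketch of the exponential variant, the following implementation is an illustration only (the function name and the smoothing factor `alpha` are assumptions, not taken from the book’s code):
```
# A minimal exponential moving average; alpha in (0, 1]
# controls how much weight the most recent point receives.
expMovingAvg <- function(x, alpha = 0.2){
  s <- numeric(length(x))
  s[1] <- x[1]
  for(i in 2:length(x)){
    s[i] <- alpha * x[i] + (1 - alpha) * s[i - 1]
  }
  return(s)
}
# Apply it to the same accelerometer segment as before.
smoothedExp <- expMovingAvg(x, alpha = 0.1)
```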
5\.3 Normalization
------------------
Having variables on different scales can have an impact during learning and at inference time. Consider a study where the data was collected using a wristband that has a light sensor and an accelerometer. The measurement unit of the light sensor is *lux* whereas the accelerometer’s is \\(m/s^2\\). After inspecting the dataset, you realize that the *min* and *max* values of the light sensor are \\(0\\) and \\(155\\), respectively. The *min* and *max* values for the accelerometer are \\(\-0\.4\\) and \\(7\.45\\), respectively. Why is this a problem? Several learning methods are based on distances, such as \\(k\\)\-NN and nearest centroid; thus, distances will be dominated by variables with larger scales. Furthermore, other methods like neural networks (covered in chapter [8](deeplearning.html#deeplearning)) are also affected by different scales. They have a harder time learning their parameters (weights) when data is not normalized. On the other hand, some methods are not affected, for example, tree\-based learners such as decision trees and random forests. Since most of the time you may want to try different methods, it is a good idea to normalize your predictor variables.
A common normalization technique is to scale all the variables between \\(0\\) and \\(1\\). Suppose there is a numeric vector \\(x\\) that you want to normalize between \\(0\\) and \\(1\\). Let \\(max(x)\\) and \\(min(x)\\) be the maximum and minimum values of \\(x\\). The following can be used to normalize the \\(i^{th}\\) value of \\(x\\):
\\\[\\begin{equation}
z\_i \= \\frac{x\_i \- min(x)}{max(x)\-min(x)}
\\end{equation}\\]
where \\(z\_i\\) is the new normalized \\(i^{th}\\) value. Thus, the formula is applied to every value in \\(x\\). The \\(max(x)\\) and \\(min(x)\\) values are parameters learned from the data. Notice that if you split your data into training and test sets, the *max* and *min* values (the parameters) must be learned only from the train set and then used to normalize both the train and test set. This is to avoid information injection (section [5\.5](preprocessing.html#infoinjection)). Be also aware that after the parameters are learned from the train set, and once the model is deployed in production, it is likely that some input values will be ‘out of range’. If the train set is not very representative of what you will find in real life, some values will probably be smaller than the learned \\(min(x)\\) and some will be greater than the learned \\(max(x)\\). Even if the train set is representative of the real\-life phenomenon, there is nothing that will prevent some values from being out of range. A simple way to handle this is to truncate the values. In some cases, we do know the possible minimum and maximum values. For example in image processing, images are usually represented as color intensities between \\(0\\) and \\(255\\). Here, we know that the *min* value cannot be less than \\(0\\) and the *max* value cannot be greater than \\(255\\).
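As a quick numeric check of the formula on a made\-up vector:
```
# Normalize a small vector between 0 and 1.
x <- c(2, 5, 10)
(x - min(x)) / (max(x) - min(x))
#> [1] 0.000 0.375 1.000
```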
Let’s see an example using the *HOME TASKS* dataset. The following code first loads the dataset and prints a summary of the first \\(4\\) variables.
```
# Load home activities dataset.
dataset <- read.csv(file.path(datasets_path,
"home_tasks",
"sound_acc.csv"),
stringsAsFactors = T)
# Check first 4 variables' min and max values.
summary(dataset[,1:4])
#> label v1_mfcc1 v1_mfcc2 v1_mfcc3
#> brush_teeth :180 Min. :103 Min. :-17.20 Min. :-20.90
#> eat_chips :282 1st Qu.:115 1st Qu.: -8.14 1st Qu.: -7.95
#> mop_floor :181 Median :120 Median : -3.97 Median : -4.83
#> sweep :178 Mean :121 Mean : -4.50 Mean : -5.79
#> type_on_keyboard:179 3rd Qu.:126 3rd Qu.: -1.30 3rd Qu.: -3.09
#> wash_hands :180 Max. :141 Max. : 8.98 Max. : 3.27
#> watch_tv :206
```
Since *label* is a categorical variable, the class counts are printed. For the three remaining variables, we get some statistics including their *min* and *max* values. As we can see, the *min* value of *v1\_mfcc1* is very different from the *min* value of *v1\_mfcc2*, and the same is true for the maximum values. Thus, we want all variables to be between \\(0\\) and \\(1\\) so that scale\-sensitive classification methods are not adversely affected. Let’s assume we want to train a classifier with this data, so we divide it into train and test sets:
```
# Divide into 50/50% train and test set.
set.seed(1234)
folds <- sample(2, nrow(dataset), replace = T)
trainset <- dataset[folds == 1,]
testset <- dataset[folds == 2,]
```
Now we can define a function that normalizes every numeric or integer variable, skipping variables of any other type. The function takes as input a train set and a test set. The parameters (*max* and *min*) are learned from the train set and used to normalize both the train and test sets.
```
# Define a function to normalize the train and test set
# based on the parameters learned from the train set.
normalize <- function(trainset, testset){
# Iterate columns
for(i in 1:ncol(trainset)){
c <- trainset[,i] # trainset column
c2 <- testset[,i] # testset column
# Skip if the variable is not numeric or integer.
if(class(c) != "numeric" && class(c) != "integer")next;
# Learn the max value from the trainset's column.
max <- max(c, na.rm = T)
# Learn the min value from the trainset's column.
min <- min(c, na.rm = T)
# If all values are the same set it to max.
if(max==min){
trainset[,i] <- max
testset[,i] <- max
}
else{
# Normalize trainset's column.
trainset[,i] <- (c - min) / (max - min)
# Truncate max values in testset.
idxs <- which(c2 > max)
if(length(idxs) > 0){
c2[idxs] <- max
}
# Truncate min values in testset.
idxs <- which(c2 < min)
if(length(idxs) > 0){
c2[idxs] <- min
}
# Normalize testset's column.
testset[,i] <- (c2 - min) / (max - min)
}
}
return(list(train=trainset, test=testset))
}
```
Now we can use the previous function to normalize the train and test sets. The function returns a list of two elements: a normalized train and test sets.
```
# Call our function to normalize each set.
normalizedData <- normalize(trainset, testset)
# Inspect the normalized train set.
summary(normalizedData$train[,1:4])
#> label v1_mfcc1 v1_mfcc2 v1_mfcc3
#> brush_teeth : 88 Min. :0.000 Min. :0.000 Min. :0.000
#> eat_chips :139 1st Qu.:0.350 1st Qu.:0.403 1st Qu.:0.527
#> mop_floor : 91 Median :0.464 Median :0.590 Median :0.661
#> sweep : 84 Mean :0.474 Mean :0.568 Mean :0.616
#> type_on_keyboard: 94 3rd Qu.:0.613 3rd Qu.:0.721 3rd Qu.:0.730
#> wash_hands :102 Max. :1.000 Max. :1.000 Max. :1.000
#> watch_tv : 99
# Inspect the normalized test set.
summary(normalizedData$test[,1:4])
#> label v1_mfcc1 v1_mfcc2 v1_mfcc3
#> brush_teeth : 92 Min. :0.0046 Min. :0.000 Min. :0.000
#> eat_chips :143 1st Qu.:0.3160 1st Qu.:0.421 1st Qu.:0.500
#> mop_floor : 90 Median :0.4421 Median :0.606 Median :0.644
#> sweep : 94 Mean :0.4569 Mean :0.582 Mean :0.603
#> type_on_keyboard: 85 3rd Qu.:0.5967 3rd Qu.:0.728 3rd Qu.:0.724
#> wash_hands : 78 Max. :0.9801 Max. :1.000 Max. :1.000
#> watch_tv :107
```
Now, the variables on the train set are exactly between \\(0\\) and \\(1\\) for all numeric variables. For the test set, not all *min* values will be exactly \\(0\\) but a bit higher. Conversely, some *max* values will be lower than \\(1\\). This is because the test set may have a *min* value that is greater than the *min* value of the train set and a *max* value that is smaller than the *max* value of the train set. However, after normalization, all values are guaranteed to be within \\(0\\) and \\(1\\).
5\.4 Imbalanced Classes
-----------------------
Ideally, classes will be uniformly distributed, that is, there is approximately the same number of instances per class. In real life (as always), this is not the case. And in many situations (more often than you may think), **class counts are heavily skewed**. When this happens the dataset is said to be imbalanced. Take as an example, bank transactions. Most of them will be normal, whereas a small percent will be fraudulent. In the medical field this is very common. It is easier to collect samples from healthy individuals compared to samples from individuals with some rare conditions. For example, a database may have thousands of images from healthy tissue but just a dozen with signs of cancer. Of course, having just a few disease cases is a good thing for the world, but not for machine learning methods. This is because predictive models will try to learn their parameters such that the error is reduced, and most of the time this error is based on accuracy. Thus, the models will be biased towards making correct predictions for the majority classes (the ones with higher counts) while paying little attention to minority classes. This is a problem because for some applications we are more interested in detecting the minority classes (illegal transactions, cancer cases, etc.).
Suppose a given database has \\(998\\) instances with class *‘no cancer’* and only \\(2\\) instances with class *‘cancer’*. A trivial classifier that always predicts *‘no cancer’* will have an accuracy of \\(99\.8\\%\\) (\\(998/1000\\)) but will not be able to detect any of the *‘cancer’* cases! So, what can we do?
* **Collect more data from the minority class.** In practice, this can be difficult, expensive, etc. or just impossible because the study was conducted a long time ago and it is no longer possible to replicate the context.
* **Delete data from the majority class.** Randomly discard instances from the majority class. In the previous example, we could discard \\(996\\) instances of type *‘no cancer’*. The problem with this is that we end up with insufficient data to learn good predictive models. If you have a huge dataset this can be an option, but in practice, this is rarely the case and you have the risk of having underrepresented samples.
* **Create synthetic data.** One of the most common solutions is to create synthetic data from the minority classes. In the following sections two methods that do that will be discussed: *random oversampling* and *Synthetic Minority Oversampling Technique (SMOTE)*.
* **Adapt your learning algorithm.** Another option is to use an algorithm that takes into account class counts and weights them accordingly. This is called *cost\-sensitive classification*. For example, the `rpart()` method to train decision trees has a `weights` parameter which can be used to assign more weight to minority classes, as sketched below. When training neural networks it is also possible to assign different weights to different classes.
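As a sketch of this last option, case weights inversely proportional to the class frequencies could be passed to `rpart()`. Here `df` is a hypothetical data frame with numeric predictors `x`, `y` and a factor column `label`; this weighting scheme is one common choice, not the only one:
```
library(rpart)
# Weight each instance by the inverse of its class frequency
# so that errors on the minority class cost more during training.
counts <- table(df$label)
w <- as.numeric(1 / counts[as.character(df$label)])
tree <- rpart(label ~ x + y, data = df, weights = w)
```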
The following two subsections cover two techniques to create synthetic data.
### 5\.4\.1 Random Oversampling
`shiny_random-oversampling.Rmd`
This method consists of duplicating data points from the minority class. The following code will create an imbalanced dataset with \\(200\\) instances of class *‘class1’* and only \\(15\\) instances of class *‘class2’*.
```
set.seed(1234)
# Create random data
n1 <- 200 # Number of points of majority class.
n2 <- 15 # Number of points of minority class.
# Generate random values for class1.
x <- rnorm(mean = 0, sd = 0.5, n = n1)
y <- rnorm(mean = 0, sd = 1, n = n1)
df1 <- data.frame(label=rep("class1", n1),
x=x, y=y, stringsAsFactors = T)
# Generate random values for class2.
x2 <- rnorm(mean = 1.5, sd = 0.5, n = n2)
y2 <- rnorm(mean = 1.5, sd = 1, n = n2)
df2 <- data.frame(label=rep("class2", n2),
x=x2, y=y2, stringsAsFactors = T)
# This is our imbalanced dataset.
imbalancedDf <- rbind(df1, df2)
# Print class counts.
summary(imbalancedDf$label)
#> class1 class2
#> 200 15
```
If we want to exactly balance the class counts, we will need \\(185\\) additional instances of type *‘class2’*. We can use our well known `sample()` function to pick \\(185\\) points from data frame `df2` (which contains only instances of class *‘class2’*) and store them in `new.points`. Notice the `replace = T` parameter. This allows the function to pick repeated elements. Then, the new data points are appended to the imbalanced data set which now becomes balanced.
```
# Generate new points from the minority class.
new.points <- df2[sample(nrow(df2), size = 185, replace = T),]
# Add new points to the imbalanced dataset and save the
# result in balancedDf.
balancedDf <- rbind(imbalancedDf, new.points)
# Print class counts.
summary(balancedDf$label)
#> class1 class2
#> 200 200
```
The code associated with this chapter includes a shiny app[13](#fn13) `shiny_random-oversampling.Rmd`. Shiny apps are interactive web applications. This shiny app graphically demonstrates how random oversampling works. Figure [5\.9](preprocessing.html#fig:shinyOversampling) depicts the shiny app. The user can move the slider to generate new data points. Please note that the boundaries do not change as the number of instances increases (or decreases). This is because the new points are just duplicates so they overlap with existing ones.
FIGURE 5\.9: Shiny app with random oversampling example.
It is a common mistake to generate synthetic data on the entire dataset before splitting into train and test sets. This will cause your model to be highly overfitted since several duplicate data points can end up in both sets. Create synthetic data *only* from the *train set*.
Random oversampling is simple and effective in many cases. A potential problem is that the models can overfit since there are many duplicate data points. To overcome this, the SMOTE method creates entirely new instances instead of duplicating them.
### 5\.4\.2 SMOTE
`shiny_smote-oversampling.Rmd`
SMOTE is another method that can be used to augment the data points from the minority class ([Chawla et al. 2002](#ref-chawla2002smote)). One of the limitations of random oversampling is that it creates duplicates. This has the effect of having fixed boundaries and the classifiers can overspecialize. To avoid this, SMOTE creates entirely new data points.
SMOTE operates on the feature space (on the predictor variables). To generate a new point, take the difference between a given point \\(a\\) (taken from the minority class) and one of its randomly selected nearest neighbors \\(b\\). The difference is multiplied by a random number between \\(0\\) and \\(1\\) and added to \\(a\\). This has the effect of selecting a point along the line between \\(a\\) and \\(b\\). Figure [5\.10](preprocessing.html#fig:newpoint) illustrates the procedure of generating a new point in two dimensions.
FIGURE 5\.10: Synthetic point generation.
The number of nearest neighbors \\(k\\) is a parameter defined by the user. In their original work ([Chawla et al. 2002](#ref-chawla2002smote)), the authors set \\(k\=5\\). Depending on how many new samples need to be generated, \\(k'\\) neighbors are randomly selected from the original \\(k\\) nearest neighbors. For example, if \\(200\\%\\) oversampling is needed, \\(k'\=2\\) neighbors are selected at random out of the \\(k\=5\\) and one data point is generated with each of them. This is performed for each data point in the minority class.
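In code, generating one synthetic point from a point \\(a\\) and one of its selected neighbors \\(b\\) is straightforward (the vectors below are made up for illustration):
```
a <- c(1.0, 2.0) # Minority class point.
b <- c(2.0, 3.5) # One of its nearest neighbors.
gap <- runif(1)  # Random number between 0 and 1.
synthetic <- a + gap * (b - a) # A point on the segment from a to b.
```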
An implementation of SMOTE is also provided in `auxiliary_functions/functions.R`. An example of its application can be found in `preprocessing.R` in the corresponding directory of this chapter’s code. The `smote.class(completeDf, targetClass, N, k)` function has several arguments. The first one is the data frame that contains the minority and majority class, that is, the complete dataset. The second argument is the minority class label. The third argument `N` is the percent of smote and the last one (`k`) is the number of nearest neighbors to consider.
The following code shows how the function `smote.class()` can be used to generate new points from the imbalanced dataset that was introduced in the previous section ‘Random Oversampling’. Recall that it has \\(200\\) points of class *‘class1’* and \\(15\\) points of class *‘class2’*.
```
# To balance the dataset, we need to oversample 1200%.
# This means that the method will create 12 * 15 new points.
ceiling(180 / 15) * 100
#> [1] 1200
# Percent to oversample.
N <- 1200
# Generate new data points.
synthetic.points <- smote.class(imbalancedDf,
targetClass = "class2",
N = N,
k = 5)$synthetic
# Append the new points to the original dataset.
smote.balancedDf <- rbind(imbalancedDf,
synthetic.points)
# Print class counts.
summary(smote.balancedDf$label)
#> class1 class2
#> 200 195
```
The parameter `N` is set to \\(1200\\). This will create \\(12\\) new data points for every minority class instance (\\(15\\)). Thus, the method will return \\(180\\) instances. In this case, \\(k\\) is set to \\(5\\). Finally, the new points are appended to the imbalanced dataset having a total of \\(195\\) samples of class ‘class2’.
Again, a shiny app is included with this chapter’s code. Figure [5\.11](preprocessing.html#fig:shinySMOTE) shows the distribution of the original points and after applying SMOTE. Note how the boundary of *‘class2’* changes after applying SMOTE. It expands slightly in all directions. This is particularly visible in the lower right corner. This boundary expansion is what allows the classifiers to generalize better as compared to training them using random oversampled data.
FIGURE 5\.11: Shiny app with SMOTE example. a) Before applying SMOTE. b) After applying SMOTE.
5\.5 Information Injection
--------------------------
The purpose of dividing the data into train/validation/test sets is to accurately estimate the generalization performance of a predictive model when it is presented with previously unseen data points. So, it is advisable to construct such set splits in a way that they are as independent as possible. Often, before training a model and generating predictions, the data needs to be preprocessed. Preprocessing operations may include imputing missing values, normalizing, and so on. During those operations, some information can be inadvertently transferred from the train to the test set thus, violating the assumption that they are independent.
Information injection occurs when information from the train set is transferred to the test set. When having train/validation/test sets, information injection occurs when information from the train set leaks into the validation and/or test set. It also happens when information from the validation set is transferred to the test set.
Suppose that, as one of the preprocessing steps, you need to subtract the mean value of a feature from each instance. For simplicity, suppose the dataset has a single numeric feature \\(x\\), a categorical response variable \\(y\\), and \\(n\\) rows. Since you want to predict \\(y\\) given \\(x\\), you train a classifier by splitting your data into train and test sets as usual. So you proceed with the steps depicted in Figure [5\.12](preprocessing.html#fig:injection1).
FIGURE 5\.12: Information injection example. a) Parameters are learned from the entire dataset. b) The dataset is split into train/test sets. c) The learned parameters are applied to both sets and information injection occurs.
First, (a) you compute the \\(mean\\) value of the variable \\(x\\) from the entire dataset. This \\(mean\\) is known as the parameter. In this case, there is only one parameter but there could be several. For example, we could additionally need to compute the standard deviation. Once we know the mean value, the dataset is divided into train and test sets (b). Finally, the \\(mean\\) is subtracted from each element in both train and test sets (c). Without realizing it, we have transferred information from the train set to the test set! But, how did this happen? Well, the *mean* parameter was computed using information from the *entire* dataset. Then, that \\(mean\\) parameter was used on the test set, but it was calculated using data points that also belong to that same test set!
Figure [5\.13](preprocessing.html#fig:injection2) shows how to correctly do the preprocessing to avoid information injection. The dataset is first split (a). Then, the \\(mean\\) parameter is calculated only with data points from the train set. Finally, the *mean* parameter is subtracted from both sets. Here, the mean contains information only from the train set.
FIGURE 5\.13: No information injection example. a) The dataset is first split into train/test sets. b) Parameters are learned only from the train set. c) The learned parameters are applied to the test set.
In the previous example, we assumed that the dataset was split into train and test sets only once. The same idea applies when performing \\(k\\)\-fold cross\-validation. In each of the \\(k\\) iterations, the preprocessing parameters need to be learned only from the train split.
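The following sketch contrasts the two procedures on a hypothetical numeric vector:
```
set.seed(1234)
x <- rnorm(100)
train.idx <- 1:70 # The first 70 points form the train set.
# Wrong: the mean parameter is learned from the entire dataset.
m.wrong <- mean(x)
# Right: the mean parameter is learned from the train set only
# and then applied to both sets.
m <- mean(x[train.idx])
x.train <- x[train.idx] - m
x.test <- x[-train.idx] - m
```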
5\.6 One\-hot Encoding
----------------------
Several algorithms need some or all of their input variables (the response and/or the predictors) to be in numeric format. In R, for most classification algorithms, the class is usually encoded as a factor but some implementations may require it to be numeric. Sometimes there may be categorical variables as predictors such as gender (*‘male’*, *‘female’*). Some algorithms need those to be in numeric format because they, for example, are based on distance computations such as \\(k\\)\-NN. Other models need to perform arithmetic operations on the predictor variables like neural networks.
One way to convert categorical variables into numeric ones is called **one\-hot encoding**. The method works by creating new variables, sometimes called **dummy variables** which are boolean, one for each possible category. Suppose a dataset has a categorical variable *Job* (Figure [5\.14](preprocessing.html#fig:onehotenc)) with three possible values: *programmer*, *teacher*, and *dentist*. This variable can be one\-hot encoded by creating \\(3\\) new boolean dummy variables and setting them to \\(1\\) for the corresponding category and \\(0\\) for the rest.
FIGURE 5\.14: One\-hot encoding example
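A minimal base R sketch of this idea uses `model.matrix()` on made\-up values; the `- 1` in the formula keeps one dummy column per category:
```
# One-hot encode a hypothetical Job variable.
job <- factor(c("programmer", "teacher", "dentist", "teacher"))
model.matrix(~ job - 1) # One 0/1 column per category.
```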
You should be aware of the dummy variable trap which means that one variable can be predicted from the others. For example, if the possible values are just *male* and *female*, then if the dummy variable for *male* is \\(1\\), we know that the dummy variable for *female* must be \\(0\\). The solution to this is to drop one of the newly created variables. Which one? It does not matter which one. This trap only applies when the variable is a predictor. If it is a response variable, nothing should be dropped.
Figure [5\.15](preprocessing.html#fig:variableConversion) presents a guideline for how to convert non\-numeric variables into numeric ones for classification tasks. This is only a guideline and the actual process will depend on each application.
FIGURE 5\.15: Variable conversion guidelines.
The `caret` package has a function `dummyVars()` that can be used to one\-hot encode the categorical variables of a data frame. Since the *STUDENTS’ MENTAL HEALTH* dataset ([Nguyen et al. 2019](#ref-Minh2019)) has several categorical variables, it can be used to demonstrate how to apply `dummyVars()`. This dataset collected at a University in Japan contains survey responses from students about their mental health and help\-seeking behaviors. We begin by loading the data.
```
# Load students mental health behavior dataset.
# stringsAsFactors is set to F since the function
# that we will use to one-hot encode expects characters.
dataset <- read.csv(file.path(datasets_path,
"students_mental_health",
"data.csv"),
stringsAsFactors = F)
```
Note that the `stringsAsFactors` parameter is set to `FALSE`. This is necessary because `dummyVars()` needs characters to work properly. Before one\-hot encoding the variables, we need to do some preprocessing to clean the dataset. This dataset contains several fields with empty strings (""). Thus, we will replace them with `NA` using the `replace_with_na_all()` function from the `naniar` package. This package was first described in the missing values section of this chapter, but that function was not mentioned there. The function takes the dataset as its first argument and a formula with a condition as its second.
```
# The dataset contains several empty strings.
# Replace those empty strings with NAs so the following
# methods will work properly.
# We can use the replace_with_na_all() function
# from naniar package to do the replacement.
library(naniar)
dataset <- replace_with_na_all(dataset,
~.x %in% common_na_strings)
```
In this case, the condition is `~.x %in% common_na_strings` which means: replace all fields that contain one of the characters in `common_na_strings`. The variable `common_na_strings` contains a set of common strings that can be regarded as missing values, for example ‘NA’, ‘na’, ‘NULL’, empty strings, and so on. Now, we can use the `vis_miss()` function described in the missing values section to get a visual idea of the missing values.
```
# Visualize missing values.
vis_miss(dataset, warn_large_data = F)
```
FIGURE 5\.16: Missing values in the students mental health dataset.
Figure [5\.16](preprocessing.html#fig:mentalmissing) shows the output plot. We can see that the last rows contain many missing values so we will discard them and only keep the first rows (\\(1\-268\\)).
```
# Since the last rows starting at 269
# are full of missing values we will discard them.
dataset <- dataset[1:268,]
```
As an example, we will one\-hot encode the *Stay\_Cate* variable which represents how long a student has been at the university: 1 year (Short), 2–3 years (Medium), or at least 4 years (Long). The `dummyVars()` function takes a formula as its first argument. Here, we specify that we only want to convert `Stay_Cate`. This function does not do the actual encoding but returns an object that is used with `predict()` to obtain the encoded variable(s) as a new data frame.
```
# One-hot encode the Stay_Cate variable.
# This variable Stay_Cate has three possible
# values: Long, Short and Medium.
# First, create a dummyVars object with the dummyVars()
#function from caret package.
library(caret)
dummyObj <- dummyVars( ~ Stay_Cate, data = dataset)
# Perform the actual encoding using predict()
encodedVars <- data.frame(predict(dummyObj,
newdata = dataset))
```
FIGURE 5\.17: One\-hot encoded *Stay\_Cate*.
If we inspect the resulting data frame (Figure [5\.17](preprocessing.html#fig:stayCate1)), we see that it has \\(3\\) variables, one for each possible value: Long, Medium, and Short. If this variable is used as a predictor variable, we should delete one of its columns to avoid the dummy variable trap. We can do this by setting the parameter `fullRank = TRUE`.
```
dummyObj <- dummyVars( ~ Stay_Cate, data = dataset, fullRank = TRUE)
encodedVars <- data.frame(predict(dummyObj, newdata = dataset))
```
FIGURE 5\.18: One\-hot encoded *Stay\_Cate* dropping one of the columns.
In this situation, the column with ‘Long’ was discarded (Figure [5\.18](preprocessing.html#fig:stayCate2)). If you want to one\-hot encode all variables at once you can use `~ .` as the formula. But be aware that the dataset may have some categories encoded as numeric and thus they will not be transformed. For example, the *Age\_cate* variable encodes age categories but the categories are represented as integers from \\(1\\) to \\(5\\). In this case, it may be OK not to encode this variable since lower integer numbers also imply smaller ages and bigger integer numbers represent older ages. If you still want to encode this variable you could first convert it to character by appending a letter at the beginning. Sometimes you should encode a variable, for example, if it represents colors. In that situation, it does not make sense to leave it as numeric since there is no semantic order between colors.
Actually, in some very rare situations, it would make sense to leave color categories as integers. For example, if they represent a gradient like white, light blue, blue, dark blue, and black in which case this could be treated as an ordinal variable.
5\.7 Summary
------------
Programming functions that train predictive models expect the data to be in a particular format. Furthermore, some methods make assumptions about the data like having no missing values, having all variables in the same scale, and so on. This chapter presented several commonly used methods to preprocess datasets before using them to train models.
* When collecting data from different sensors, we can face several sources of variation like **sensors’ format**, **different sampling rates**, **different scales**, and so on.
* Some preprocessing methods can lead to **information injection**. This happens when information from the train set is leaked to the test set.
* **Missing values** is a common problem in many data analysis tasks. In R, the `naniar` package can be used to spot missing values.
* **Imputation** is the process of inferring missing values. The `simputation` package can be used to impute missing values in datasets.
* **Normalization** is the process of transforming a set of variables to a common scale. For example from \\(0\\) to \\(1\\).
* An **imbalanced dataset** has a disproportionate number of classes of a certain type with respect to the others. Some methods like **random over/under sampling** and **SMOTE** can be used to balance a dataset.
* **One\-hot\-encoding** is a method that converts categorical variables into numeric ones.
5\.1 Missing Values
-------------------
Many datasets will have missing values and we need ways to identify and deal with that. Missing data could be due to faulty sensors, processing errors, unavailable information, and so on. In this section, I present some tools that ease the identification of missing values. Later, some imputation methods used to fill in the missing values are presented.
To demonstrate some of these concepts, the *SHEEP GOATS* dataset ([Kamminga et al. 2017](#ref-kamminga2017)) will be used. Due to its big size, the files of this dataset are not included with the accompanying book files but they can be downloaded from [https://easy.dans.knaw.nl/ui/datasets/id/easy\-dataset:76131](https://easy.dans.knaw.nl/ui/datasets/id/easy-dataset:76131). The data were released as part of a study about animal behaviors. The researchers placed inertial sensors on sheep and goats and tracked their behavior during one day. They also video\-recorded the session and annotated the data with different types of behaviors such as *grazing*, *fighting*, *scratch\-biting*, etc. The device was placed on the neck with a random orientation and it collected acceleration, orientation, magnetic field, temperature, and barometric pressure. Figure [5\.1](preprocessing.html#fig:sheepsensor) shows a schematic view of the setting.
FIGURE 5\.1: Device placed on the neck of the sheep. (Author: LadyofHats. Source: Wikipedia (CC0 1\.0\)).
We will start by loading a .csv file that corresponds to one of the sheep and check if there are missing values. The `naniar` package ([Tierney et al. 2019](#ref-naniar)) offers a set of different functions to explore and deal with missing values. The `gg_miss_var()` function allows you to quickly check which variables have missing values and how many. The following code loads the data and then plots the number of missing values in each variable.
```
library(naniar)
# Path to S1.csv file.
datapath <- file.path(datasets_path,
"sheep_goats","S1.csv")
# Can take some seconds to load since the file is big.
df <- read.csv(datapath, stringsAsFactors = TRUE)
# Plot missing values.
gg_miss_var(df)
```
FIGURE 5\.2: Missing values counts.
Figure [5\.2](preprocessing.html#fig:ggmissvar) shows the resulting output. The plot shows that there are missing values in four variables: *pressure*, *cz*, *cy*, and *cx*. The last three correspond to the compass (magnetometer). For *pressure*, the number of missing values is more than \\(2\\) million! For the rest, it is a bit less (more than \\(1\\) million).
To further explore this issue, we can plot each observation in a row with the function `vis_miss()`.
```
# Select first 1000 rows.
# It can take some time to plot bigger data frames.
vis_miss(df[1:1000,])
```
FIGURE 5\.3: Rows with missing values.
Figure [5\.3](preprocessing.html#fig:vismiss) shows every observation per row, with missing values (if any) colored black. From this image, the missing values appear to be systematic: there is a clear striped pattern, especially for the compass variables. Based on these observations, they do not look like random sensor failures or random noise.
If we explore the data frame’s values, for example with the RStudio viewer (Figure [5\.4](preprocessing.html#fig:missvaluesdf)), two things can be noted. First, for the compass values, there is a missing value for each present value. Thus, it looks like \\(50\\%\\) of compass values are missing. Second, for *pressure*, there seem to be \\(7\\) missing values for each available value.
FIGURE 5\.4: Displaying the data frame in RStudio. Source: Data from Kamminga, MSc J.W. (University of Twente) (2017\): Generic online animal activity recognition on collar tags. DANS. [https://doi.org/10\.17026/dans\-zp6\-fmna](https://doi.org/10.17026/dans-zp6-fmna)
So, what could be the root cause of those missing values? Remember that at the beginning of this chapter it was mentioned that **one of the sources of variation is sampling rate**. If we look at the data set documentation, all sensors have a sampling rate of \\(200\\) Hz except for the compass and the pressure sensor. The compass has a sampling rate of \\(100\\) Hz. That is half compared to the other sensors! This explains why \\(50\\%\\) of the rows are missing. Similarly, the pressure sensor has a sampling rate of \\(25\\) Hz. By visualizing and then inspecting the missing data, we have just found out that the missing values are not caused by random noise or sensor failures but because some sensors are not as fast as others!
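We can double\-check this explanation directly from the data. Assuming `df` is the data frame loaded above, the proportion of missing entries should match the sampling\-rate ratios:

```
# Sanity check (assuming df is the data frame loaded above).
# The compass runs at half the rate of the other sensors,
# so about 50% of its values should be missing.
mean(is.na(df$cx)) # Expected to be close to 0.5.
# The pressure sensor runs at 25 Hz vs. 200 Hz,
# so about 7 out of every 8 values should be missing.
mean(is.na(df$pressure)) # Expected to be close to 0.875.
```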
Now that we know there are missing values we need to decide what to do with them. The following subsection lists some ways to deal with missing values.
### 5\.1\.1 Imputation
Imputation is the process of filling in missing values. One of the reasons for imputing missing values is that some predictive models cannot deal with missing data. Another reason is that it may help in increasing the predictions’ performance, for example, if we are trying to predict the sheep behavior from a discrete set of categories based on the inertial data. There are different ways to handle missing values:
* **Discard rows.** If the rows with missing values are not too many, they can simply be discarded.
* **Mean value.** Fill the missing values with the mean value of the corresponding variable. This method is simple and can be effective. One of the problems with this method is that it is sensitive to outliers (as it is the arithmetic mean).
* **Median value.** The median is robust against outliers, thus, it can be used instead of the arithmetic mean to fill the gaps.
* **Replace with the closest value.** For timeseries data, as is the case of the sheep readings, one could also replace missing values with the closest known value.
* **Predict the missing values.** Use the other variables to predict the missing one. This can be done by training a predictive model. A regressor if the variable is numeric or a classifier if the variable is categorical.
Another problem with the mean and median values is that they can be correlated with other variables, for example, with the class that we want to predict. One way to avoid this is to compute the mean (or median) for each class, but still, some hidden correlations may bias the estimates.
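Before moving to model\-based imputation, the following is a minimal sketch (not part of the book’s scripts) of median imputation in which the parameter is learned from the train set only, in line with the advice at the end of this section:

```
# Median imputation sketch. The median (the parameter) is learned
# from the train set only and then used to fill the gaps in both
# sets, which avoids information injection.
median.impute <- function(train.col, test.col){
  m <- median(train.col, na.rm = TRUE) # Learned from train set.
  train.col[is.na(train.col)] <- m
  test.col[is.na(test.col)] <- m
  return(list(train = train.col, test = test.col))
}
```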
In R, the `simputation` package ([van der Loo 2019](#ref-simputation)) has implemented various imputation techniques including: group\-wise median imputation, model\-based with linear regression, random forests, etc. The following code snippet (complete code is in `preprocessing.R`) uses the `impute_lm()` method to impute the missing values in the sheep data using linear regression.
```
library(simputation)
# Replace NaN with NAs.
# Since missing values are represented as NaN,
# first we need to replace them with NAs.
# Code to replace NaN with NA was taken from Hong Ooi:
# https://stackoverflow.com/questions/18142117/#
# how-to-replace-nan-value-with-zero-in-a-huge-data-frame/18143097
is.nan.data.frame <- function(x)do.call(cbind, lapply(x, is.nan))
df[is.nan(df)] <- NA
# Use simputation package to impute values.
# The first 4 columns are removed since we
# do not want to use them as predictor variables.
imp_df <- impute_lm(df[,-c(1:4)],
cx + cy + cz + pressure ~ . - cx - cy - cz - pressure)
# Print summary.
summary(imp_df)
```
Originally, the missing values are encoded as `NaN` but in order to use the `simputation` package functions, we need them as `NA`. First, `NaNs` are replaced with `NA`. The first argument of `impute_lm()` is a data frame and the second argument is a formula. We discard the first \\(4\\) variables of the data frame since we do not want to use them as predictors. The left\-hand side of the formula (everything before the \~ symbol) specifies the variables we want to impute. The right\-hand side specifies the variables used to build the linear models. The ‘.’ indicates that we want to use all variables while the ‘\-’ is used to specify variables that we do not want to include. The vignettes[10](#fn10) of the package contain more detailed examples.
The mean, median, etc. and the predictive models to infer missing values should be trained using data only from the train set to avoid information injection.
5\.2 Smoothing
--------------
Smoothing comprises a set of algorithms with the aim of highlighting patterns in the data or as a preprocessing step to clean the data and remove noise. These methods are widely used on timeseries data but also with spatio\-temporal data such as images. With timeseries data, they are often used to emphasize long\-term patterns and reduce short\-term signal artifacts. For example, in Figure [5\.5](preprocessing.html#fig:smoothingStock)[11](#fn11) a stock chart was smoothed using two methods: moving average and exponential moving average. The smoothed versions make it easier to spot the overall trend rather than focusing on short\-term variations.
FIGURE 5\.5: Stock chart with two smoothed versions. One with moving average and the other one with an exponential moving average. (Author: Alex Kofman. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
The most common smoothing method for timeseries is the **simple moving average**. With this method, the first element of the resulting smoothed series is computed by taking the average of the elements within a window of predefined size. The window’s position starts at the first element of the original series. The second element is computed in the same way but after moving the window one position to the right. Figure [5\.6](preprocessing.html#fig:movavgsteps) shows this procedure on a series with \\(5\\) elements and a window of size \\(3\\). After the third iteration, it is not possible to move the window one more step to the right while covering \\(3\\) elements since the end of the timeseries has been reached. Because of this, the smoothed series will have some missing values at the end. Specifically, it will have \\(w\-1\\) fewer elements where \\(w\\) is the window size. A simple solution is to compute the average of the elements covered by the window even if they are less than the window size.
FIGURE 5\.6: Simple moving average step by step with window size \= 3\. Top: original array; bottom: smoothed array.
In the previous example the average is taken from the elements to the right of the pointer. There is a variation called *centered moving average* in which the center point of the window has the same elements to the left and right (Figure [5\.7](preprocessing.html#fig:centeredmovavg)). Note that with this version of moving average some values at the beginning and at the end will be empty. Also note that the window size should be odd. In practice, both versions produce very similar results.
FIGURE 5\.7: Centered moving average step by step with window size \= 3\.
In the `preprocessing.R` script, the function `movingAvg()` implements the simple moving average procedure. In the following code, note that the output vector will have the same size as the original one, but the last elements will contain `NA` values when the window cannot be moved any longer to the right.
```
movingAvg <- function(x, w = 5){
# Applies moving average to x with a window of size w.
n <- length(x) # Total number of points.
smoothedX <- rep(NA, n)
for(i in 1:(n-w+1)){
smoothedX[i] <- mean(x[i:(i-1+w)])
}
return(smoothedX)
}
```
We can apply this function to a segment of accelerometer data from the *SHEEP AND GOATS* data set.
```
datapath <- "../Sheep/S1.csv"
df <- read.csv(datapath)
# Only select a subset of the whole series.
dfsegment <- df[df$timestamp_ms < 6000,]
x <- dfsegment$ax
# Compute simple moving average with a window of size 21.
smoothed <- movingAvg(x, w = 21)
```
Figure [5\.8](preprocessing.html#fig:smoothingExample) shows the result after plotting both the original vector and the smoothed one. It can be observed that many of the small peaks are no longer present in the smoothed version. The window size is a parameter that needs to be defined by the user. If it is set too large some important information may be lost from the signal.
FIGURE 5\.8: Original time series and smoothed version using a moving average window of size 21\.
One of the disadvantages of this method is that the arithmetic mean is sensitive to noise. Instead of computing the mean, one can use the median which is more robust against outlier values. There also exist other derived methods (not covered here) such as weighted moving average and exponential moving average[12](#fn12) which assign more importance to data points closer to the central point in the window. Smoothing a signal before feature extraction is a common practice and is used to remove some of the unwanted noise.
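As an illustration, a possible implementation of the exponential moving average is sketched below (this function is an assumption, not part of the accompanying scripts). The parameter `alpha` controls how quickly older observations are discounted:

```
# Exponential moving average sketch.
# Larger alpha values give more weight to recent observations.
expMovingAvg <- function(x, alpha = 0.2){
  n <- length(x)
  smoothedX <- numeric(n)
  smoothedX[1] <- x[1] # Initialize with the first observation.
  for(i in 2:n){
    smoothedX[i] <- alpha * x[i] + (1 - alpha) * smoothedX[i-1]
  }
  return(smoothedX)
}
# Example usage: smoothed <- expMovingAvg(x, alpha = 0.1)
```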
5\.3 Normalization
------------------
Having variables on different scales can have an impact during learning and at inference time. Consider a study where the data was collected using a wristband that has a light sensor and an accelerometer. The measurement unit of the light sensor is *lux* whereas the accelerometer’s is \\(m/s^2\\). After inspecting the dataset, you realize that the *min* and *max* values of the light sensor are \\(0\\) and \\(155\\), respectively. The *min* and *max* values for the accelerometer are \\(\-0\.4\\) and \\(7\.45\\), respectively. Why is this a problem? Well, several learning methods are based on distances, such as \\(k\\)\-NN and Nearest centroid; thus, distances will be more heavily affected by variables with bigger scales. Furthermore, other methods like neural networks (covered in chapter [8](deeplearning.html#deeplearning)) are also affected by different scales. They have a harder time learning their parameters (weights) when data is not normalized. On the other hand, some methods are not affected, for example, tree\-based learners such as decision trees and random forests. Since most of the time you may want to try different methods, it is a good idea to normalize your predictor variables.
A common normalization technique is to scale all the variables between \\(0\\) and \\(1\\). Suppose there is a numeric vector \\(x\\) that you want to normalize between \\(0\\) and \\(1\\). Let \\(max(x)\\) and \\(min(x)\\) be the maximum and minimum values of \\(x\\). The following can be used to normalize the \\(i^{th}\\) value of \\(x\\):
\\\[\\begin{equation}
z\_i \= \\frac{x\_i \- min(x)}{max(x)\-min(x)}
\\end{equation}\\]
where \\(z\_i\\) is the new normalized \\(i^{th}\\) value. Thus, the formula is applied to every value in \\(x\\). The \\(max(x)\\) and \\(min(x)\\) values are parameters learned from the data. Notice that if you split your data into training and test sets, the *max* and *min* values (the parameters) are learned only from the train set and then used to normalize both the train and test set. This is to avoid information injection (section [5\.5](preprocessing.html#infoinjection)). Also be aware that after the parameters are learned from the train set, and once the model is deployed in production, it is likely that some input values will be ‘out of range’. If the train set is not very representative of what you will find in real life, some values will probably be smaller than the learned \\(min(x)\\) and some will be greater than the learned \\(max(x)\\). Even if the train set is representative of the real\-life phenomenon, there is nothing that will prevent some values from being out of range. A simple way to handle this is to truncate the values. In some cases, we do know what the possible minimum and maximum values are. For example, in image processing, images are usually represented as color intensities between \\(0\\) and \\(255\\). Here, we know that the *min* value cannot be less than \\(0\\) and the *max* value cannot be greater than \\(255\\).
Let’s see an example using the *HOME TASKS* dataset. The following code first loads the dataset and prints a summary of the first \\(4\\) variables.
```
# Load home activities dataset.
dataset <- read.csv(file.path(datasets_path,
"home_tasks",
"sound_acc.csv"),
stringsAsFactors = T)
# Check first 4 variables' min and max values.
summary(dataset[,1:4])
#> label v1_mfcc1 v1_mfcc2 v1_mfcc3
#> brush_teeth :180 Min. :103 Min. :-17.20 Min. :-20.90
#> eat_chips :282 1st Qu.:115 1st Qu.: -8.14 1st Qu.: -7.95
#> mop_floor :181 Median :120 Median : -3.97 Median : -4.83
#> sweep :178 Mean :121 Mean : -4.50 Mean : -5.79
#> type_on_keyboard:179 3rd Qu.:126 3rd Qu.: -1.30 3rd Qu.: -3.09
#> wash_hands :180 Max. :141 Max. : 8.98 Max. : 3.27
#> watch_tv :206
```
Since *label* is a categorical variable, the class counts are printed. For the three remaining variables, we get some statistics including their *min* and *max* values. As we can see, the min value of *v1\_mfcc1* is very different from the *min* value of *v1\_mfcc2* and the same is true for the maximum values. Thus, we want all variables to be between \\(0\\) and \\(1\\) in order to use classification methods sensitive to different scales. Let’s assume we want to train a classifier with this data so we divide it into train and test sets:
```
# Divide into 50/50% train and test set.
set.seed(1234)
folds <- sample(2, nrow(dataset), replace = T)
trainset <- dataset[folds == 1,]
testset <- dataset[folds == 2,]
```
Now we can define a function that normalizes every numeric or integer variable and skips variables of any other type. The function takes as input a train set and a test set. The parameters (*max* and *min*) are learned from the train set and used to normalize both the train and test sets.
```
# Define a function to normalize the train and test set
# based on the parameters learned from the train set.
normalize <- function(trainset, testset){
# Iterate columns
for(i in 1:ncol(trainset)){
c <- trainset[,i] # trainset column
c2 <- testset[,i] # testset column
# Skip if the variable is not numeric or integer.
if(class(c) != "numeric" && class(c) != "integer")next;
# Learn the max value from the trainset's column.
max <- max(c, na.rm = T)
# Learn the min value from the trainset's column.
min <- min(c, na.rm = T)
# If all values are the same set it to max.
if(max==min){
trainset[,i] <- max
testset[,i] <- max
}
else{
# Normalize trainset's column.
trainset[,i] <- (c - min) / (max - min)
# Truncate max values in testset.
idxs <- which(c2 > max)
if(length(idxs) > 0){
c2[idxs] <- max
}
# Truncate min values in testset.
idxs <- which(c2 < min)
if(length(idxs) > 0){
c2[idxs] <- min
}
# Normalize testset's column.
testset[,i] <- (c2 - min) / (max - min)
}
}
return(list(train=trainset, test=testset))
}
```
Now we can use the previous function to normalize the train and test sets. The function returns a list with two elements: the normalized train and test sets.
```
# Call our function to normalize each set.
normalizedData <- normalize(trainset, testset)
# Inspect the normalized train set.
summary(normalizedData$train[,1:4])
#> label v1_mfcc1 v1_mfcc2 v1_mfcc3
#> brush_teeth : 88 Min. :0.000 Min. :0.000 Min. :0.000
#> eat_chips :139 1st Qu.:0.350 1st Qu.:0.403 1st Qu.:0.527
#> mop_floor : 91 Median :0.464 Median :0.590 Median :0.661
#> sweep : 84 Mean :0.474 Mean :0.568 Mean :0.616
#> type_on_keyboard: 94 3rd Qu.:0.613 3rd Qu.:0.721 3rd Qu.:0.730
#> wash_hands :102 Max. :1.000 Max. :1.000 Max. :1.000
#> watch_tv : 99
# Inspect the normalized test set.
summary(normalizedData$test[,1:4])
#> label v1_mfcc1 v1_mfcc2 v1_mfcc3
#> brush_teeth : 92 Min. :0.0046 Min. :0.000 Min. :0.000
#> eat_chips :143 1st Qu.:0.3160 1st Qu.:0.421 1st Qu.:0.500
#> mop_floor : 90 Median :0.4421 Median :0.606 Median :0.644
#> sweep : 94 Mean :0.4569 Mean :0.582 Mean :0.603
#> type_on_keyboard: 85 3rd Qu.:0.5967 3rd Qu.:0.728 3rd Qu.:0.724
#> wash_hands : 78 Max. :0.9801 Max. :1.000 Max. :1.000
#> watch_tv :107
```
Now, all numeric variables in the train set are exactly between \\(0\\) and \\(1\\). For the test set, not all *min* values will be exactly \\(0\\) but a bit higher. Conversely, some *max* values will be lower than \\(1\\). This is because the test set may have a *min* value that is greater than the *min* value of the train set and a *max* value that is smaller than the *max* value of the train set. However, after normalization, all values are guaranteed to be within \\(0\\) and \\(1\\).
5\.4 Imbalanced Classes
-----------------------
Ideally, classes will be uniformly distributed, that is, there is approximately the same number of instances per class. In real life (as always), this is not the case. And in many situations (more often than you may think), **class counts are heavily skewed**. When this happens the dataset is said to be imbalanced. Take bank transactions as an example. Most of them will be normal, whereas a small percent will be fraudulent. In the medical field this is very common. It is easier to collect samples from healthy individuals compared to samples from individuals with some rare conditions. For example, a database may have thousands of images from healthy tissue but just a dozen with signs of cancer. Of course, having just a few cases with diseases is a good thing for the world, but not for machine learning methods. This is because predictive models will try to learn their parameters such that the error is reduced, and most of the time this error is based on accuracy. Thus, the models will be biased towards making correct predictions for the majority classes (the ones with higher counts) while paying little attention to minority classes. This is a problem because for some applications we are more interested in detecting the minority classes (illegal transactions, cancer cases, etc.).
Suppose a given database has \\(998\\) instances with class *‘no cancer’* and only \\(2\\) instances with class *‘cancer’*. A trivial classifier that always predicts *‘no cancer’* will have an accuracy of \\(99\.8\\%\\) but will not be able to detect any of the *‘cancer’* cases! So, what can we do?
* **Collect more data from the minority class.** In practice, this can be difficult, expensive, etc. or just impossible because the study was conducted a long time ago and it is no longer possible to replicate the context.
* **Delete data from the majority class.** Randomly discard instances from the majority class. In the previous example, we could discard \\(996\\) instances of type *‘no cancer’*. The problem with this is that we end up with insufficient data to learn good predictive models. If you have a huge dataset this can be an option, but in practice, this is rarely the case and you have the risk of having underrepresented samples.
* **Create synthetic data.** One of the most common solutions is to create synthetic data from the minority classes. In the following sections two methods that do that will be discussed: *random oversampling* and *Synthetic Minority Oversampling Technique (SMOTE)*.
* **Adapt your learning algorithm.** Another option is to use an algorithm that takes into account class counts and weights them accordingly. This is called *cost\-sensitive classification*. For example, the `rpart()` method to train decision trees has a `weights` parameter which can be used to assign more weight to instances of the minority classes (see the sketch after this list). When training neural networks it is also possible to assign different weights to different classes.
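A hedged sketch of cost\-sensitive training with `rpart` is shown below; the data frame `trainset` and its `label` column are hypothetical placeholders, and the weight of \\(10\\) is an arbitrary choice:

```
library(rpart)
# Give each minority class row ten times the weight of a
# majority class row (trainset and label are placeholders).
w <- ifelse(trainset$label == "cancer", 10, 1)
model <- rpart(label ~ ., data = trainset, weights = w)
```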
The following two subsections cover two techniques to create synthetic data.
### 5\.4\.1 Random Oversampling
`shiny_random-oversampling.Rmd`
This method consists of duplicating data points from the minority class. The following code will create an imbalanced dataset with \\(200\\) instances of class *‘class1’* and only \\(15\\) instances of class *‘class2’*.
```
set.seed(1234)
# Create random data
n1 <- 200 # Number of points of majority class.
n2 <- 15 # Number of points of minority class.
# Generate random values for class1.
x <- rnorm(mean = 0, sd = 0.5, n = n1)
y <- rnorm(mean = 0, sd = 1, n = n1)
df1 <- data.frame(label=rep("class1", n1),
x=x, y=y, stringsAsFactors = T)
# Generate random values for class2.
x2 <- rnorm(mean = 1.5, sd = 0.5, n = n2)
y2 <- rnorm(mean = 1.5, sd = 1, n = n2)
df2 <- data.frame(label=rep("class2", n2),
x=x2, y=y2, stringsAsFactors = T)
# This is our imbalanced dataset.
imbalancedDf <- rbind(df1, df2)
# Print class counts.
summary(imbalancedDf$label)
#> class1 class2
#> 200 15
```
If we want to exactly balance the class counts, we will need \\(185\\) additional instances of type *‘class2’*. We can use our well\-known `sample()` function to pick \\(185\\) points from data frame `df2` (which contains only instances of class *‘class2’*) and store them in `new.points`. Notice the `replace = T` parameter. This allows the function to pick repeated elements. Then, the new data points are appended to the imbalanced dataset which now becomes balanced.
```
# Generate new points from the minority class.
new.points <- df2[sample(nrow(df2), size = 185, replace = T),]
# Add new points to the imbalanced dataset and save the
# result in balancedDf.
balancedDf <- rbind(imbalancedDf, new.points)
# Print class counts.
summary(balancedDf$label)
#> class1 class2
#> 200 200
```
The code associated with this chapter includes a shiny app[13](#fn13) `shiny_random-oversampling.Rmd`. Shiny apps are interactive web applications. This shiny app graphically demonstrates how random oversampling works. Figure [5\.9](preprocessing.html#fig:shinyOversampling) depicts the shiny app. The user can move the slider to generate new data points. Please note that the boundaries do not change as the number of instances increases (or decreases). This is because the new points are just duplicates so they overlap with existing ones.
FIGURE 5\.9: Shiny app with random oversampling example.
It is a common mistake to generate synthetic data on the entire dataset before splitting into train and test sets. This will cause your model to be highly overfitted since several duplicate data points can end up in both sets. Create synthetic data *only* from the *train set*.
Random oversampling is simple and effective in many cases. A potential problem is that the models can overfit since there are many duplicate data points. To overcome this, the SMOTE method creates entirely new instances instead of duplicating them.
### 5\.4\.2 SMOTE
`shiny_smote-oversampling.Rmd`
SMOTE is another method that can be used to augment the data points from the minority class ([Chawla et al. 2002](#ref-chawla2002smote)). One of the limitations of random oversampling is that it creates duplicates. This has the effect of having fixed boundaries and the classifiers can overspecialize. To avoid this, SMOTE creates entirely new data points.
SMOTE operates on the feature space (on the predictor variables). To generate a new point, take the difference between a given point \\(a\\) (taken from the minority class) and one of its randomly selected nearest neighbors \\(b\\). The difference is multiplied by a random number between \\(0\\) and \\(1\\) and added to \\(a\\). This has the effect of selecting a point along the line between \\(a\\) and \\(b\\). Figure [5\.10](preprocessing.html#fig:newpoint) illustrates the procedure of generating a new point in two dimensions.
FIGURE 5\.10: Synthetic point generation.
The number of nearest neighbors \\(k\\) is a parameter defined by the user. In their original work ([Chawla et al. 2002](#ref-chawla2002smote)), the authors set \\(k\=5\\). Depending on how many new samples need to be generated, \\(k'\\) neighbors are randomly selected from the original \\(k\\) nearest neighbors. For example, if \\(200\\%\\) oversampling is needed, \\(k'\=2\\) neighbors are selected at random out of the \\(k\=5\\) and one data point is generated with each of them. This is performed for each data point in the minority class.
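The core of the generation step can be sketched in a few lines of R (a toy example with made\-up points, not the book’s implementation):

```
# a is a minority class point and b one of its nearest neighbors.
a <- c(1.0, 2.0)
b <- c(2.0, 3.5)
gap <- runif(1) # Random number between 0 and 1.
new.point <- a + gap * (b - a) # A point on the line between a and b.
```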
An implementation of SMOTE is also provided in `auxiliary_functions/functions.R`. An example of its application can be found in `preprocessing.R` in the corresponding directory of this chapter’s code. The `smote.class(completeDf, targetClass, N, k)` function has several arguments. The first one is the data frame that contains the minority and majority classes, that is, the complete dataset. The second argument is the minority class label. The third argument `N` is the oversampling percentage and the last one (`k`) is the number of nearest neighbors to consider.
The following code shows how the function `smote.class()` can be used to generate new points from the imbalanced dataset that was introduced in the previous section ‘Random Oversampling’. Recall that it has \\(200\\) points of class *‘class1’* and \\(15\\) points of class *‘class2’*.
```
# To balance the dataset, we need to oversample 1200%.
# This means that the method will create 12 * 15 new points.
ceiling(180 / 15) * 100
#> [1] 1200
# Percent to oversample.
N <- 1200
# Generate new data points.
synthetic.points <- smote.class(imbalancedDf,
targetClass = "class2",
N = N,
k = 5)$synthetic
# Append the new points to the original dataset.
smote.balancedDf <- rbind(imbalancedDf,
synthetic.points)
# Print class counts.
summary(smote.balancedDf$label)
#> class1 class2
#> 200 195
```
The parameter `N` is set to \\(1200\\). This will create \\(12\\) new data points for every minority class instance (\\(15\\)). Thus, the method will return \\(180\\) instances. In this case, \\(k\\) is set to \\(5\\). Finally, the new points are appended to the imbalanced dataset having a total of \\(195\\) samples of class ‘class2’.
Again, a shiny app is included with this chapter’s code. Figure [5\.11](preprocessing.html#fig:shinySMOTE) shows the distribution of the original points and after applying SMOTE. Note how the boundary of *‘class2’* changes after applying SMOTE. It slightly spans in all directions. This is particularly visible in the lower right corner. This boundary expansion is what allows the classifiers to generalize better as compared to training them using random oversampled data.
FIGURE 5\.11: Shiny app with SMOTE example. a) Before applying SMOTE. b) After applying SMOTE.
5\.5 Information Injection
--------------------------
The purpose of dividing the data into train/validation/test sets is to accurately estimate the generalization performance of a predictive model when it is presented with previously unseen data points. So, it is advisable to construct such set splits in a way that they are as independent as possible. Often, before training a model and generating predictions, the data needs to be preprocessed. Preprocessing operations may include imputing missing values, normalizing, and so on. During those operations, some information can be inadvertently transferred from the train to the test set thus, violating the assumption that they are independent.
Information injection occurs when information from the train set is transferred to the test set. When having train/validation/test sets, information injection occurs when information from the train set leaks into the validation and/or test set. It also happens when information from the validation set is transferred to the test set.
Suppose that as one of the preprocessing steps, you need to subtract the mean value of a feature for each instance. For now, suppose a dataset has a single feature \\(x\\) of numeric type and a categorical response variable \\(y\\). The dataset has \\(n\\) rows. As a preprocessing step, you decide that you need to subtract the mean of \\(x\\) from each data point. Since you want to predict \\(y\\) given \\(x\\), you train a classifier by splitting your data into train and test sets as usual. So you proceed with the steps depicted in Figure [5\.12](preprocessing.html#fig:injection1).
FIGURE 5\.12: Information injection example. a) Parameters are learned from the entire dataset. b) The dataset is split into train/test sets. c) The learned parameters are applied to both sets and information injection occurs.
First, (a) you compute the \\(mean\\) value of the variable \\(x\\) from the entire dataset. This \\(mean\\) is known as the parameter. In this case, there is only one parameter but there could be several. For example, we could additionally need to compute the standard deviation. Once we know the mean value, the dataset is divided into train and test sets (b). Finally, the \\(mean\\) is subtracted from each element in both train and test sets (c). Without realizing it, we have transferred information from the train set to the test set! But, how did this happen? Well, the *mean* parameter was computed using information from the *entire* dataset. Then, that \\(mean\\) parameter was used on the test set, but it was calculated using data points that also belong to that same test set!
Figure [5\.13](preprocessing.html#fig:injection2) shows how to correctly do the preprocessing to avoid information injection. The dataset is first split (a). Then, the \\(mean\\) parameter is calculated only with data points from the train set. Finally, the *mean* parameter is subtracted from both sets. Here, the mean contains information only from the train set.
FIGURE 5\.13: No information injection example. a) The dataset is first split into train/test sets. b) Parameters are learned only from the train set. c) The learned parameters are applied to the test set.
In the previous example, we assumed that the dataset was split into train and test sets only once. The same idea applies when performing \\(k\\)\-fold cross\-validation. In each of the \\(k\\) iterations, the preprocessing parameters need to be learned only from the train split.
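A minimal sketch of the correct order follows (assuming a data frame `dataset` with a numeric column `x`; both names are hypothetical):

```
# Split first, then learn the parameter from the train set only.
set.seed(1234)
idx <- sample(nrow(dataset), size = floor(nrow(dataset) / 2))
trainset <- dataset[idx,]
testset <- dataset[-idx,]
m <- mean(trainset$x, na.rm = TRUE) # Learned from the train set only.
trainset$x <- trainset$x - m
testset$x <- testset$x - m # Same parameter applied to the test set.
```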
5\.6 One\-hot Encoding
----------------------
Several algorithms need some or all of their input variables to be in numeric format, whether the response and/or the predictor variables. In R, for most classification algorithms, the class is usually encoded as a factor but some implementations may require it to be numeric. Sometimes there may be categorical predictor variables such as gender (*‘male’*, *‘female’*). Some algorithms need those to be in numeric format because, for example, they are based on distance computations, such as \\(k\\)\-NN. Other models, like neural networks, need to perform arithmetic operations on the predictor variables.
One way to convert categorical variables into numeric ones is called **one\-hot encoding**. The method works by creating new variables, sometimes called **dummy variables** which are boolean, one for each possible category. Suppose a dataset has a categorical variable *Job* (Figure [5\.14](preprocessing.html#fig:onehotenc)) with three possible values: *programmer*, *teacher*, and *dentist*. This variable can be one\-hot encoded by creating \\(3\\) new boolean dummy variables and setting them to \\(1\\) for the corresponding category and \\(0\\) for the rest.
FIGURE 5\.14: One\-hot encoding example
You should be aware of the dummy variable trap which means that one variable can be predicted from the others. For example, if the possible values are just *male* and *female*, then if the dummy variable for *male* is \\(1\\), we know that the dummy variable for *female* must be \\(0\\). The solution to this is to drop one of the newly created variables. Which one? It does not matter which one. This trap only applies when the variable is a predictor. If it is a response variable, nothing should be dropped.
Figure [5\.15](preprocessing.html#fig:variableConversion) presents a guideline for how to convert non\-numeric variables into numeric ones for classification tasks. This is only a guideline and the actual process will depend on each application.
FIGURE 5\.15: Variable conversion guidelines.
The `caret` package has a function `dummyVars()` that can be used to one\-hot encode the categorical variables of a data frame. Since the *STUDENTS’ MENTAL HEALTH* dataset ([Nguyen et al. 2019](#ref-Minh2019)) has several categorical variables, it can be used to demonstrate how to apply `dummyVars()`. This dataset collected at a University in Japan contains survey responses from students about their mental health and help\-seeking behaviors. We begin by loading the data.
```
# Load students mental health behavior dataset.
# stringsAsFactors is set to F since the function
# that we will use to one-hot encode expects characters.
dataset <- read.csv(file.path(datasets_path,
"students_mental_health",
"data.csv"),
stringsAsFactors = F)
```
Note that the `stringsAsFactors` parameter is set to `FALSE`. This is necessary because `dummyVars()` needs characters to work properly. Before one\-hot encoding the variables, we need to do some preprocessing to clean the dataset. This dataset contains several fields that are empty strings. Thus, we will replace them with `NA` using the `replace_with_na_all()` function from the `naniar` package. This package was first described in the missing values section of this chapter, but that function was not mentioned. The function takes as its first argument the dataset; the second one is a formula that includes a condition.
```
# The dataset contains several empty strings.
# Replace those empty strings with NAs so the following
# methods will work properly.
# We can use the replace_with_na_all() function
# from naniar package to do the replacement.
library(naniar)
dataset <- replace_with_na_all(dataset,
~.x %in% common_na_strings)
```
In this case, the condition is `~.x %in% common_na_strings` which means: replace all fields that contain one of the characters in `common_na_strings`. The variable `common_na_strings` contains a set of common strings that can be regarded as missing values, for example ‘NA’, ‘na’, ‘NULL’, empty strings, and so on. Now, we can use the `vis_miss()` function described in the missing values section to get a visual idea of the missing values.
```
# Visualize missing values.
vis_miss(dataset, warn_large_data = F)
```
FIGURE 5\.16: Missing values in the students mental health dataset.
Figure [5\.16](preprocessing.html#fig:mentalmissing) shows the output plot. We can see that the last rows contain many missing values so we will discard them and only keep the first rows (\\(1\-268\\)).
```
# Since the last rows starting at 269
# are full of missing values we will discard them.
dataset <- dataset[1:268,]
```
As an example, we will one\-hot encode the *Stay\_Cate* variable which represents how long a student has been at the university: 1 year (Short), 2–3 years (Medium), or at least 4 years (Long). The `dummyVars()` function takes a formula as its first argument. Here, we specify that we only want to convert `Stay_Cate`. This function does not do the actual encoding but returns an object that is used with `predict()` to obtain the encoded variable(s) as a new data frame.
```
# One-hot encode the Stay_Cate variable.
# This variable Stay_Cate has three possible
# values: Long, Short and Medium.
# First, create a dummyVars object with the dummyVars()
#function from caret package.
library(caret)
dummyObj <- dummyVars( ~ Stay_Cate, data = dataset)
# Perform the actual encoding using predict()
encodedVars <- data.frame(predict(dummyObj,
newdata = dataset))
```
FIGURE 5\.17: One\-hot encoded *Stay\_Cate*.
If we inspect the resulting data frame (Figure [5\.17](preprocessing.html#fig:stayCate1)), we see that it has \\(3\\) variables, one for each possible value: Long, Medium, and Short. If this variable is used as a predictor variable, we should delete one of its columns to avoid the dummy variable trap. We can do this by setting the parameter `fullRank = TRUE`.
```
dummyObj <- dummyVars( ~ Stay_Cate, data = dataset, fullRank = TRUE)
encodedVars <- data.frame(predict(dummyObj, newdata = dataset))
```
FIGURE 5\.18: One\-hot encoded *Stay\_Cate* dropping one of the columns.
In this situation, the column with ‘Long’ was discarded (Figure [5\.18](preprocessing.html#fig:stayCate2)). If you want to one\-hot encode all variables at once you can use `~ .` as the formula. But be aware that the dataset may have some categories encoded as numeric and thus they will not be transformed. For example, *Age\_cate* encodes age categories but the categories are represented as integers from \\(1\\) to \\(5\\). In this case, it may be acceptable not to encode this variable since lower integer numbers imply smaller ages and bigger integer numbers represent older ages. If you still want to encode this variable you could first convert it to character by appending a letter at the beginning. Sometimes you should encode a variable, for example, if it represents colors. In that situation, it does not make sense to leave it as numeric since there is no semantic order between colors.
Actually, in some very rare situations, it would make sense to leave color categories as integers. For example, if they represent a gradient like white, light blue, blue, dark blue, and black in which case this could be treated as an ordinal variable.
5\.7 Summary
------------
Programming functions that train predictive models expect the data to be in a particular format. Furthermore, some methods make assumptions about the data like having no missing values, having all variables in the same scale, and so on. This chapter presented several commonly used methods to preprocess datasets before using them to train models.
* When collecting data from different sensors, we can face several sources of variation like **sensors’ format**, **different sampling rates**, **different scales**, and so on.
* Some preprocessing methods can lead to **information injection**. This happens when information from the train set is leaked to the test set.
* **Missing values** are a common problem in many data analysis tasks. In R, the `naniar` package can be used to spot missing values.
* **Imputation** is the process of inferring missing values. The `simputation` package can be used to impute missing values in datasets.
* **Normalization** is the process of transforming a set of variables to a common scale. For example from \\(0\\) to \\(1\\).
* An **imbalanced dataset** has a disproportionate number of classes of a certain type with respect to the others. Some methods like **random over/under sampling** and **SMOTE** can be used to balance a dataset.
* **One\-hot\-encoding** is a method that converts categorical variables into numeric ones.
Chapter 6 Discovering Behaviors with Unsupervised Learning
==========================================================
So far, we have been working with supervised learning methods, that is, models for which the training instances have two elements: (1\) a set of input values (features) and (2\) the expected output (label). As mentioned in chapter [1](intro.html#intro), there are other types of machine learning methods and one of those is **unsupervised learning**, which is the topic of this chapter. In unsupervised learning, the training instances do not have a response variable (e.g., a label). Thus, the objective is to extract knowledge from the available data without any type of guidance (supervision). For example, given a set of variables that characterize a person, we would like to find groups of people with similar behaviors. For physical activity behaviors, this could be done by finding groups of very active people versus groups of people with low physical activity. Those groups can be useful for delivering targeted suggestions or services, thus enhancing and personalizing the user experience.
This chapter starts with one of the most popular unsupervised learning algorithms: **\\(k\\)\-means clustering**. Next, an example of how this technique can be applied to find groups of students with similar characteristics is presented. Then, **association rules mining** is presented, which is another type of unsupervised learning method. Finally, association rules are used to find criminal patterns from a homicide database.
6\.1 \\(k\\)\-means Clustering
------------------------------
`kmeans_steps.R`
This is one of the most commonly used unsupervised methods due to its simplicity and efficacy. Its objective is to find groups of points such that points in the same group are similar and points from different groups are as dissimilar as possible. The number of groups \\(k\\) needs to be defined a priori. The method is based on computing distances to **centroids**. The centroid of a set of points is computed by taking the mean of each of their features. The \\(k\\)\-means algorithm is as follows:
```
Generate k centroids at random.
Repeat until no change or max iterations:
Assign each data point to the closest centroid.
Update centroids.
```
To measure the distance between a data point and a centroid, the Euclidean distance is typically used, but other distances can be used as well depending on the application. As an example, let’s cluster user responses from the *STUDENTS’ MENTAL HEALTH* dataset. This database contains questionnaire responses about depression, acculturative stress, social connectedness, and help\-seeking behaviors from students at a University in Japan. To demonstrate how \\(k\\)\-means works, we will only choose two variables so we can plot the results. The variables are *ToAS* (Total Acculturative Stress) and *ToSC* (Total Social Connectedness). The *ToAS* measures the emotional challenges when adapting to a new culture while *ToSC* measures emotional distance with oneself and other people. For the clustering, the parameter \\(k\\) will be set to \\(3\\), that is, we want to group the points into \\(3\\) disjoint groups. The code that implements the \\(k\\)\-means algorithm can be found in the script `kmeans_steps.R`. The algorithm begins by selecting \\(3\\) centroids at random. Figure [6\.1](unsupervised.html#fig:clustIt0) shows a scatterplot of the variables *ToAS* and *ToSC* along with the random centroids.
FIGURE 6\.1: Three centroids chosen randomly.
Next, at the first iteration, each point is assigned to the closest centroid. This is depicted in Figure [6\.2](unsupervised.html#fig:clustIts) (top left). Then, the centroids are updated (moved) based on the new assignments. In the next iteration, the points are reassigned to the closest centroids and so on. Figure [6\.2](unsupervised.html#fig:clustIts) shows the first \\(4\\) iterations of the algorithm.
FIGURE 6\.2: First \\(4\\) iterations of \\(k\\)\-means.
From iteration \\(1\\) to \\(2\\) the centroids moved considerably. After that, they began to stabilize. Formally, the algorithm tries to minimize the total within cluster variation of all clusters. The cluster variation of a single cluster \\(C\_k\\) is defined as:
\\\[\\begin{equation}
W(C\_k) \= \\sum\_{x\_i \\in C\_k}{(x\_i \- \\mu\_k)^2}
\\tag{6\.1}
\\end{equation}\\]
where \\(x\_i\\) is a data point and \\(\\mu\_k\\) is the centroid of cluster \\(C\_k\\). Thus, the total within cluster variation \\(TWCV\\) is:
\\\[\\begin{equation}
TWCV \= \\sum\_{i\=1}^k{W(C\_i)}
\\tag{6\.2}
\\end{equation}\\]
that is, the sum of all within\-cluster variations across all clusters. The objective is to find the \\(\\mu\_k\\) centroids that make \\(TWCV\\) minimal. Finding the global optimum is a difficult problem. However, the iterative algorithm described above often produces good approximations.
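As a side note, base R’s `kmeans()` reports these quantities directly: the `withinss` component contains \\(W(C\_k)\\) for each cluster and `tot.withinss` corresponds to \\(TWCV\\). A quick check (assuming `df` holds the two variables used above):

```
# Run k-means with k = 3 on the two selected variables.
clusters <- kmeans(df[, c("ToAS", "ToSC")], centers = 3)
clusters$withinss # Within-cluster variation of each cluster.
clusters$tot.withinss # Total within cluster variation (TWCV).
```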
### 6\.1\.1 Grouping Student Responses
`group_students.R`
In the previous example, we only used two variables to perform the clustering. Let’s now use more variables from the *STUDENTS’ MENTAL HEALTH* dataset to find groups. The full code can be found in `group_students.R`. After removing missing values, one\-hot encoding categorical variables, and some additional cleaning, the following \\(10\\) variables were selected:
```
# Select which variables are going to be used for clustering.
selvars <- c("Stay","English_cate","Intimate"
,"APD","AHome","APH","Afear",
"ACS","AGuilt","ToAS")
```
Additionally, it is advisable to normalize the data between \\(0\\) and \\(1\\) since we are dealing with distance computations and we want to put the same weight on each variable. To plot the \\(10\\) variables, we can use **MDS** (described in chapter [4](edavis.html#edavis)) to project the data into \\(2\\) dimensions (Figure [6\.3](unsupervised.html#fig:cluster1)).
FIGURE 6\.3: Students responses projected into 2D with MDS.
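The normalization step takes only a couple of lines of base R. Here is a minimal sketch, assuming the cleaned data is in a data frame `df` (the helper name `normalize01` is hypothetical; `normdf` is the name used in the snippets that follow):

```
# Min-max normalization: rescale each selected column to [0, 1].
normalize01 <- function(x) (x - min(x)) / (max(x) - min(x))
normdf <- as.data.frame(lapply(df[, selvars], normalize01))
```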
Visually, it seems that there are \\(4\\) distinct groups of points. Based on this initial guess, we can set \\(k\=4\\) and use the `kmeans()` function included in base R to find the groups automatically.
```
clusters <- kmeans(normdf, 4)
```
The first argument of `kmeans()` is a data frame or a matrix, and the second argument is the number of clusters. Figure [6\.4](unsupervised.html#fig:cluster2) shows the resulting clustering. The `kmeans()` method returns an object with several components, including `cluster`, which stores the assigned cluster for each data point, and `centers`, which stores the centroids.
FIGURE 6\.4: Students responses groups when \\(k\=4\\).
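For instance, these components can be accessed with the `$` operator (a quick sketch; the printed values depend on the data and the random initialization):

```
head(clusters$cluster) # cluster assignment of the first data points
clusters$centers       # one centroid per row, one column per variable
clusters$tot.withinss  # total within-cluster variation (equation (6.2))
```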
The \\(k\\)\-means algorithm found the same clusters we would intuitively expect. We can check how different the groups are by inspecting some of the variables, for example, by plotting a boxplot of the *Intimate* variable (Figure [6\.5](unsupervised.html#fig:intyes)). This variable is \\(1\\) if the student has an intimate partner or \\(0\\) otherwise. Since there are only two possible values, the boxplot looks flat. It shows that *cluster\_1* and *cluster\_3* are different from *cluster\_2* and *cluster\_4*.
FIGURE 6\.5: Boxplot of Intimate variable.
Additionally, let’s plot the *ACS* variable, which represents the total score of culture shock (see Figure [6\.6](unsupervised.html#fig:bpacs)). This variable has a minimum value of \\(3\\) and a maximum value of \\(13\\).
FIGURE 6\.6: Boxplot of ACS variable.
*cluster\_2* and *cluster\_4* were similar based on the *Intimate* variable, but if we look at the difference in medians of *ACS*, they are the most dissimilar clusters. This gives an intuitive idea of why the algorithm split them into two different groups.
So far, the number of groups \\(k\\) has been chosen arbitrarily or by visual inspection. But, *is there an automatic way to select the best k?* As always… it depends on the task at hand, but there is a method called the **Silhouette index** that can be used to select the optimal \\(k\\) based on an optimality criterion. This index is presented in the next section.
6\.2 The Silhouette Index
-------------------------
As opposed to supervised learning, in unsupervised learning there is **no ground truth** to validate the results. In clustering, one way to validate the resulting groups is to plot them and manually explore the clusters’ data points and look for similarities and/or differences. But sometimes we may also want to automate the process and have a quantitative way to measure how well the clustering algorithm grouped the points with the given set of parameters. If we had such a method we could do parameter optimization, for example, to find the best \\(k\\). Well, there is something called *the silhouette index* ([Rousseeuw 1987](#ref-rousseeuw1987)) and it can be used to measure the correctness of the clusters.
This index is computed for each data point and tells us how well that point is clustered. The total silhouette index is the mean of all points’ indices and gives us an idea of how well the points were clustered overall. This index goes from \\(\-1\\) to \\(1\\) and I’ll explain in a moment how to interpret it, but first let’s see how it is computed.
To compute the silhouette index two things are needed: the already created groups and the distances between points. Let:
\\(a(i)\=\\) average dissimilarity (distance) of point \\(i\\) to all other points in \\(A\\), where \\(A\\) is the cluster to which \\(i\\) has been assigned to (Figure [6\.7](unsupervised.html#fig:3clust)).
\\(d(i,C)\=\\) average dissimilarity between \\(i\\) and all points in some cluster \\(C\\).
\\(b(i)\=\\min\_{C \\neq A}d(i,C)\\). The cluster \\(B\\) for which the minimum is obtained is the neighbor of point \\(i\\). (The second best choice for \\(i\\)).
FIGURE 6\.7: Three resulting clusters: A, B, and C. (Reprinted from *Journal of Computational and Applied Mathematics*, Vol. 20, Rousseeuw, P. J., “Silhouettes: a graphical aid to the interpretation and validation of cluster analysis”, pp. 53\-65, Copyright 1987, with permission from Elsevier. [doi:10\.1016/0377\-0427(87\)90125\-7](https://doi.org/10.1016/0377-0427(87)90125-7).)
Thus, \\(s(i)\\) (the silhouette index of point \\(i\\)) is obtained with the formula:
\\\[\\begin{equation}
s(i) \= \\frac{{b(i) \- a(i)}}{{\\max \\{ a(i),b(i)\\} }}
\\tag{6\.3}
\\end{equation}\\]
When \\(s(i)\\) is close to \\(1\\), it means that the *within* dissimilarity \\(a(i)\\) is much smaller than the smallest *between* dissimilarity \\(b(i)\\) thus, \\(i\\) can be considered to be well clustered. When \\(s(i)\\) is close to \\(0\\) it is not clear whether \\(i\\) belongs to \\(A\\) or \\(B\\). When \\(s(i)\\) is close to \\(\-1\\), \\(a(i)\\) is larger than \\(b(i)\\) meaning that \\(i\\) may have been misgrouped. The total silhouette index \\(S\\) is the average of all indices \\(s(i)\\) of all points.
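As a quick numeric illustration (the values are made up, not taken from the dataset): if a point has \\(a(i)\=0\.2\\) and \\(b(i)\=0\.8\\), then \\(s(i)\=(0\.8\-0\.2)/\\max\\{0\.2,0\.8\\}\=0\.75\\) and the point can be considered well clustered. If instead \\(a(i)\=0\.8\\) and \\(b(i)\=0\.2\\), then \\(s(i)\=\-0\.75\\), suggesting the point may have been misgrouped.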
In R, the `cluster` package has the function `silhouette()` that computes the silhouette index. The following code snippet clusters the student responses into \\(4\\) groups and computes the index of each point with the `silhouette()` function. Its first argument is the cluster assignments as returned by `kmeans()`, and the second argument is a `dist` object that contains the distances between each pair of points. We can compute this information from our data frame with the `dist()` function. The `silhouette()` function returns an object with the silhouette index of each data point. We can compute the total index by taking the average, which in this case was \\(0\.346\\).
```
library(cluster) # Load the required package.
set.seed(1234)
clusters <- kmeans(normdf, 4) # Try with k=4
# Compute silhouette indices for all points.
si <- silhouette(clusters$cluster, dist(normdf))
# Print first rows.
head(si)
#> cluster neighbor sil_width
#> [1,] 1 4 0.3482364
#> [2,] 2 4 0.3718735
#> [3,] 3 1 0.3322198
#> [4,] 1 4 0.3998996
#> [5,] 1 4 0.3662811
#> [6,] 3 1 0.1463607
# Compute total Silhouette index by averaging the individual indices.
mean(si[,3])
#> [1] 0.3466427
```
One nice thing about this index is that it can be presented visually. To generate a silhouette plot, use the generic `plot()` function and pass the object returned by `silhouette()`.
```
plot(si, cex.names=0.6, col = 1:4,
main = "Silhouette plot, k=4",
border=NA)
```
FIGURE 6\.8: Silhouette plot when \\(k\=4\\).
Figure [6\.8](unsupervised.html#fig:si4) shows the silhouette plot when \\(k\=4\\). The horizontal lines represent the individual silhouette indices. In this plot, all of them are positive. The height of each cluster gives a visual idea of the number of data points it contains relative to the other clusters. We can see, for example, that cluster \\(2\\) is the smallest one. On the right side is the number of points in each cluster and their average silhouette index. At the bottom, the total silhouette index is printed (\\(0\.35\\)). We can try to cluster the points into \\(7\\) groups instead of \\(4\\) and see what happens.
```
set.seed(1234)
clusters <- kmeans(normdf, 7)
si <- silhouette(clusters$cluster, dist(normdf))
plot(si, cex.names=0.6, col = 1:7,
main = "Silhouette plot, k=7",
border=NA)
```
FIGURE 6\.9: Silhouette plot when \\(k\=7\\).
Here, clusters \\(2\\) and \\(4\\) have data points with negative indices and the overall score is \\(0\.26\\). This suggests that \\(k\=4\\) produces more coherent clusters than \\(k\=7\\).
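Rather than trying values of \\(k\\) one at a time, the comparison can be automated. The following is a minimal sketch, assuming `normdf` and the `cluster` package are already loaded (the variable names are hypothetical):

```
# Compute the average silhouette index for several values of k.
set.seed(1234)
d <- dist(normdf)
avg.si <- sapply(2:8, function(k) {
  cl <- kmeans(normdf, k)
  mean(silhouette(cl$cluster, d)[, 3])
})
names(avg.si) <- 2:8
avg.si # the k with the largest average index is the best candidate
```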
In this section, we used the Silhouette index to validate the clustering results. Over the years, several other clustering validation methods have been developed. In their paper, Halkidi, Batistakis, and Vazirgiannis ([2001](#ref-Halkidi2001)) present an overview of other clustering validation methods.
6\.3 Mining Association Rules
-----------------------------
Association rule mining consists of a set of methods to extract patterns (rules) from transactional data. For example, shopping behavior can be analyzed by finding rules from customers’ shopping transactions. A **transaction** is an event that involves a set of items. For example, when someone buys a soda, a bag of chips, and a chocolate bar, the purchase is registered as *one* transaction containing \\(3\\) items. I apologize for using this example for those of you with healthy diets. Based on a database that contains many transactions, it is possible to uncover item relationships. Those relationships are usually expressed as implication rules of the form \\(X \\implies Y\\) where \\(X\\) and \\(Y\\) are sets of items. Both sets are disjoint; that is, items in \\(X\\) are not in \\(Y\\) and vice versa, which can be formally represented as \\(X \\cap Y \= \\emptyset\\): the intersection of the two sets is the empty set. \\(X \\implies Y\\) is read as: if \\(X\\) then \\(Y\\). The left\-hand\-side (lhs) \\(X\\) is called the **antecedent** and the right\-hand\-side (rhs) \\(Y\\) is called the **consequent**.
In the unhealthy supermarket example, a rule like \\(\\{chips, chocolate\\} \\implies \\{soda\\}\\) can be interpreted as *if someone buys chips and chocolate then, it is likely that this same person will also buy soda*. These types of rules can be used for targeted advertisements, product placement decisions, etc.
The possible number of rules that can be generated grows exponentially as the number of items increases. Furthermore, not all rules may be interesting. The most well\-known algorithm to find interesting association rules is called **Apriori** ([Agrawal and Srikant 1994](#ref-agrawal1994)). To quantify if a rule is interesting or not, this algorithm uses two importance measures: **support** and **confidence**.
* **Support.** The support \\(\\text{supp}(X)\\) of an itemset \\(X\\) is the proportion of transactions that contain all the items in \\(X\\). This quantifies how frequent the itemset is.
* **Confidence.** The confidence of a rule is defined as \\(\\text{conf}(X \\implies Y)\=\\text{supp}(X \\cup Y)/\\text{supp}(X)\\) and can be interpreted as the conditional probability that \\(Y\\) occurs given that \\(X\\) is present. This can also be thought of as the probability that a transaction that contains \\(X\\) also contains \\(Y\\). The \\(\\cup\\) operator is the union of two sets. This means taking all elements from both sets and removing repeated elements.
Now that there is a way to measure the importance of the rules, the Apriori algorithm first finds itemsets that satisfy a minimum support and generates rules from those itemsets that satisfy a minimum confidence. Those minimum thresholds are set by the user. The lower the thresholds, the more rules returned by the algorithm. One thing to note is that Apriori only generates rules with itemsets of size \\(1\\) on the right\-hand side. Another common metric to measure the importance of a rule is the **lift**. Lift is typically used after Apriori has generated the rules to further filter and/or rank the results.
* **Lift.** The lift of a rule \\(\\text{lift}(X \\implies Y) \= \\text{supp}(X \\cup Y) / (\\text{supp}(X)\\text{supp}(Y))\\) is similar to the confidence but it also takes into consideration the frequency of \\(Y\\). A lift of \\(1\\) means that there is no association between \\(X\\) and \\(Y\\). A lift greater than \\(1\\) means that \\(Y\\) is likely to occur if \\(X\\) occurs and a lift less than \\(1\\) means that \\(Y\\) is unlikely to occur when \\(X\\) occurs.
Let’s compute all those metrics using an example. The following table shows a synthetic example database of transactions from shoppers with unhealthy behaviors.
FIGURE 6\.10: Example database with 10 transactions.
The support of the itemset consisting of a single item *‘chocolate’* is \\(\\text{supp}(\\{chocolate\\}) \= 5/10 \= 0\.5\\) because *‘chocolate’* appears in \\(5\\) out of the \\(10\\) transactions. The support of \\(\\{chips, soda\\}\\) is \\(3/10 \= 0\.3\\).
The confidence of the rule \\(\\{chocolate, chips\\} \\implies \\{soda\\}\\) is:
\\\[\\begin{align\*}
\\text{conf}(\\{chocolate, chips\\} \\implies \\{soda\\})\&\=\\frac{\\text{supp}(\\{chocolate, chips, soda\\})}{\\text{supp}(\\{chocolate,chips\\})} \\\\
\&\=(2/10\) / (3/10\) \\\\
\&\\approx 0\.67
\\end{align\*}\\]
The lift of \\(\\{soda\\} \\implies \\{ice cream\\}\\) is:
\\\[\\begin{align\*}
\\text{lift}(\\{soda\\} \\implies \\{ice cream\\})\&\=\\frac{\\text{supp}(\\{soda, ice cream\\})}{\\text{supp}(\\{soda\\})\\text{supp}(\\{ice cream\\})} \\\\
\&\=(2/10\) / ((7/10\)(3/10\)) \\\\
\&\\approx 0\.95\.
\\end{align\*}\\]
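These three measures are easy to compute by hand. The following minimal sketch mirrors the definitions above, assuming `tdb` is a list of character vectors, one vector of items per transaction (all names here are hypothetical):

```
# supp: proportion of transactions that contain all items in `items`.
supp <- function(items, tdb) {
  mean(sapply(tdb, function(t) all(items %in% t)))
}
# conf(X => Y) = supp(X union Y) / supp(X).
conf <- function(lhs, rhs, tdb) {
  supp(c(lhs, rhs), tdb) / supp(lhs, tdb)
}
# lift(X => Y) = supp(X union Y) / (supp(X) * supp(Y)).
lift <- function(lhs, rhs, tdb) {
  supp(c(lhs, rhs), tdb) / (supp(lhs, tdb) * supp(rhs, tdb))
}
# e.g., conf(c("chocolate", "chips"), "soda", tdb) reproduces the
# confidence computation above.
```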
Association rule mining is unsupervised in the sense that there are no labels or ground truth. Many applications of association rules target market basket analysis to gain insights into shoppers’ behavior and take actions to increase sales. To find such rules, it is necessary to have ‘transactions’ (sets of items), for example, supermarket products. However, this is not the only application of association rules. Other problems can also be structured as transactions of items. For example, in medicine, *diseases* can be seen as transactions and *symptoms* as items. Thus, one can apply association rule algorithms to find symptom and disease relationships. Another application is in recommender systems. Take, for example, movies. Transactions can be the set of movies watched by every user. If you watched a movie \\(m\\), then the recommender system can suggest another movie that co\-occurred frequently with \\(m\\) and that you have not watched yet. Furthermore, other types of relational data can be transformed into transaction\-like structures to find patterns, and this is precisely what we are going to do in the next section to mine criminal patterns.
### 6\.3\.1 Finding Rules for Criminal Behavior
`crimes_process.R` `crimes_rules.R`
In this section, we will use association rule mining to find patterns in the *HOMICIDE REPORTS*[14](#fn14) dataset. This database contains homicide reports from 1980 to 2014 in the United States. It is structured as a table with \\(24\\) columns and \\(638454\\) rows. Each row corresponds to a homicide report that includes city, state, year, month, sex of victim, sex of perpetrator, whether the crime was solved, weapon used, age of the victim and perpetrator, the relationship type between the victim and the perpetrator, and some other information.
Before trying to find rules, the data needs to be preprocessed and converted into transactions. Each homicide report will be a transaction and the items are the possible values of \\(3\\) of the columns: **Relationship**, **Weapon**, and **Perpetrator.Age**. The **Relationship** variable can take values like *Stranger*, *Neighbor*, *Friend*, etc. In total, there are \\(28\\) possible relationship values including *Unknown*. For the purpose of our analysis, we will remove rows with unknown values in **Relationship** and **Weapon**. Since **Perpetrator.Age** is an integer, we need to convert it into categories. The following age groups are created: child (\< \\(13\\) years), teen (\\(13\\) to \\(17\\) years), adult (\\(18\\) to \\(45\\) years), and lateAdulthood (\> \\(45\\) years). After these cleaning and preprocessing steps, the dataset has \\(3\\) columns and \\(328238\\) rows (see Figure [6\.11](unsupervised.html#fig:tabcrimes)). The script used to perform the preprocessing is `crimes_process.R`.
FIGURE 6\.11: First rows of preprocessed crimes data frame. Source: Data from the Murder Accountability Project, founded by Thomas Hargrove (CC BY\-SA 4\.0\) \[[https://creativecommons.org/licenses/by\-sa/4\.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode)].
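The age discretization can be done with the base R `cut()` function. A minimal sketch, assuming the reports are in a data frame `df` (`Perpetrator.Age` is the column described above; the `AgeGroup` name is hypothetical):

```
# Discretize the perpetrator's age into the four groups described above.
df$AgeGroup <- cut(df$Perpetrator.Age,
                   breaks = c(-Inf, 12, 17, 45, Inf),
                   labels = c("child", "teen", "adult", "lateAdulthood"))
```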
Now, we have a data frame that contains only the relevant information. Each row will be used to generate one transaction. An example transaction may be *{R.Wife, Knife, Adult}*. This one represents the case where the perpetrator is an *adult* who used a *knife* to kill his *wife*. Note the ‘R.’ at the beginning of ‘Wife’. This prefix was added for clarity, to identify this item as a relationship. One thing to note is that every transaction consists of exactly \\(3\\) items. This is a bit different from the market basket case, in which every transaction can include a varying number of products. Although this item\-size constraint was a design decision based on the structure of the original data, it will not prevent us from performing the analysis to find interesting rules.
To find the association rules, the `arules` package ([Hahsler et al. 2019](#ref-arules)) will be used. This package has an interface to an efficient C implementation of the Apriori algorithm. It needs the transactions to be stored as an object of type ‘transactions’. One way to create this object is to build a **logical matrix** and cast it into a transactions object. The rows of the logical matrix represent transactions and the columns represent items. The number of columns equals the total number of possible items. A `TRUE` value indicates that the item is present in the transaction and `FALSE` otherwise. In our case, the matrix has \\(46\\) columns. The `crimes_process.R` script has the code to generate this matrix `M`. The \\(46\\) items (columns of `M`) are:
```
as.character(colnames(M))
#> [1] "R.Acquaintance" "R.Wife" "R.Stranger"
#> [4] "R.Girlfriend" "R.Ex-Husband" "R.Brother"
#> [7] "R.Stepdaughter" "R.Husband" "R.Friend"
#> [10] "R.Family" "R.Neighbor" "R.Father"
#> [13] "R.In-Law" "R.Son" "R.Ex-Wife"
#> [16] "R.Boyfriend" "R.Mother" "R.Sister"
#> [19] "R.Common-Law Husband" "R.Common-Law Wife" "R.Stepfather"
#> [22] "R.Stepson" "R.Stepmother" "R.Daughter"
#> [25] "R.Boyfriend/Girlfriend" "R.Employer" "R.Employee"
#> [28] "Blunt Object" "Strangulation" "Rifle"
#> [31] "Knife" "Shotgun" "Handgun"
#> [34] "Drowning" "Firearm" "Suffocation"
#> [37] "Fire" "Drugs" "Explosives"
#> [40] "Fall" "Gun" "Poison"
#> [43] "teen" "adult" "lateAdulthood"
#> [46] "child"
```
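The exact construction code is in `crimes_process.R` and is not shown here, but the following minimal sketch conveys the idea. It assumes `df` is the preprocessed data frame with columns `Relationship`, `Weapon`, and the hypothetical `AgeGroup` from the earlier sketch (the loop favors clarity over speed):

```
# Build the logical matrix: one row per report, one column per item.
relitems <- paste0("R.", df$Relationship) # prefix relationships with 'R.'
items <- unique(c(relitems, as.character(df$Weapon),
                  as.character(df$AgeGroup)))
M <- matrix(FALSE, nrow = nrow(df), ncol = length(items),
            dimnames = list(NULL, items))
for (i in seq_len(nrow(df))) {
  ids <- c(relitems[i], as.character(df$Weapon[i]),
           as.character(df$AgeGroup[i]))
  M[i, ids] <- TRUE
}
```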
The following snippet shows how to convert the matrix into an `arules` transactions object. Before the conversion, the package `arules` needs to be loaded. For convenience, the transactions are saved in a file `transactions.RData`.
```
library(arules)
# Convert into a transactions object.
transactions <- as(M, "transactions")
# Save transactions file.
save(transactions, file="transactions.RData")
```
Now that the database is in the required format we can start the analysis. The `crimes_rules.R` script has the code to perform the analysis. First, the transactions file that we generated before is loaded:
```
library(arules)
library(arulesViz)
# Load preprocessed data.
load("transactions.RData")
```
Note that in addition to the `arules` package, we also loaded the `arulesViz` package ([Hahsler 2019](#ref-ParulesViz)). This package has several functions to generate cool plots of the learned rules! A summary of the transactions can be printed with the `summary()` function:
```
# Print summary.
summary(transactions)
#> transactions as itemMatrix in sparse format with
#> 328238 rows (elements/itemsets/transactions) and
#> 46 columns (items) and a density of 0.06521739
#>
#> most frequent items:
#> adult Handgun R.Acquaintance R.Stranger
#> 257026 160586 117305 77725
#> Knife (Other)
#> 61936 310136
#>
#> element (itemset/transaction) length distribution:
#> sizes
#> 3
#> 328238
#>
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 3 3 3 3 3 3
#>
#> includes extended item information - examples:
#> labels
#> Relationship1 R.Acquaintance
#> Relationship2 R.Wife
#> Relationship3 R.Stranger
```
The summary shows the total number of rows (transactions) and the number of columns. It also prints the most frequent items, in this case, *adult* with \\(257026\\) occurrences, *Handgun* with \\(160586\\), and so on. The itemset sizes are also displayed. Here, all itemsets have a size of \\(3\\) (by design). Some other summary statistics are also printed.
We can use the `itemFrequencyPlot()` function from the `arules` package to plot the frequency of items.
```
itemFrequencyPlot(transactions,
type = "relative",
topN = 15,
main = 'Item frequencies')
```
The `type` argument specifies that we want to plot the relative frequencies. Use `"absolute"` instead to plot the total counts. `topN` is used to select how many items are plotted. Figure [6\.12](unsupervised.html#fig:rulesfreqs) shows the output.
FIGURE 6\.12: Frequencies of the top 15 items.
Now it is time to find some interesting rules! This can be done with the `apriori()` function as follows:
```
# Run apriori algorithm.
resrules <- apriori(transactions,
parameter = list(support = 0.001,
confidence = 0.5,
# Find rules with at least 2 items.
minlen = 2,
target = 'rules'))
```
The first argument is the transactions object. The second argument, `parameter`, specifies a list of algorithm parameters. In this case, we want rules with a minimum support of \\(0\.001\\) and a confidence of at least \\(0\.5\\). The `minlen` argument specifies the minimum number of allowed items in a rule (antecedent \+ consequent). We set it to \\(2\\) since we want rules with at least one element in the antecedent and one element in the consequent, for example, *{item1 \=\> item2}*. The Apriori algorithm creates rules with only one item in the consequent. Finally, the `target` parameter is used to specify that we want to find rules, because the function can also return itemsets of different types (see the documentation for more details). The returned rules are saved in the `resrules` variable, which can be used later to explore the results. We can also print a summary of the returned results.
```
# Print a summary of the results.
summary(resrules)
#> set of 141 rules
#>
#> rule length distribution (lhs + rhs):sizes
#> 2 3
#> 45 96
#>
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 2.000 2.000 3.000 2.681 3.000 3.000
#>
#> summary of quality measures:
#> support confidence lift count
#> Min. :0.001030 Min. :0.5045 Min. :0.6535 Min. : 338
#> 1st Qu.:0.001767 1st Qu.:0.6478 1st Qu.:0.9158 1st Qu.: 580
#> Median :0.004424 Median :0.7577 Median :1.0139 Median : 1452
#> Mean :0.021271 Mean :0.7269 Mean :1.0906 Mean : 6982
#> 3rd Qu.:0.012960 3rd Qu.:0.8131 3rd Qu.:1.0933 3rd Qu.: 4254
#> Max. :0.376836 Max. :0.9539 Max. :4.2777 Max. :123692
#>
#> mining info:
#> data ntransactions support confidence
#> transactions 328238 0.001 0.5
```
By looking at the summary, we see that the algorithm found \\(141\\) rules that satisfy the support and confidence thresholds. The rule length distribution is also printed. Here, \\(45\\) rules are of size \\(2\\) and \\(96\\) rules are of size \\(3\\). Then, some standard statistics are shown for support, confidence, and lift. The `inspect()` function can be used to print the actual rules. Rules can be sorted by one of the importance measures. The following code sorts by lift and prints the first \\(20\\) rules. Figure [6\.13](unsupervised.html#fig:resrules) shows the output.
```
# Print the first n (20) rules with highest lift in decreasing order.
inspect(sort(resrules, by='lift', decreasing = TRUE)[1:20])
```
FIGURE 6\.13: Output of the inspect() function.
The first rule, with a lift of \\(4\.27\\), says that if a homicide was committed by an *adult* and the victim was the *stepson*, then it is likely that a *blunt object* was used for the crime. By looking at the rules, one can also note that whenever *blunt object* appears either in the lhs or rhs, the victim was most likely an infant. Another thing to note is that when the victim was a *boyfriend*, the crime was likely committed with a *knife*. This is also mentioned in the report ‘Homicide trends in the United States’ ([Cooper, Smith, et al. 2012](#ref-cooper2012)):
> From 1980 through 2008 ‘Boyfriends were more likely to be killed by knives than any other group of intimates’.
According to rule \\(20\\), crimes involving *girlfriend* have a strong relationship with *strangulation*. This can also be confirmed in ([Cooper, Smith, et al. 2012](#ref-cooper2012)):
> From 1980 through 2008 ‘Girlfriends were more likely to be killed by force…’.
The resulting rules can be plotted with the `plot()` function (see Figure [6\.14](unsupervised.html#fig:rulesScatter)). By default, it generates a scatterplot with the *support* in the \\(x\\) axis and *confidence* in the \\(y\\) axis colored by *lift*.
```
# Plot a default scatterplot of support vs. confidence colored by lift.
plot(resrules)
```
FIGURE 6\.14: Scatterplot of rules support vs. confidence colored by lift.
The plot shows that rules with a high *lift* also have a low *support* and *confidence*. Hahsler ([2017](#ref-Hahsler2017)) mentioned that rules with high *lift* typically have low *support*. The plot can be customized, for example, to show the *support* and *lift* on the axes and color the points by confidence. The axes can be set with the `measure` parameter and the coloring with the `shading` parameter. The function also supports different plotting engines, including static and interactive. The following code generates a customized interactive plot by setting `engine = "htmlwidget"`. This is very handy if you want to know which points correspond to which rules. By hovering the mouse over the desired point, the corresponding rule is shown in a tooltip box (Figure [6\.15](unsupervised.html#fig:rulesScatterInt)). The interactive plots also allow zooming in on regions by clicking and dragging.
```
# Customize scatterplot to make it interactive
# and plot support vs. lift colored by confidence.
plot(resrules, engine = "htmlwidget",
measure = c("support", "lift"), shading = "confidence")
```
FIGURE 6\.15: Interactive scatterplot of rules.
The `arulesViz` package has a nice option to plot rules as a graph. This is done by setting `method = "graph"`. We can also make the graph interactive for easier exploration by setting `engine="htmlwidget"`. For clarity, the font size is reduced with `cex=0.9`. Here we plot the first \\(25\\) rules.
```
# Plot rules as a graph.
plot(head(sort(resrules, by = "lift"), n=25),
method = "graph",
control=list(cex=.9),
engine="htmlwidget")
```
FIGURE 6\.16: Interactive graph of rules.
Figure [6\.16](unsupervised.html#fig:rulesGraphInt) shows a zoomed\-in portion of the entire graph. Circles represent rules and rounded squares items. The size of the circle is relative to the *support* and color relative to the *lift*. Incoming arrows represent the items in the antecedent and the outgoing arrow of a circle points to the item in the consequent part of the rule. From this graph, some interesting patterns can be seen. First, when the age category of the perpetrator is *lateAdulthood*, the victims were the *husband* or *ex\-wife*. When the perpetrator is a *teen*, the victim was likely a *friend* or *stranger*.
The `arulesViz` package has a cool function `ruleExplorer()` that generates a shiny app with interactive controls and several plot types. When running the following code (output not shown), you may be asked to install additional shiny\-related packages.
```
# Opens a shiny app with several interactive plots.
ruleExplorer(resrules)
```
Sometimes Apriori returns thousands of rules. There is a convenient `subset()` function to extract rules of interest. For example, we can select only the rules that contain *R.Girlfriend* in the antecedent (lhs) and print the top three with highest lift (Figure [6\.17](unsupervised.html#fig:resrulesGirlfriend) shows the result):
```
# Subset rules.
rulesGirlfriend <- subset(resrules, subset = lhs %in% "R.Girlfriend")
# Print rules with highest lift.
inspect(head(rulesGirlfriend, n = 3, by = "lift"))
```
FIGURE 6\.17: Output of the inspect() function.
In this section, we showed how interesting rules can be extracted from a crimes database. Several preprocessing steps were required to transform the tabular data into transactional data. This example already demonstrated how the same data can be represented in different ways (tabular and transactional). The next chapter will cover more details about how data can be transformed into **different representations** suitable for different types of learning algorithms.
6\.4 Summary
------------
One of the types of machine learning is **unsupervised learning** in which there are no labels. This chapter introduced some unsupervised methods such as clustering and association rules.
* The objective of **\\(k\\)\-means clustering** is to find groups of points such that points in the same group are similar and points from different groups are as dissimilar as possible.
* The **centroid** of a group is calculated by taking the mean value of each feature.
* In **\\(k\\)\-means**, one needs to specify the number of groups \\(k\\) before running the algorithm.
* The **Silhouette Index** is a measure that tells us how well a set of points were clustered. This measure can be used to find the optimal number of groups \\(k\\).
* **Association rules** can find patterns in an unsupervised manner.
* The **Apriori algorithm** is the most well\-known method for finding association rules.
* Before using the *Apriori algorithm*, one needs to format the data as **transactions**.
* A **transaction** is an event that involves a set of items.
> From 1980 through 2008 ‘Boyfriends were more likely to be killed by knives than any other group of intimates’.
According to rule \\(20\\), crimes involving *girlfriend* have a strong relationship with *strangulation*. This can also be confirmed in ([Cooper, Smith, et al. 2012](#ref-cooper2012)):
> From 1980 through 2008 ‘Girlfriends were more likely to be killed by force…’.
The resulting rules can be plotted with the `plot()` function (see Figure [6\.14](unsupervised.html#fig:rulesScatter)). By default, it generates a scatterplot with the *support* in the \\(x\\) axis and *confidence* in the \\(y\\) axis colored by *lift*.
```
# Plot a default scatterplot of support vs. confidence colored by lift.
plot(resrules)
```
FIGURE 6\.14: Scatterplot of rules support vs. confidence colored by lift.
The plot shows that rules with a high *lift* also have a low *support* and *confidence*. Hahsler ([2017](#ref-Hahsler2017)) mentioned that rules with high *lift* typically have low *support*. The plot can be customized for example to show the *support* and *lift* in the axes and color them by confidence. The axes can be set with the `measure` parameter and the coloring with the `shading` parameter. The function also supports different plotting engines including static and interactive. The following code generates a customized interactive plot by setting `engine = "htmlwidget"`. This is very handy if you want to know which points correspond to which rules. By hovering the mouse on the desired point the corresponding rule is shown as a tooltip box (Figure [6\.15](unsupervised.html#fig:rulesScatterInt)). The interactive plots also allow to zoom in regions by clicking and dragging.
```
# Customize scatterplot to make it interactive
# and plot support vs. lift colored by confidence.
plot(resrules, engine = "htmlwidget",
measure = c("support", "lift"), shading = "confidence")
```
FIGURE 6\.15: Interactive scatterplot of rules.
The `arulesViz` package has a nice option to plot rules as a graph. This is done by setting `method = "graph"`. We can also make the graph interactive for easier exploration by setting `engine="htmlwidget"`. For clarity, the font size is reduced with `cex=0.9`. Here we plot the first \\(25\\) rules.
```
# Plot rules as a graph.
plot(head(sort(resrules, by = "lift"), n=25),
method = "graph",
control=list(cex=.9),
engine="htmlwidget")
```
FIGURE 6\.16: Interactive graph of rules.
Figure [6\.16](unsupervised.html#fig:rulesGraphInt) shows a zoomed\-in portion of the entire graph. Circles represent rules and rounded squares items. The size of the circle is relative to the *support* and color relative to the *lift*. Incoming arrows represent the items in the antecedent and the outgoing arrow of a circle points to the item in the consequent part of the rule. From this graph, some interesting patterns can be seen. First, when the age category of the perpetrator is *lateAdulthood*, the victims were the *husband* or *ex\-wife*. When the perpetrator is a *teen*, the victim was likely a *friend* or *stranger*.
The `arulesViz` package has a cool function `ruleExplorer()` that generates a shiny app with interactive controls and several plot types. When running the following code (output not shown) you may be asked to install additional shiny related packages.
```
# Opens a shiny app with several interactive plots.
ruleExplorer(resrules)
```
Sometimes Apriori returns thousands of rules. There is a convenient `subset()` function to extract rules of interest. For example, we can select only the rules that contain *R.Girlfriend* in the antecedent (lhs) and print the top three with highest lift (Figure [6\.17](unsupervised.html#fig:resrulesGirlfriend) shows the result):
```
# Subset transactions.
rulesGirlfriend <- subset(resrules, subset = lhs %in% "R.Girlfriend")
# Print rules with highest lift.
inspect(head(rulesGirlfriend, n = 3, by = "lift"))
```
FIGURE 6\.17: Output of the inspect() function.
In this section, we showed how interesting rules can be extracted from a crimes database. Several preprocessing steps were required to transform the tabular data into transactional data. This example already demonstrated how the same data can be represented in different ways (tabular and transactional). The next chapter will cover more details about how data can be transformed into **different representations** suitable for different types of learning algorithms.
### 6\.3\.1 Finding Rules for Criminal Behavior
`crimes_process.R` `crimes_rules.R`
In this section, we will use association rule mining to find patterns in the *HOMICIDE REPORTS*[14](#fn14) dataset. This database contains homicide reports from 1980 to 2014 in the United States. The database is structured as a table with \\(24\\) columns and \\(638454\\) rows. Each row corresponds to a homicide report that includes city, state, year, month, sex of the victim, sex of the perpetrator, whether the crime was solved or not, the weapon used, the ages of the victim and the perpetrator, the relationship type between the victim and the perpetrator, and some other information.
Before trying to find rules, the data needs to be preprocessed and converted into transactions. Each homicide report will be a transaction and the items are the possible values of \\(3\\) of the columns: **Relationship**, **Weapon**, and **Perpetrator.Age**. The **Relationship** variable can take values like *Stranger*, *Neighbor*, *Friend*, etc. In total, there are \\(28\\) possible relationship values including *Unknown*. For the purpose of our analysis, we will remove rows with unknown values in **Relationship** and **Weapon**. Since **Perpetrator.Age** is an integer, we need to convert it into categories. The following age groups are created: child (\< \\(13\\) years), teen (\\(13\\) to \\(17\\) years), adult (\\(18\\) to \\(45\\) years), and lateAdulthood (\> \\(45\\) years). After these cleaning and preprocessing steps, the dataset has \\(3\\) columns and \\(328238\\) rows (see Figure [6\.11](unsupervised.html#fig:tabcrimes)). The script used to perform the preprocessing is `crimes_process.R`.
FIGURE 6\.11: First rows of preprocessed crimes data frame. Source: Data from the Murder Accountability Project, founded by Thomas Hargrove (CC BY\-SA 4\.0\) \[[https://creativecommons.org/licenses/by\-sa/4\.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode)].
Now, we have a data frame that contains only the relevant information. Each row will be used to generate one transaction. An example transaction is *{R.Wife, Knife, Adult}*, which represents a case where the perpetrator was an *adult* who used a *knife* to kill his *wife*. The ‘R.’ prefix in ‘R.Wife’ was added to make clear that this item is a relationship. One thing to note is that every transaction consists of exactly \\(3\\) items. This is a bit different from the market basket case, in which transactions can include a varying number of products. Although this item\-size constraint was a design decision based on the structure of the original data, it does not prevent us from performing the analysis to find interesting rules.
To find the association rules, we will use the `arules` package ([Hahsler et al. 2019](#ref-arules)), which provides an interface to an efficient C implementation of the Apriori algorithm. The package expects the transactions to be stored as an object of class ‘transactions’. One way to create this object is to build a **logical matrix** and cast it into a transactions object. The rows of the logical matrix represent transactions and the columns represent items, so the number of columns equals the total number of possible items. A `TRUE` value indicates that the item is present in the transaction and `FALSE` that it is not. In our case, the matrix has \\(46\\) columns. The `crimes_process.R` script has the code to generate this matrix `M`. The \\(46\\) items (columns of `M`) are:
```
as.character(colnames(M))
#> [1] "R.Acquaintance" "R.Wife" "R.Stranger"
#> [4] "R.Girlfriend" "R.Ex-Husband" "R.Brother"
#> [7] "R.Stepdaughter" "R.Husband" "R.Friend"
#> [10] "R.Family" "R.Neighbor" "R.Father"
#> [13] "R.In-Law" "R.Son" "R.Ex-Wife"
#> [16] "R.Boyfriend" "R.Mother" "R.Sister"
#> [19] "R.Common-Law Husband" "R.Common-Law Wife" "R.Stepfather"
#> [22] "R.Stepson" "R.Stepmother" "R.Daughter"
#> [25] "R.Boyfriend/Girlfriend" "R.Employer" "R.Employee"
#> [28] "Blunt Object" "Strangulation" "Rifle"
#> [31] "Knife" "Shotgun" "Handgun"
#> [34] "Drowning" "Firearm" "Suffocation"
#> [37] "Fire" "Drugs" "Explosives"
#> [40] "Fall" "Gun" "Poison"
#> [43] "teen" "adult" "lateAdulthood"
#> [46] "child"
```
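To give an idea of how such a matrix could be built, here is a rough sketch (the actual code is in `crimes_process.R`; the data frame and its column names here are assumptions made for illustration):
```
# Hypothetical sketch of building M; the real code is in crimes_process.R.
# Assume df is the preprocessed data frame with columns
# Relationship, Weapon and AgeGroup.
items <- c(paste0("R.", unique(df$Relationship)),
           unique(df$Weapon),
           unique(df$AgeGroup))
M <- matrix(FALSE, nrow = nrow(df), ncol = length(items),
            dimnames = list(NULL, items))
for (i in 1:nrow(df)) {
  # Flag the three items of this transaction as present.
  M[i, c(paste0("R.", df$Relationship[i]),
         df$Weapon[i],
         df$AgeGroup[i])] <- TRUE
}
```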
The following snippet shows how to convert the matrix into an `arules` transactions object. Before the conversion, the package `arules` needs to be loaded. For convenience, the transactions are saved in a file `transactions.RData`.
```
library(arules)
# Convert into a transactions object.
transactions <- as(M, "transactions")
# Save transactions file.
save(transactions, file="transactions.RData")
```
Now that the database is in the required format we can start the analysis. The `crimes_rules.R` script has the code to perform the analysis. First, the transactions file that we generated before is loaded:
```
library(arules)
library(arulesViz)
# Load preprocessed data.
load("transactions.RData")
```
Note that in addition to the `arules` package, we also loaded the `arulesViz` package ([Hahsler 2019](#ref-ParulesViz)). This package has several functions to generate cool plots of the learned rules! A summary of the transactions can be printed with the `summary()` function:
```
# Print summary.
summary(transactions)
#> transactions as itemMatrix in sparse format with
#> 328238 rows (elements/itemsets/transactions) and
#> 46 columns (items) and a density of 0.06521739
#>
#> most frequent items:
#> adult Handgun R.Acquaintance R.Stranger
#> 257026 160586 117305 77725
#> Knife (Other)
#> 61936 310136
#>
#> element (itemset/transaction) length distribution:
#> sizes
#> 3
#> 328238
#>
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 3 3 3 3 3 3
#>
#> includes extended item information - examples:
#> labels
#> Relationship1 R.Acquaintance
#> Relationship2 R.Wife
#> Relationship3 R.Stranger
```
The summary shows the total number of rows (transactions) and the number of columns. It also prints the most frequent items, in this case, *adult* with \\(257026\\) occurrences, *Handgun* with \\(160586\\), and so on. The itemset sizes are also displayed. Here, all itemsets have a size of \\(3\\) (by design). Some other summary statistics are also printed.
We can use the `itemFrequencyPlot()` function from the `arules` package to plot the frequency of items.
```
itemFrequencyPlot(transactions,
type = "relative",
topN = 15,
main = 'Item frequencies')
```
The `type` argument specifies that we want to plot the relative frequencies. Use `"absolute"` instead to plot the total counts. `topN` is used to select how many items are plotted. Figure [6\.12](unsupervised.html#fig:rulesfreqs) shows the output.
FIGURE 6\.12: Frequencies of the top 15 items.
Now it is time to find some interesting rules! This can be done with the `apriori()` function as follows:
```
# Run apriori algorithm.
resrules <- apriori(transactions,
parameter = list(support = 0.001,
confidence = 0.5,
# Find rules with at least 2 items.
minlen = 2,
target = 'rules'))
```
The first argument is the transactions object. The second argument, `parameter`, specifies a list of algorithm parameters. In this case, we want rules with a minimum support of \\(0\.001\\) and a minimum confidence of \\(0\.5\\). With \\(328238\\) transactions, a support of \\(0\.001\\) means that a rule must occur in at least \\(329\\) of them. The `minlen` argument specifies the minimum number of allowed items in a rule (antecedent \+ consequent). We set it to \\(2\\) since we want rules with at least one element in the antecedent and one element in the consequent, for example, *{item1 \=\> item2}*. The Apriori implementation in `arules` creates rules with only one item in the consequent. Finally, the `target` parameter is used to specify that we want to find rules, since the function can also return item sets of different types (see the documentation for more details). The returned rules are saved in the `resrules` variable that can be used later to explore the results. We can also print a summary of the returned results.
```
# Print a summary of the results.
summary(resrules)
#> set of 141 rules
#>
#> rule length distribution (lhs + rhs):sizes
#> 2 3
#> 45 96
#>
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 2.000 2.000 3.000 2.681 3.000 3.000
#>
#> summary of quality measures:
#> support confidence lift count
#> Min. :0.001030 Min. :0.5045 Min. :0.6535 Min. : 338
#> 1st Qu.:0.001767 1st Qu.:0.6478 1st Qu.:0.9158 1st Qu.: 580
#> Median :0.004424 Median :0.7577 Median :1.0139 Median : 1452
#> Mean :0.021271 Mean :0.7269 Mean :1.0906 Mean : 6982
#> 3rd Qu.:0.012960 3rd Qu.:0.8131 3rd Qu.:1.0933 3rd Qu.: 4254
#> Max. :0.376836 Max. :0.9539 Max. :4.2777 Max. :123692
#>
#> mining info:
#> data ntransactions support confidence
#> transactions 328238 0.001 0.5
```
By looking at the summary, we see that the algorithm found \\(141\\) rules that satisfy the support and confidence thresholds. The rule length distribution is also printed. Here, \\(45\\) rules are of size \\(2\\) and \\(96\\) rules are of size \\(3\\). Then, some standard statistics are shown for support, confidence, and lift. The `inspect()` function can be used to print the actual rules. Rules can be sorted by one of the importance measures. The following code sorts by lift and prints the first \\(20\\) rules. Figure [6\.13](unsupervised.html#fig:resrules) shows the output.
```
# Print the first n (20) rules with highest lift in decreasing order.
inspect(sort(resrules, by='lift', decreasing = T)[1:20])
```
FIGURE 6\.13: Output of the inspect() function.
The first rule, with a lift of \\(4\.27\\), says that if a homicide was committed by an *adult* and the victim was the *stepson*, then it is likely that a *blunt object* was used for the crime. By looking at the rules, one can also note that whenever *blunt object* appears either in the lhs or rhs, the victim was most likely an infant. Another thing to note is that when the victim was a *boyfriend*, the crime was likely committed with a *knife*. This is also mentioned in the report ‘Homicide trends in the United States’ ([Cooper, Smith, et al. 2012](#ref-cooper2012)):
> From 1980 through 2008 ‘Boyfriends were more likely to be killed by knives than any other group of intimates’.
According to rule \\(20\\), crimes involving *girlfriend* have a strong relationship with *strangulation*. This can also be confirmed in ([Cooper, Smith, et al. 2012](#ref-cooper2012)):
> From 1980 through 2008 ‘Girlfriends were more likely to be killed by force…’.
The resulting rules can be plotted with the `plot()` function (see Figure [6\.14](unsupervised.html#fig:rulesScatter)). By default, it generates a scatterplot with the *support* on the \\(x\\) axis and the *confidence* on the \\(y\\) axis, colored by *lift*.
```
# Plot a default scatterplot of support vs. confidence colored by lift.
plot(resrules)
```
FIGURE 6\.14: Scatterplot of rules support vs. confidence colored by lift.
The plot shows that rules with a high *lift* also have a low *support* and *confidence*. Hahsler ([2017](#ref-Hahsler2017)) mentioned that rules with high *lift* typically have low *support*. The plot can be customized, for example, to show the *support* and *lift* on the axes and color the points by confidence. The axes are set with the `measure` parameter and the coloring with the `shading` parameter. The function also supports different plotting engines, both static and interactive. The following code generates a customized interactive plot by setting `engine = "htmlwidget"`. This is very handy if you want to know which points correspond to which rules. By hovering the mouse over a point, the corresponding rule is shown in a tooltip box (Figure [6\.15](unsupervised.html#fig:rulesScatterInt)). The interactive plots also allow zooming into regions by clicking and dragging.
```
# Customize scatterplot to make it interactive
# and plot support vs. lift colored by confidence.
plot(resrules, engine = "htmlwidget",
measure = c("support", "lift"), shading = "confidence")
```
FIGURE 6\.15: Interactive scatterplot of rules.
The `arulesViz` package has a nice option to plot rules as a graph. This is done by setting `method = "graph"`. We can also make the graph interactive for easier exploration by setting `engine="htmlwidget"`. For clarity, the font size is reduced with `cex=0.9`. Here we plot the \\(25\\) rules with the highest lift.
```
# Plot rules as a graph.
plot(head(sort(resrules, by = "lift"), n=25),
method = "graph",
control=list(cex=.9),
engine="htmlwidget")
```
FIGURE 6\.16: Interactive graph of rules.
Figure [6\.16](unsupervised.html#fig:rulesGraphInt) shows a zoomed\-in portion of the entire graph. Circles represent rules and rounded squares represent items. The size of a circle is proportional to the rule’s *support* and its color to the *lift*. Incoming arrows represent the items in the antecedent, and the outgoing arrow of a circle points to the item in the consequent part of the rule. From this graph, some interesting patterns can be seen. First, when the age category of the perpetrator is *lateAdulthood*, the victims were the *husband* or *ex\-wife*. When the perpetrator is a *teen*, the victim was likely a *friend* or *stranger*.
The `arulesViz` package has a cool function `ruleExplorer()` that generates a Shiny app with interactive controls and several plot types. When running the following code (output not shown) you may be asked to install additional Shiny\-related packages.
```
# Opens a shiny app with several interactive plots.
ruleExplorer(resrules)
```
Sometimes Apriori returns thousands of rules. There is a convenient `subset()` function to extract rules of interest. For example, we can select only the rules that contain *R.Girlfriend* in the antecedent (lhs) and print the top three with highest lift (Figure [6\.17](unsupervised.html#fig:resrulesGirlfriend) shows the result):
```
# Subset rules.
rulesGirlfriend <- subset(resrules, subset = lhs %in% "R.Girlfriend")
# Print rules with highest lift.
inspect(head(rulesGirlfriend, n = 3, by = "lift"))
```
FIGURE 6\.17: Output of the inspect() function.
In this section, we showed how interesting rules can be extracted from a crimes database. Several preprocessing steps were required to transform the tabular data into transactional data. This example already demonstrated how the same data can be represented in different ways (tabular and transactional). The next chapter will cover more details about how data can be transformed into **different representations** suitable for different types of learning algorithms.
6\.4 Summary
------------
One of the types of machine learning is **unsupervised learning** in which there are no labels. This chapter introduced some unsupervised methods such as clustering and association rules.
* The objective of **\\(k\\)\-means clustering** is to find groups of points such that points in the same group are similar and points from different groups are as dissimilar as possible.
* The **centroid** of a group is calculated by taking the mean value of each feature.
* In **\\(k\\)\-means**, one needs to specify the number of groups \\(k\\) before running the algorithm.
* The **Silhouette Index** is a measure that tells us how well a set of points were clustered. This measure can be used to find the optimal number of groups \\(k\\).
* **Association rules** can find patterns in an unsupervised manner.
* The **Apriori algorithm** is the most well\-known method for finding association rules.
* Before using the *Apriori algorithm*, one needs to format the data as **transactions**.
* A **transaction** is an event that involves a set of items.
Chapter 7 Encoding Behavioral Data
==================================
Behavioral data comes in many different flavors and shapes. Data stored in databases also have different structures (relational, graph, plain text, etc.). As mentioned in chapter [1](intro.html#intro), before training a predictive model, data goes through a series of steps, from data collection to preprocessing (Figure [1\.7](intro.html#fig:pipeline)). During those steps, data is transformed and shaped with the aim of easing the operations in the subsequent tasks. Finally, the data needs to be encoded in a very specific format as expected by the predictive model. For example, decision trees and many other classifier methods expect their input data to be formatted as **feature vectors** while Dynamic Time Warping expects the data to be represented as **timeseries**. Images are usually encoded as \\(n\\)\-dimensional matrices. When it comes to social network analysis, a **graph** is the preferred representation.
So far, I have been mentioning two key terms: **encode** and **representation**. The Cambridge Dictionary[15](#fn15) defines the verb *encode* as:
> *“To put information into a form in which it can be stored, and which can only be read using special technology or knowledge…”.*
while TechTerms.com[16](#fn16) defines it as:
> *“Encoding is the process of converting data from one form to another”.*
Both definitions are similar, but in this chapter’s context, the second one makes more sense. The Cambridge Dictionary[17](#fn17) defines *representation* as:
> *“The way that someone or something is shown or described”.*
TechTerms.com returned no results for that word. From now on, I will use the term *encode* to refer to the process of transforming the data and *representation* as the way data is ‘conceptually’ described. Note the ‘conceptually’ part which means the way we humans think about it. This means that data can have a conceptual representation but that does not necessarily mean it is digitally stored in that way. For example, a physical activity like *walking* captured with a motion sensor can be *conceptually* represented by humans as a feature vector but its actual digital format inside a computer is binary (see Figure [7\.1](representations.html#fig:conceptualRep)).
FIGURE 7\.1: The real world walking activity as a) human conceptual representation and b) computer format.
It is also possible to encode the same data into different representations (see Figure [7\.2](representations.html#fig:imgRepepresentations) for an example) depending on the application or the predictive model to be used. Each representation has its own advantages and limitations (discussed in the following subsections) and they capture different aspects of the real\-world phenomenon. Sometimes it is useful to encode the same data into different representations so more information can be extracted and complemented as discussed in section [3\.4](ensemble.html#multiviewhometasks). In the next sections, several types of representations will be presented along with some ideas of how the same raw data can be encoded into different ones.
FIGURE 7\.2: Example of some raw data encoded into different representations.
7\.1 Feature Vectors
--------------------
From previous chapters, we have already seen how data can be represented as feature vectors, for example, when classifying physical activities (section [2\.3\.1](classification.html#activityRecognition)) or clustering questionnaire answers (section [6\.1\.1](unsupervised.html#studentresponses)). Feature vectors are compact representations of real\-world phenomena or objects, and they are usually modeled in a computer as numeric arrays. Most machine learning algorithms work with feature vectors. Generating them requires knowledge of the application domain. Ideally, the feature vectors should represent the real\-world situation as accurately as possible. We could achieve a perfect mapping by having feature vectors of infinite size; unfortunately, that is infeasible. In practice, small feature vectors are desired because they reduce storage requirements and computational time.
The process of designing and extracting feature vectors from raw data is known as **feature engineering**. This also involves the process of deciding which features to extract. This requires domain knowledge as the features should capture the information needed to solve the problem. Suppose we want to classify if a person is *‘tired’* or *‘not tired’*. We have access to some details about the person like age, height, the activities performed during the last \\(30\\) minutes, and so on. For simplicity, let’s assume we can generate feature vectors of size \\(2\\) and we have two options:
* **Option 1\.** Feature vectors where the first element is *age* and the second element is *height*.
* **Option 2\.** Feature vectors where the first element is the *number of squats* done by the user during the last \\(30\\) minutes and the second element is *heart rate*.
Clearly, for this specific classification problem the second option is more likely to produce better results. The first option may not even contain enough information and will lead the predictive model to produce random predictions. With the second option, the boundaries between classes are more clear (see Figure [7\.3](representations.html#fig:tired)) and classifiers will have an easier time finding them.
FIGURE 7\.3: Two different feature vectors for classifying tired and not tired.
In R, feature vectors are stored as data frames where rows are individual instances and columns are features (a small sketch is shown after the lists below). Some of the advantages and limitations of feature vectors are listed below.
**Advantages:**
* Efficient in terms of memory.
* Most machine learning algorithms support them.
* Efficient in terms of computations compared to other representations.
**Limitations:**
* Are static in the sense that they cannot capture temporal relationships.
* A lot of information and/or temporal relationships may be lost.
* Some features may be redundant leading to decreased performance.
* It requires effort and domain knowledge to extract them.
* They are difficult to plot if the dimension is \\(\> 2\\) unless some dimensionality reduction method is applied such as Multidimensional Scaling (chapter [4](edavis.html#edavis)).
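Returning to the *tired/not tired* example, a minimal sketch of feature vectors stored as an R data frame could look like this (the values are made up for illustration):
```
# Hypothetical feature vectors for the tired/not tired example.
# Each row is an instance; columns are the two features plus the class label.
dataset <- data.frame(squats = c(25, 0, 12),
                      heart.rate = c(130, 65, 95),
                      label = c("tired", "not tired", "tired"))
```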
7\.2 Timeseries
---------------
A timeseries is a sequence of data points ordered in time. We have already worked with timeseries data in previous chapters when classifying physical activities and hand gestures (chapter [2](classification.html#classification)). Timeseries can be multi\-dimensional. For example, typical inertial sensors capture motion forces in three axes. Timeseries analysis methods can be used to find underlying time\-dependent patterns while timeseries forecasting methods aim to predict future data points based on historical data. Timeseries analysis is a very extensive topic and there are a number of books on the topic. For example, the book “Forecasting: Principles and Practice” by Hyndman and Athanasopoulos ([2018](#ref-Hyndman2018)) focuses on timeseries forecasting with R.
In this book we mainly use timeseries data collected from sensors in the context of behavior predictions using machine learning. We have already seen how classification models (like decision trees) can be trained with timeseries converted into feature vectors (section [2\.3\.1](classification.html#activityRecognition)) or by using the raw timeseries data with Dynamic Time Warping (section [2\.5\.1](classification.html#sechandgestures)).
**Advantages:**
* Many problems have this form and can be naturally modeled as timeseries.
* Temporal information is preserved.
* Easy to plot and visualize.
**Limitations:**
* Not all algorithms support timeseries of varying lengths, so one may need to truncate them and/or apply some type of padding (see the sketch after this list).
* Many timeseries algorithms are slower than the ones that work with feature vectors.
* Timeseries can be very long, thus, making computations very slow.
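As a sketch of the truncation/padding workaround mentioned above (the helper function here is made up for illustration):
```
# Hypothetical helper: force a timeseries to a fixed length n
# by truncating it or padding it with a constant value.
fixLength <- function(x, n, pad = 0) {
  if (length(x) >= n) x[1:n] else c(x, rep(pad, n - length(x)))
}
fixLength(c(1, 2, 3, 4, 5), 3) # returns 1 2 3
fixLength(c(1, 2, 3), 5)       # returns 1 2 3 0 0
```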
7\.3 Transactions
-----------------
Sometimes we may want to represent data as transactions, as we did in section [6\.3](unsupervised.html#associationrules). Data represented as transactions are usually intended to be used by association rule mining algorithms (see section [6\.3](unsupervised.html#associationrules)). As a minimum, a transaction has a unique identifier and a set of items. Items can be types of products, symptoms, ingredients, etc. A set of transactions is called a database. Figure [7\.4](representations.html#fig:transactionsTab2) taken from chapter [6](unsupervised.html#unsupervised) shows an example database with \\(10\\) transactions. In this example, items are sets of products from a supermarket.
FIGURE 7\.4: Example database with 10 transactions.
Transactions can include additional information like customer id, date, total cost, etc. Transactions can be coded as logical matrices where rows represent transactions and columns represent items. A `TRUE` value indicates that the particular item is present in the set and `FALSE` that it is not. When the number of possible items is huge and item sets contain only a few of them, this type of matrix is memory\-inefficient: it becomes a *sparse matrix*, that is, a matrix where many of its entries are `FALSE` (or empty, in general). Transactions can also be stored as lists (a list\-based sketch is shown after the advantages and limitations below) or in a relational database such as MySQL. Below are some advantages of representing data as transactions.
**Advantages:**
* Association rule mining algorithms such as Apriori can be used to extract interesting behavior relationships.
* Recommendation systems can be built based on transactional data.
**Limitations:**
* Can be inefficient to store them as logical matrices.
* There is no order associated with the items or temporal information.
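To illustrate the list\-based storage just mentioned, a small database can be kept as a list of item sets and coerced with `arules` (the items here are made up):
```
library(arules)
# A database stored as a list of item sets (one element per transaction).
db <- list(c("bread", "milk"),
           c("bread", "diapers", "beer"),
           c("milk", "diapers", "beer", "cola"))
# arules can coerce the list directly into a transactions object.
trans <- as(db, "transactions")
```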
7\.4 Images
-----------
`timeseries_to_images.R` `plot_activity_images.R`
Images are rich visual representations that capture a lot of information, including spatial relationships. Pictures taken with a camera, drawings, scanned documents, etc., are already examples of images. However, other types of non\-image data can be converted into images. One of the main advantages of analyzing images is that they retain spatial information (distances between pixels). This information is useful for predictive models that take advantage of those properties, such as Convolutional Neural Networks (CNNs), which will be presented in chapter [8](deeplearning.html#deeplearning). CNNs have proven to produce state\-of\-the\-art results in many vision\-based tasks and are very flexible models in the sense that they can be adapted for a variety of applications with little effort.
Before CNNs were introduced by LeCun ([LeCun et al. 1998](#ref-lecun1998gradient)), image classification used to be feature\-based: one first needed to extract hand\-crafted features from images and then use a classifier to make predictions. Also, images can be *flattened* into one\-dimensional arrays where each element represents a pixel (Figure [7\.5](representations.html#fig:flattening)). Those \\(1\\)D arrays can then be used as feature vectors to perform training and inference.
FIGURE 7\.5: Flattening a matrix into a 1D array.
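For instance, in R a matrix is flattened in column\-major order with `as.vector()`:
```
# Flatten a 3 x 3 matrix into a 1D array (column-major order).
m <- matrix(1:9, nrow = 3)
v <- as.vector(m) # 1 2 3 4 5 6 7 8 9
```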
Flattening an image can lead to information loss, and the dimension of the resulting vector can be very high, sometimes limiting its applicability and/or performance. Feature extraction from images can also be a complicated task and is very application dependent. CNNs have changed that. They take raw images (that is, matrices) as input and automatically extract features and perform classification or regression.
What if the data are not represented as images but we still want to take advantage of featureless models like CNNs? Depending on the type of data, it may be possible to encode it as an image. For example, timeseries data can be encoded as an image. In fact, a timeseries can already be considered an image with a height of \\(1\\), but it can also be reshaped into a square matrix.
Take for example the *SMARTPHONE ACTIVITIES* dataset which contains accelerometer data for each of the \\(x\\), \\(y\\), and \\(z\\) axes. The script `timeseries_to_images.R` shows how the acceleration timeseries can be converted to images. A window size of \\(100\\) is defined. Since the sampling rate was \\(20\\) Hz, each window corresponds to \\(100/20 \= 5\\) seconds. For each window, we have \\(3\\) timeseries (\\(x\\),\\(y\\),\\(z\\)). We can reshape each of them into \\(10 \\times 10\\) matrices by arranging the elements into columns. Then, the three matrices can be stacked to form a \\(3\\)D image similar to an RGB image. Figure [7\.6](representations.html#fig:seriesToImage) shows the process of reshaping \\(3\\) timeseries of size \\(9\\) into \\(3 \\times 3\\) matrices to generate an RGB\-like image.
FIGURE 7\.6: Encoding 3 accelerometer timeseries as an image.
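The following is a minimal sketch of that reshaping, assuming `x`, `y`, and `z` are acceleration vectors of length \\(100\\) (the actual code is in `timeseries_to_images.R`):
```
# Hypothetical sketch: reshape three timeseries of length 100 into
# 10 x 10 matrices and stack them like the channels of an RGB image.
side <- 10
img <- array(dim = c(side, side, 3))
img[,,1] <- matrix(x, nrow = side) # elements arranged into columns
img[,,2] <- matrix(y, nrow = side)
img[,,3] <- matrix(z, nrow = side)
```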
The script then moves to the next window with no overlap and repeats the process. Actually, the script saves each image as one line of text. The first \\(100\\) elements correspond to the \\(x\\) axis, the next \\(100\\) to \\(y\\), and the remaining to \\(z\\). Thus each line has \\(300\\) values. Finally, the user id and the corresponding activity label are added at the end. This format will make it easy to read the file and reconstruct the images later on. The resulting file is called `images.txt` and is already included in the `smartphone_activities` dataset folder.
The script `plot_activity_images.R` shows how to read the `images.txt` file and reconstruct the images so we can plot them. Figure [7\.7](representations.html#fig:activitiesImages) shows three different activities plotted as colored images of \\(10 \\times 10\\) pixels. Before generating the plots, the images were normalized between \\(0\\) and \\(1\\).
FIGURE 7\.7: Three activities captured with an accelerometer represented as images.
We can see that the patterns for *‘jogging’* look more “chaotic” compared to the others while the *‘sitting’* activity looks like a plain solid square. Then, we can use those images to train a CNN and perform inference. CNNs will be covered in chapter [8](deeplearning.html#deeplearning) and used to build adaptive models using these activity images.
**Advantages:**
* Spatial relationships can be captured.
* Can be multi\-dimensional. For example \\(3\\)D RGB images.
* Can be efficiently processed with CNNs.
**Limitations:**
* Computational time can be higher than when processing feature vectors. Still, modern hardware and methods allow us to perform operations very efficiently.
* It can take some extra processing to convert non\-image data into images.
7\.5 Recurrence Plots
---------------------
Recurrence plots (RPs) are visual representations similar to images but typically with a single channel (a depth of one). They are encoded as \\(n \\times n\\) matrices, that is, with the same number of rows and columns (a square matrix). Even though these are like a special case of images, I thought it would be worth having them in their own subsection! Just as with images, timeseries can be converted into RPs and then used to train a CNN.
An RP is a visual representation of the time patterns of a dynamical system (for example, a timeseries). RPs were introduced by Eckmann, Kamphorst, and Ruelle ([1987](#ref-eckmann1987recurrence)) and they depict all the times when a trajectory is roughly in the same state. They are visual representations of the dynamics of a system. Biological systems possess behavioral patterns and activity dynamics that can be captured with RPs, for example, the dynamics of ant colonies ([Neves 2017](#ref-Neves2017)).
At this point, you may be curious about what an RP looks like, so let me begin by showing a picture[18](#fn18) of \\(4\\) timeseries with their respective RPs (Figure [7\.8](representations.html#fig:rpExamples)).
FIGURE 7\.8: Four timeseries (top) with their respective RPs (bottom). (Author: Norbert Marwan/Pucicu at German Wikipedia. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
The first RP (leftmost) does not seem to have a clear pattern (white noise) whereas the other three show some patterns like diagonals of different sizes, some square and circular shapes, and so on. RPs can be characterized by small\-scale and large\-scale patterns. Examples of small\-scale patterns are diagonals, horizontal/vertical lines, dots, etc. Large\-scale patterns are called *typology* and they depict the global characteristics of the dynamic system [19](#fn19).
The visual interpretation of RPs requires some experience and is out of the scope of this book. However, they can be used as a visual pattern extraction tool to represent the data and then, in conjunction with machine learning methods like CNNs, used to solve classification problems.
There is an objective way to analyze RPs known as **recurrence quantification analysis (RQA)** ([Zbilut and Webber 1992](#ref-ZBILUT1992)). It introduces several measures like percentage of recurrence points (recurrence rate), percentage of points that form vertical lines (laminarity), average length of diagonal lines, length of the longest diagonal line, etc. Those measures can then be used as features to train classification models.
But how are RPs computed? Well, that is the topic of the next section.
### 7\.5\.1 Computing Recurrence Plots
It’s time to delve into the details of how these mysterious plots are computed. Suppose there is a timeseries with \\(n\\) elements (points). To compute its RP we need to compute the distance between each pair of points. We can store this information in an \\(n \\times n\\) matrix. Let’s call this the *distance matrix* \\(D\\). Then, we need to define a threshold \\(\\epsilon\\). Each entry in \\(D\\) whose distance is less than or equal to the threshold \\(\\epsilon\\) is set to \\(1\\), and to \\(0\\) otherwise.
Formally, a recurrence of a state at time \\(i\\) at a different time \\(j\\) is marked within a two\-dimensional squared matrix with ones and zeros where both axes represent time:
\\\[\\begin{equation}
R\_{i,j} \\left( x \\right) \=
\\begin{cases}
1 \& \\textbf{if } \\lvert\\lvert \\vec{x}\_i \- \\vec{x}\_j \\rvert \\rvert \\leq \\epsilon \\\\
0 \& \\textbf{otherwise},
\\end{cases}
\\tag{7\.1}
\\end{equation}\\]
where \\(\\vec{x}\\) are the states and \\(\\lvert\\lvert \\cdot \\rvert \\rvert\\) is a norm (for example Euclidean distance). \\(R\_{i,j}\\) is the square matrix and will be \\(1\\) if \\(\\vec{x}\_i \\approx \\vec{x}\_j\\) up to an error \\(\\epsilon\\). The \\(\\epsilon\\) is important since systems often do not recur exactly to a previously visited state.
The threshold \\(\\epsilon\\) needs to be set manually, which can be difficult in some situations. If not set properly, the RP can end up having too many ones or too many zeros. If you plan to use RPs as part of an automated process and feed them to a classifier, you can use the distance matrix instead. The advantage is that you don’t need to specify any parameter except for the distance function. The distance matrix can be defined as:
\\\[\\begin{equation} \\label{eq:distance\_matrix}
D\_{i,j} \\left( x \\right) \= \\lvert\\lvert \\vec{x}\_i \- \\vec{x}\_j \\rvert \\rvert
\\end{equation}\\]
which is similar to Equation [(7\.1\)](representations.html#eq:rp) but without the extra step of applying a threshold.
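As a tiny worked example with the Euclidean norm, take the series \\(x \= (1, 2, 1\.1)\\) and \\(\\epsilon \= 0\.2\\). The pairwise distances and the thresholded matrix are:
\\\[
D \= \\begin{pmatrix} 0 \& 1 \& 0\.1 \\\\ 1 \& 0 \& 0\.9 \\\\ 0\.1 \& 0\.9 \& 0 \\end{pmatrix}, \\qquad
R \= \\begin{pmatrix} 1 \& 0 \& 1 \\\\ 0 \& 1 \& 0 \\\\ 1 \& 0 \& 1 \\end{pmatrix}.
\\]
Only pairs of points within \\(\\epsilon\\) of each other (including each point with itself) become ones in \\(R\\).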
**Advantages:**
* RPs capture dynamic patterns of a system.
* They can be used to extract small and large scale patterns.
* Timeseries can be easily encoded as RPs.
* Can be used as input to CNNs for supervised learning tasks.
**Limitations:**
* Computationally intensive since all pairs of distances need to be calculated.
* Their visual interpretation requires experience.
* A threshold needs to be defined and it is not always easy to find the correct value. However, the distance matrix can be used instead.
### 7\.5\.2 Recurrence Plots of Hand Gestures
`recurrence_plots.R`
In this section, I am going to show you how to compute recurrence plots in R using the *HAND GESTURES* dataset. The code can be found in the script `recurrence_plots.R`. First, we need a norm (distance function), for example the Euclidean distance:
```
# Computes Euclidean distance between x and y.
norm2 <- function(x, y){
return(sqrt((x - y)^2))
}
```
The following function computes a distance matrix and a recurrence plot and returns both of them. The first argument `x` is a vector representing a timeseries, `e` is the threshold and `f` is a distance function.
```
rp <- function(x, e, f=norm2){
#x: vector
#e: threshold
#f: norm (distance function)
N <- length(x)
# This will store the recurrence plot.
M <- matrix(nrow=N, ncol=N)
# This will store the distance matrix.
D <- matrix(nrow=N, ncol=N)
for(i in 1:N){
for(j in 1:N){
# Compute the distance between a pair of points.
d <- f(x[i], x[j])
# Store result in D.
# Start filling values from bottom left.
D[N - (i-1), j] <- d
if(d <= e){
M[N - (i-1), j] <- 1
}
else{
M[N - (i-1), j] <- 0
}
}
}
return(list(D=D, RP=M))
}
```
This function first defines two square matrices `M` and `D` to store the recurrence plot and the distance matrix, respectively. Then, it fills the matrices from bottom left to top right. The distance between elements `i` and `j` of the vector is computed and stored directly in `D`. To generate the RP, we check if the distance is less than or equal to the threshold. If that is the case, the corresponding entry in `M` is set to \\(1\\). Finally, both matrices are returned by the function.
Now, we can try our `rp()` function on the *HAND GESTURES* dataset to convert one of the timeseries into a RP. First, we read one of the gesture files. For example, the first gesture *‘1’* from user \\(1\\). We only extract the acceleration from the \\(x\\) axis and store it in variable `x`.
```
df <- read.csv(file.path(datasets_path,
"hand_gestures/1/1_20130703-120056.txt"),
header = F)
x <- df$V1
```
If we plot vector `x` we get something like in Figure [7\.9](representations.html#fig:gesture1X).
```
# Plot vector x.
plot(x, type="l", main="Hand gesture 1", xlab = "time", ylab = "")
```
FIGURE 7\.9: Acceleration of x of gesture 1\.
Now the `rp()` function that we just defined is used to calculate the RP and distance matrix of vector `x`. We set a threshold of \\(0\.5\\) and store the result in `res`.
```
# Compute RP and distance matrix.
res <- rp(x, 0.5, norm2)
```
Let’s first plot the distance matrix stored in `res$D`. The `pheatmap()` function can be used to generate the plot.
```
library(pheatmap)
pheatmap(res$D, main="Distance matrix of gesture 1", cluster_row = FALSE,
cluster_col = FALSE,
legend = F,
color = colorRampPalette(c("white", "black"))(50))
```
FIGURE 7\.10: Distance matrix of gesture 1\.
From Figure [7\.10](representations.html#fig:gesture1D) we can see that the diagonal cells are all white. Those represent values of \\(0\\), the distance between a point and itself. Apart from that, there are no other intuitively recognizable patterns. Now, let’s see what the recurrence plot stored in `res$RP` looks like (Figure [7\.11](representations.html#fig:gesture1rp5)).
```
pheatmap(res$RP, main="RP with threshold = 0.5", cluster_row = FALSE,
cluster_col = FALSE,
legend = F,
color = colorRampPalette(c("white", "black"))(50))
```
FIGURE 7\.11: RP of gesture 1 with a threshold of 0\.5\.
Here, we see that this is kind of an inverted version of the distance matrix. Now the diagonal is black because small distances are encoded as ones. There are also some clusters of points as well as vertical and horizontal line patterns. If we wanted to build a classifier, we would not need to interpret those extraterrestrial images. We could just treat each distance matrix or RP as an image and feed it directly to a CNN (CNNs will be covered in chapter [8](deeplearning.html#deeplearning)).
Finally, we can try to see what happens if we change the threshold. Figure [7\.12](representations.html#fig:rpComp) shows two RPs. In the left one, a small threshold of \\(0\.01\\) was used. Here, many details were lost and only very small distances show up. In the plot to the right, a threshold of \\(1\.5\\) was used. Here, the plot is cluttered with black pixels which makes it difficult to see any patterns. On the other hand, a distance matrix will remain the same regardless of the threshold selection.
FIGURE 7\.12: RP of gesture 1 with two different thresholds.
`shiny_rp.R` This Shiny app allows you to select hand gestures, plot their corresponding distance matrix and recurrence plot, and see how the threshold affects the final result.
7\.6 Bag\-of\-Words
-------------------
The main idea of the Bag\-of\-Words (BoW) encoding is to represent a complex entity as a set of its constituent parts. It is called Bag\-of\-Words because one of the first applications was in natural language processing. Say there is a set of documents about different topics such as medicine, arts, engineering, etc., and you would like to classify them automatically based on their words. In BoW, each document is represented as a table that contains the unique words across all documents and their respective counts for each document. With this representation, one may see that documents about medicine will contain higher counts of words like *treatment*, *diagnosis*, *health*, etc., compared to documents about art or engineering. Figures [7\.13](representations.html#fig:bowExample) and [7\.14](representations.html#fig:bowTab) show the conceptual view and the table view, respectively.
FIGURE 7\.13: Conceptual view of two documents as BoW.
FIGURE 7\.14: Table view of two documents as BoW.
From these representations, it is now easy to build a document classifier. The word\-counts table can be used as an input feature vector. That is, each position in the feature vector represents a word and its value is an integer representing the total count for that word.
Note that in practice documents will differ in length, thus, it is a good idea to use percentages instead of total counts. This can be achieved by dividing each word count by the total number of counts. Also note that some frequent words like ‘the’, ‘is’, ‘it’ can cause problems, so some extra preprocessing is needed. This was a simple example but if you are interested in more advanced text processing techniques I refer you to the book “Text Mining with R: A Tidy Approach” by Silge and Robinson ([2017](#ref-silge2017)).
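A minimal sketch of building such a normalized word\-count vector in R, with a made\-up vocabulary and document:
```
# Hypothetical sketch: word counts for one document over a fixed vocabulary.
vocab <- c("treatment", "diagnosis", "health", "engine")
doc <- c("treatment", "health", "treatment", "diagnosis")
counts <- table(factor(doc, levels = vocab))
bow <- counts / sum(counts) # normalize counts into percentages
```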
BoW can also be used for image classification in complex scenarios. For example when dealing with composed scenes like classrooms, parks, shops, and streets. First, the scene (document) can be decomposed into smaller elements (words) by identifying objects like trees, chairs, cars, cashiers, etc. In this case, instead of bags of words we have bags of objects but the idea is the same. The object identification part can be done in a *supervised* manner where there is already a classifier that assigns labels to objects.
Using a supervised approach can work in some simple cases but is not scalable for more complex ones. *Why?* Because the classifier would need to be trained for each type of object. Furthermore, those types of objects need to be manually defined beforehand. If we want to apply this method on scenes where most of their elements do not have a corresponding label in the object classifier we will be missing a lot of information and will end up having incomplete word count tables.
A possible solution is to use an *unsupervised* approach instead. The image scene can be divided into square (but not necessarily) patches. Conceptually, each patch may represent an independent object (a tree, a chair, etc.). Then, feature extraction can be performed on each patch so that, ultimately, patches are encoded as feature vectors. Again, each feature vector represents an individual possible object inside the complex scene. At this point, those feature vectors do not have a label, so we can’t build the BoW (table of counts) for the whole scene. Then, how are those *unlabeled* feature vectors useful? We could use a pre\-trained classifier to assign them labels, but we would be relying on the supervised approach along with its aforementioned limitations. Instead, we can use an *unsupervised* method, for example, *k\-means*, which was presented in chapter [6](unsupervised.html#unsupervised)!
We can cluster all the *unlabeled* feature vectors into \\(k\\) groups, where \\(k\\) is the number of possible unique labels. After the clustering, we can compute the centroid of each group. To assign a label to an *unlabeled feature vector*, we can compute the closest centroid and use its id as the label. The id of each centroid can be an integer. Intuitively, similar feature vectors will end up in the same group. For example, there could be a group of objects that look like *chairs*, another for objects that look like *cars*, and so on. It may happen that elements in the same group do not look similar to the human eye, but they are similar in the feature space, and the objects’ shapes inside the groups may not make sense at all to a human. If the objective is to classify the complex scene, then we do not necessarily need to understand the individual objects, nor do they need to have a corresponding mapping to a real\-world object.
Once the feature vectors are labeled, we can build the word\-count table but instead of having ‘meaningful’ words, the entries will be ids with their corresponding counts. As you might have guessed, one limitation is that we do not know how many clusters (labels) there should be for a given problem. One approach is to try out for different values of \\(k\\) and use the one that optimizes your performance metric of interest.
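A minimal sketch of the labeling step described above, assuming `X` is a matrix of unlabeled feature vectors (one per row) and `centroids` holds one centroid per row:
```
# Hypothetical sketch: label each feature vector with the id (row number)
# of its closest centroid, using the Euclidean distance.
assignToCentroids <- function(X, centroids) {
  apply(X, 1, function(v) {
    dists <- apply(centroids, 1, function(ctr) sqrt(sum((v - ctr)^2)))
    which.min(dists)
  })
}
```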
But what does this BoW thing have to do with behavior? Well, we can use this method to decompose complex behaviors into simpler ones and encode them as BoW, as we will see in the next subsection for complex activities analysis.
**Advantages**
* Able to represent complex situations/objects/etc., by decomposing them into simpler elements.
* The resulting BoW can be very efficient and effective for classification tasks.
* Can be used in several domains including text, computer vision, sensor data, and so on.
* The BoW can be constructed in an unsupervised manner.
**Limitations**
* Temporal and spatial information is not preserved.
* It may require some effort to define how to generate the words.
* There are cases where one needs to find the optimal number of words.
### 7\.6\.1 BoW for Complex Activities.
`bagwords/bow_functions.R` `bagwords/bow_run.R`
So far, I have been talking about BoW applications for text and images. In this section, I will show you how to decompose **complex activities** from accelerometer data into simpler activities and encode them as BoW. In chapters [2](classification.html#classification) and [3](ensemble.html#ensemble), we trained supervised models for *simple* activity recognition. Those were activities like *walking*, *jogging*, *standing*, etc. For those, it is sufficient to divide the data into windows spanning a couple of seconds in order to infer the labels. On the other hand, the duration of a *complex* activity is longer, and it is composed of many simple activities. One example is the activity **shopping**. When we are shopping we perform many different activities, including *walking*, *taking groceries*, *paying*, *standing while looking at the stands*, and so on. Another example is **commuting**. When we commute, we need to walk but also take the train, or drive, or cycle.
Using the same approach as for simple activity classification may not work on complex activities. Representing a complex activity using fixed\-size windows can cause conflicts. For example, a window may cover a time span when the user was *walking*, but *walking* can be present in different types of complex activities, so that window alone does not carry enough information to know which complex activity was taking place at that time. This is where BoW comes into play. If we represent a complex activity as a bag of *simple activities*, then a classifier will have an easier time differentiating between classes. For instance, when **exercising**, the frequencies (counts) of high\-intensity activities (like running or jogging) will be higher compared to when someone is shopping.
In practice, it would be very tedious to manually label all possible simple activities to form the BoW. Instead, we will use the unsupervised approach discussed in the previous section to automatically label the simple activities so we only need to manually label the complex ones.
Here, I will use the *COMPLEX ACTIVITIES* dataset which consists of five complex activities: *‘commuting’*, *‘working’*, *‘being at home’*, *‘shopping’* and *‘exercising’*. The duration of the activities varies from some minutes to a couple of hours. Accelerometer data was collected at \\(50\\) Hz with a cellphone placed on the user’s belt. The dataset has \\(80\\) accelerometer files, each representing a complex activity.
The task is to go from the raw accelerometer data of the complex activity to a BoW representation where each word will represent a simple activity. The overall steps are as follows:
1. Divide the raw data into small fixed\-length windows and generate feature vectors from them. Intuitively, these are the simple activities.
2. Cluster the feature vectors.
3. Label the vectors by assigning them to the closest centroid.
4. Build the word\-count table.
FIGURE 7\.15: BoW steps. From raw signal to BoW table.
Figure [7\.15](representations.html#fig:bowProcess) shows the overall steps graphically. All the functions to perform the above steps are implemented in `bow_functions.R`. The functions are called in the appropriate order in `bow_run.R`.
First of all, and to avoid overfitting, we need to hold out an independent set of instances. These instances will be used to generate the clusters and their respective centroids. The dataset is already divided into a train and test set. The train set contains \\(13\\) instances out of the \\(80\\). The remaining \\(67\\) are assigned to the test set.
In the first step, we need to extract the feature vectors from the raw data. This is implemented in the function `extractSimpleActivities()`. This function divides the raw data of each file into fixed\-length windows of size \\(150\\) which corresponds to \\(3\\) seconds. Each window can be thought of as a simple activity. For each window, it extracts \\(14\\) features like mean, standard deviation, correlation between axes, etc. The output is stored in the folder `simple_activities/`. Each file corresponds to one of the complex activities and each row in a file is a feature vector (simple activity). **At this time the feature vectors (simple activities) are unlabeled.** Notice that in the script `bow_run.R` the function is called twice:
```
# Extract simple activities for train set.
extractSimpleActivities(train = TRUE)
# Extract simple activities for test set (may take some minutes).
extractSimpleActivities(train = FALSE)
```
This is because we divided the data into train and test sets. So we need to extract the features from both sets by setting the `train` parameter accordingly.
The second step consists of clustering the extracted feature vectors. To avoid overfitting, this step is only performed on the train set. The function `clusterSimpleActivities()` implements this step. The feature vectors are grouped into \\(15\\) groups. This can be changed by setting `constants$wordsize <- 15` to some other value. The function stores all feature vectors from all files in a single data frame and runs \\(k\\)\-means. Finally, the resulting centroids are saved in the text file `clustering/centroids.txt` inside the train set directory.
The next step is to label each feature vector (simple activity) by assigning it to its closest centroid. The function `assignSimpleActivitiesToCluster()` reads the centroids from the text file, and for each simple activity in the test set it finds the closest centroid using the Euclidean distance. The label (an integer from \\(1\\) to \\(15\\)) of the closest centroid is assigned and the resulting files are saved in the `labeled_activities/` directory. Each file contains the assigned labels (integers) for the corresponding feature vectors file in the `simple_activities/` directory. Thus, if a file inside `simple_activities/` has \\(100\\) feature vectors then, its corresponding file in `labeled_activities/` should have \\(100\\) labels.
In the last step, the function `convertToHistogram()` will generate the bag of words from the labeled activities. The BoW are stored as histograms (encoded as vectors) with each element representing a label and its corresponding counts. In this case, the labels are \\(w1\..w15\\). The \\(w\\) stands for word and was only appended for clarity to show that this is a label. This function will convert the counts into percentages (normalization) in case we want to perform classification, that is, the percentage of time that each word (simple activity) occurred during the entire complex activity. The resulting `histograms/histograms.csv` file contains the BoW as one histogram per row. One per each complex activity. The first column is the complex activity’s label in text format.
Figures [7\.16](representations.html#fig:complexWorking) and [7\.17](representations.html#fig:complexExercising) show the histogram for one instance of *‘working’* and *‘exercising’*. The x\-axis shows the labels of the simple activities and the y\-axis their relative frequencies.
FIGURE 7\.16: Histogram of working activity.
FIGURE 7\.17: Histogram of exercising activity.
Here, we can see that the *‘working’* activity is composed mainly by the simple activities *w1*, *w3*, and *w12*. The *exercising* activity is mainly composed of *w15* and *w14* which perhaps are high\-intensity movements like jogging or running.
Once the complex activities are encoded as BoW (histograms), one could train a classifier using the histogram frequencies as features.
7\.7 Graphs
-----------
Graphs are one of the most general data structures (and my favorite one). The two basic components of a graph are its **vertices** and **edges**. Vertices are also called **nodes** and edges are also called **arcs**. Vertices are connected by edges. Figure [7\.18](representations.html#fig:graphTypes) shows three different types of graphs. Graph (a) is an undirected graph that consists of \\(3\\) vertices and \\(3\\) edges. Graph (b) is a directed graph, that is, its edges have a direction. Graph (c) is a weighted directed graph because its edges have a direction and they also have an associated weight.
FIGURE 7\.18: Three different types of graphs.
Weights can represent anything, for example, distances between cities or the number of messages sent between devices. In the previous figure, the vertices also have labels (integers here, but they could be strings). In general, vertices and edges can have any number of attributes, not just weights and/or labels. Many data structures like binary trees and lists are graphs *with constraints*. For example, a list is a graph in which all vertices are connected as a sequence: a\->b\->c. Trees are graphs with the constraint that there is only one root node and nodes can only have edges to their children. Graphs are very useful for representing many types of real\-world things like interactions, social relationships, geographical locations, the world wide web, and so on.
There are two main ways to encode a graph. The first one is as an **adjacency list**. An adjacency list consists of a list of tuples per node. The tuples represent edges. The first element of a tuple indicates the target node and the second element the weight of the edge. Figure [7\.19](representations.html#fig:graphOptions)\-b shows the adjacency list representation of the corresponding weighted directed graph in the same figure.
The second main way to encode a graph is as an **adjacency matrix**. This is a square \\(n\\times n\\) matrix where \\(n\\) is the number of nodes. Edges are represented as entries in the matrix. If there is an edge between node \\(a\\) and node \\(b\\), the corresponding cell contains the edge’s weight where rows represent the source nodes and columns the destination nodes. Otherwise, it contains a \\(0\\) or just an empty value. Figure [7\.19](representations.html#fig:graphOptions)\-c shows the corresponding adjacency matrix. The disadvantage of the adjacency matrix is that for sparse graphs (many nodes and few edges), a lot of space is wasted. In practice, this can be overcome by using a sparse matrix implementation.
FIGURE 7\.19: Different ways to store a graph.
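To experiment with these two representations, here is a minimal sketch using the `igraph` package (which is also used in the next subsection); the vertices, edges, and weights are made up:

```
library(igraph)

# A made-up weighted directed graph with three vertices.
edges <- data.frame(from   = c("1", "2", "3"),
                    to     = c("2", "3", "1"),
                    weight = c(5, 2, 7))
g <- graph_from_data_frame(edges, directed = TRUE)

# Adjacency list view: the outgoing neighbors of each vertex.
as_adj_list(g, mode = "out")

# Adjacency matrix view: weights stored as matrix entries.
as_adjacency_matrix(g, attr = "weight")
```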
**Advantages:**
* Many real\-world situations can be naturally represented as graphs.
* Some partial order is preserved, since edges encode which elements are related to which.
* Specialized graph analytics can be performed to gain insights and understand the data. See for example the book by Samatova et al. ([2013](#ref-samatova2013)).
* Can be plotted and different visual properties can be tuned to convey information such as edge width and colors, vertex size and color, distance between nodes, etc.
**Limitations:**
* Some graph analytic algorithms are computationally demanding.
* It can be difficult to use graphs to solve classification problems.
* It is not always clear if the data can be represented as a graph.
### 7\.7\.1 Complex Activities as Graphs
`plot_graphs.R`
In section 7\.6, it was shown how complex activities can be represented as Bag\-of\-Words by decomposing them into simpler ones. The BoW is composed of the simple activities’ counts (frequencies). In the process of building the BoW, some intermediate text files stored in `labeled_activities/` were generated. These files contain the sequence of simple activities (their ids as integers) that constitute the complex activity. From these sequences, histograms were generated and, in doing so, the temporal order was lost.
One thing we can do is build a graph where vertices represent simple activities and edges represent the transitions between them. For instance, given a sequence of simple activity ids like \\(3,2,2,4\\), we can represent it as a graph with \\(3\\) vertices and \\(3\\) edges, one vertex per distinct activity. The first edge would go from vertex \\(3\\) to vertex \\(2\\), the next one from vertex \\(2\\) to itself (a self\-loop), and the last one from vertex \\(2\\) to vertex \\(4\\). In this way, a graph captures the transitions between consecutive simple activities.
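Counting repeated transitions gives the edge weights. Here is a tiny sketch of that counting step on the example sequence (illustrative only; the book’s actual implementation is `ids.to.graph()`, described next):

```
# Count transitions between consecutive ids; counts become edge weights.
ids <- c(3, 2, 2, 4)
transitions <- data.frame(from = head(ids, -1), to = tail(ids, -1))
aggregate(list(weight = rep(1, nrow(transitions))),
          by = transitions, FUN = sum)
```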
The script `plot_graphs.R` implements a function named `ids.to.graph()` that reads the sequence files from `labeled_activities/` and converts them into weighted directed graphs. The weight of the edge \\((a,b)\\) is equal to the total number of transitions from vertex \\(a\\) to vertex \\(b\\). The script uses the `igraph` package ([Csardi and Nepusz 2006](#ref-igraph)) to store and plot the resulting graphs. The `ids.to.graph()` function receives as its first argument the sequence of ids. Its second argument indicates whether the edge weights should be normalized or not. If normalized, the sum of all weights will be \\(1\\).
The following code snippet reads one of the sequence files, converts it into a graph, and plots the graph.
```
datapath <- "../labeled_activities/"
# Select one of the 'work' complex activities.
filename <- "2_20120606-111732.txt"
# Read it as a data frame.
df <- read.csv(paste0(datapath, filename), header = F)
# Convert the sequence of ids into an igraph graph.
g <- ids.to.graph(df$V1, relative.weights = T)
# Plot the result.
set.seed(12345)
plot(g, vertex.label.cex = 0.7,
     edge.arrow.size = 0.2,
     edge.arrow.width = 1,
     edge.curved = 0.1,
     edge.width = E(g)$weight * 8,
     edge.label = round(E(g)$weight, digits = 3),
     edge.label.cex = 0.4,
     edge.color = "orange",
     edge.label.color = "black",
     vertex.color = "skyblue")
```
FIGURE 7\.20: Complex activity ‘working’ plotted as a graph. Nodes are simple activities and edges transitions between them.
Figure [7\.20](representations.html#fig:graphActivity) shows the resulting plot. The plot can be customized to change the vertex and edge color, size, curvature, etc. For more details please read the `igraph` package documentation.
The width of each edge is proportional to its weight. For instance, transitions from simple activity \\(3\\) to itself are very frequent (\\(53\.2\\%\\) of the time) for the *‘work’* complex activity, but transitions from \\(8\\) to \\(4\\) are very infrequent. Note that with this graph representation, some temporal dependencies are preserved but the complete sequence order is lost. Still, this captures more information than BoW: the relationships between consecutive simple activities are preserved.
It is also possible to get the adjacency matrix with the method `as_adjacency_matrix()`.
```
as_adjacency_matrix(g)
#> 6 x 6 sparse Matrix of class "dgCMatrix"
#> 1 11 12 3 4 8
#> 1 1 1 . 1 . .
#> 11 . 1 1 1 1 .
#> 12 . 1 . . . .
#> 3 1 1 . 1 . 1
#> 4 . . . 1 1 .
#> 8 . . . 1 1 .
```
In this matrix, there is a \\(1\\) if the edge is present and a ‘.’ if there is no edge. However, this adjacency matrix does not contain information about the weights. We can print the adjacency matrix with weights by specifying `attr = "weight"`.
```
as_adjacency_matrix(g, attr = "weight")
#> 6 x 6 sparse Matrix of class "dgCMatrix"
#> 1 11 12 3 4 8
#> 1 0.06066946 0.001046025 . 0.023012552 . .
#> 11 . 0.309623431 0.00209205 0.017782427 0.001046025 .
#> 12 . 0.002092050 . . . .
#> 3 0.02405858 0.017782427 . 0.532426778 . 0.00209205
#> 4 . . . 0.002092050 0.002092050 .
#> 8 . . . 0.001046025 0.001046025 .
```
The adjacency matrices can then be used to train a classifier. Since many classifiers expect one\-dimensional feature vectors rather than matrices, we can flatten each matrix, as sketched below. Training classifiers on both representations is left as an exercise for the reader. Which representation produces better classification results (adjacency matrix or BoW)?
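A minimal sketch of the flattening step, continuing with the graph `g` from above. This assumes all graphs are built over the same fixed set of vertices, so that positions in the flattened vectors align across instances:

```
# Flatten the weighted adjacency matrix into a 1D feature vector.
m <- as.matrix(as_adjacency_matrix(g, attr = "weight"))
featvec <- as.vector(m)   # column-major flattening; length is n*n
```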
The book “Practical graph mining with R” ([Samatova et al. 2013](#ref-samatova2013)) is a good source to learn more about graph analytics with R.
7\.8 Summary
------------
Depending on the problem at hand, the data can be encoded in different forms. Representing data in a particular way can simplify the problem\-solving process and enable the application of specialized algorithms. This chapter presented different ways in which data can be encoded, along with some of their advantages and disadvantages.
* **Feature vectors** are fixed\-size arrays that capture the properties of an instance. This is the most common form of data representation in machine learning.
* Most machine learning algorithms expect their inputs to be encoded as feature vectors.
* **Transactions** are another way in which data can be encoded. This representation is appropriate for association rule mining algorithms.
* Data can also be represented as **images**. Algorithms like CNNs (covered in chapter [8](deeplearning.html#deeplearning)) can work directly on images.
* The **Bag\-of\-Words** representation is useful when we want to model a complex behavior as a composition of simpler ones.
* A **graph** is a general data structure composed of *vertices* and *edges* and is used to model relationships between entities.
* Sometimes it is possible to convert data into multiple representations. For example, timeseries can be converted into images, recurrence plots, etc.
7\.1 Feature Vectors
--------------------
From previous chapters, we have already seen how data can be represented as feature vectors, for example, when classifying physical activities (section [2\.3\.1](classification.html#activityRecognition)) or clustering questionnaire answers (section [6\.1\.1](unsupervised.html#studentresponses)). Feature vectors are compact representations of real\-world phenomena or objects, and they are usually modeled in a computer as numeric arrays. Most machine learning algorithms work with feature vectors. Generating feature vectors requires knowledge of the application domain. Ideally, the feature vectors should represent the real\-world situation as accurately as possible. We could achieve a good mapping by having feature vectors of infinite size; unfortunately, that is infeasible. In practice, small feature vectors are desired because they reduce storage requirements and computational time.
The process of designing and extracting feature vectors from raw data is known as **feature engineering**. This also involves the process of deciding which features to extract. This requires domain knowledge as the features should capture the information needed to solve the problem. Suppose we want to classify if a person is *‘tired’* or *‘not tired’*. We have access to some details about the person like age, height, the activities performed during the last \\(30\\) minutes, and so on. For simplicity, let’s assume we can generate feature vectors of size \\(2\\) and we have two options:
* **Option 1\.** Feature vectors where the first element is *age* and the second element is *height*.
* **Option 2\.** Feature vectors where the first element is the *number of squats* done by the user during the last \\(30\\) minutes and the second element is *heart rate*.
Clearly, for this specific classification problem the second option is more likely to produce better results. The first option may not even contain enough information and will lead the predictive model to produce random predictions. With the second option, the boundaries between classes are more clear (see Figure [7\.3](representations.html#fig:tired)) and classifiers will have an easier time finding them.
FIGURE 7\.3: Two different feature vectors for classifying tired and not tired.
In R, feature vectors are stored as data frames where rows are individual instances and columns are features.
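For example, the Option 2 vectors for the hypothetical *‘tired’* classifier could be stored as follows (a minimal sketch; all values are made up):

```
# Made-up feature vectors for the 'tired' example (Option 2).
dataset <- data.frame(squats     = c(0, 25, 3),
                      heart.rate = c(65, 130, 72),
                      label      = c("not tired", "tired", "not tired"))
```

Some of the advantages and limitations of feature vectors are listed below.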
**Advantages:**
* Efficient in terms of memory.
* Most machine learning algorithms support them.
* Efficient in terms of computations compared to other representations.
**Limitations:**
* Are static in the sense that they cannot capture temporal relationships.
* A lot of information and/or temporal relationships may be lost.
* Some features may be redundant leading to decreased performance.
* It requires effort and domain knowledge to extract them.
* They are difficult to plot if the dimension is \\(\> 2\\) unless some dimensionality reduction method is applied such as Multidimensional Scaling (chapter [4](edavis.html#edavis)).
7\.2 Timeseries
---------------
A timeseries is a sequence of data points ordered in time. We have already worked with timeseries data in previous chapters when classifying physical activities and hand gestures (chapter [2](classification.html#classification)). Timeseries can be multi\-dimensional. For example, typical inertial sensors capture motion forces in three axes. Timeseries analysis methods can be used to find underlying time\-dependent patterns, while timeseries forecasting methods aim to predict future data points based on historical data. Timeseries analysis is a very extensive topic and there are a number of books devoted to it. For example, the book “Forecasting: Principles and Practice” by Hyndman and Athanasopoulos ([2018](#ref-Hyndman2018)) focuses on timeseries forecasting with R.
In this book we mainly use timeseries data collected from sensors in the context of behavior predictions using machine learning. We have already seen how classification models (like decision trees) can be trained with timeseries converted into feature vectors (section [2\.3\.1](classification.html#activityRecognition)) or by using the raw timeseries data with Dynamic Time Warping (section [2\.5\.1](classification.html#sechandgestures)).
**Advantages:**
* Many problems have this form and can be naturally modeled as timeseries.
* Temporal information is preserved.
* Easy to plot and visualize.
**Limitations:**
* Not all algorithms support timeseries of varying lengths, so one needs to truncate them and/or apply some type of padding (see the sketch after this list).
* Many timeseries algorithms are slower than the ones that work with feature vectors.
* Timeseries can be very long, thus, making computations very slow.
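A minimal sketch of the truncation/padding step mentioned above (the function name and the zero\-padding strategy are illustrative choices):

```
# Pad (with zeros) or truncate a timeseries x to a fixed length L.
fix.length <- function(x, L) {
  if (length(x) >= L) x[1:L] else c(x, rep(0, L - length(x)))
}
fix.length(c(1, 2, 3), 5)   # returns 1 2 3 0 0
```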
7\.3 Transactions
-----------------
Sometimes we may want to represent data as transactions, as we did in section [6\.3](unsupervised.html#associationrules). Data represented as transactions are usually intended to be used by association rule mining algorithms (see section [6\.3](unsupervised.html#associationrules)). As a minimum, a transaction has a unique identifier and a set of items. Items can be types of products, symptoms, ingredients, etc. A set of transactions is called a database. Figure [7\.4](representations.html#fig:transactionsTab2) taken from chapter [6](unsupervised.html#unsupervised) shows an example database with \\(10\\) transactions. In this example, items are sets of products from a supermarket.
FIGURE 7\.4: Example database with 10 transactions.
Transactions can include additional information like customer id, date, total cost, etc. Transactions can be encoded as logical matrices where rows represent transactions and columns represent items. A `TRUE` value indicates that the particular item is present in the set and `FALSE` indicates that it is not. When the number of possible items is huge and item sets contain few items, this type of matrix can be memory\-inefficient. This is called a *sparse matrix*, that is, a matrix where many of its entries are `FALSE` (or empty, in general). Transactions can also be stored as lists or in a relational database such as MySQL.
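As a minimal sketch (the items and transactions are made up), a small database could be encoded like this:

```
# Three transactions over four possible items, as a logical matrix.
items <- c("milk", "bread", "eggs", "soda")
db <- rbind(t1 = c(TRUE,  TRUE,  FALSE, FALSE),
            t2 = c(FALSE, TRUE,  TRUE,  FALSE),
            t3 = c(TRUE,  FALSE, FALSE, TRUE))
colnames(db) <- items
db
```

Below are some advantages and limitations of representing data as transactions.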
**Advantages:**
* Association rule mining algorithms such as Apriori can be used to extract interesting behavior relationships.
* Recommendation systems can be built based on transactional data.
**Limitations:**
* They can be memory\-inefficient when stored as logical matrices.
* There is no order associated with the items or temporal information.
7\.4 Images
-----------
`timeseries_to_images.R` `plot_activity_images.R`
Images are rich visual representations that capture a lot of information, including spatial relationships. Pictures taken from a camera, drawings, scanned documents, etc., already are examples of images. However, other types of non\-image data can be converted into images. One of the main advantages of analyzing images is that they retain spatial information (distance between pixels). This information is useful for predictive models that take advantage of those properties, such as Convolutional Neural Networks (CNNs), which will be presented in chapter [8](deeplearning.html#deeplearning). CNNs have proven to produce state\-of\-the\-art results in many vision\-based tasks and are very flexible models in the sense that they can be adapted for a variety of applications with little effort.
Before CNNs were introduced by LeCun et al. ([LeCun et al. 1998](#ref-lecun1998gradient)), image classification used to be feature\-based: one first needed to extract hand\-crafted features from images and then use a classifier to make predictions. Also, images can be *flattened* into one\-dimensional arrays where each element represents a pixel (Figure [7\.5](representations.html#fig:flattening)). Then, those \\(1\\)D arrays can be used as feature vectors to perform training and inference.
FIGURE 7\.5: Flattening a matrix into a 1D array.
Flattening an image can lead to information loss, and the dimension of the resulting vector can be very high, sometimes limiting its applicability and/or performance. Feature extraction from images can also be a complicated task and is very application\-dependent. CNNs have changed that. They take raw images (that is, matrices) as input and automatically extract features and perform classification or regression.
What if the data are not represented as images but we still want to take advantage of featureless models like CNNs? Depending on the type of data, it may be possible to encode it as an image. For example, timeseries data can be encoded as an image. In fact, a timeseries can already be considered an image with a height of \\(1\\) but they can also be reshaped into square matrices.
Take for example the *SMARTPHONE ACTIVITIES* dataset which contains accelerometer data for each of the \\(x\\), \\(y\\), and \\(z\\) axes. The script `timeseries_to_images.R` shows how the acceleration timeseries can be converted to images. A window size of \\(100\\) is defined. Since the sampling rate was \\(20\\) Hz, each window corresponds to \\(100/20 \= 5\\) seconds. For each window, we have \\(3\\) timeseries (\\(x\\),\\(y\\),\\(z\\)). We can reshape each of them into \\(10 \\times 10\\) matrices by arranging the elements into columns. Then, the three matrices can be stacked to form a \\(3\\)D image similar to an RGB image. Figure [7\.6](representations.html#fig:seriesToImage) shows the process of reshaping \\(3\\) timeseries of size \\(9\\) into \\(3 \\times 3\\) matrices to generate an RGB\-like image.
FIGURE 7\.6: Encoding 3 accelerometer timeseries as an image.
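A minimal sketch of this reshaping, with made\-up numbers standing in for acceleration values:

```
# Reshape three axes of length 9 into 3x3 matrices and stack them into
# an RGB-like 3D array. Note that matrix() fills column-wise by default.
x <- 1:9; y <- 10:18; z <- 19:27   # made-up readings for the x, y, z axes
mx <- matrix(x, nrow = 3)
my <- matrix(y, nrow = 3)
mz <- matrix(z, nrow = 3)
img <- array(c(mx, my, mz), dim = c(3, 3, 3))   # 3x3 'pixels', 3 channels
```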
The script then moves to the next window with no overlap and repeats the process. Each image is saved as one line of text: the first \\(100\\) elements correspond to the \\(x\\) axis, the next \\(100\\) to \\(y\\), and the remaining to \\(z\\). Thus, each line has \\(300\\) values. Finally, the user id and the corresponding activity label are added at the end. This format will make it easy to read the file and reconstruct the images later on. The resulting file is called `images.txt` and is already included in the `smartphone_activities` dataset folder.
The script `plot_activity_images.R` shows how to read the `images.txt` file and reconstruct the images so we can plot them. Figure [7\.7](representations.html#fig:activitiesImages) shows three different activities plotted as colored images of \\(10 \\times 10\\) pixels. Before generating the plots, the images were normalized between \\(0\\) and \\(1\\).
FIGURE 7\.7: Three activities captured with an accelerometer represented as images.
We can see that the patterns for *‘jogging’* look more “chaotic” compared to the others while the *‘sitting’* activity looks like a plain solid square. Then, we can use those images to train a CNN and perform inference. CNNs will be covered in chapter [8](deeplearning.html#deeplearning) and used to build adaptive models using these activity images.
**Advantages:**
* Spatial relationships can be captured.
* Can be multi\-dimensional. For example \\(3\\)D RGB images.
* Can be efficiently processed with CNNs.
**Limitations:**
* Computational time can be higher than when processing feature vectors. Still, modern hardware and methods allow us to perform operations very efficiently.
* It can take some extra processing to convert non\-image data into images.
7\.5 Recurrence Plots
---------------------
Recurrence plots (RPs) are visual representations similar to images, but they typically have a depth of one (a single channel). They are encoded as \\(n \\times n\\) matrices, that is, with the same number of rows and columns (a square matrix). Even though they are like a special case of images, I thought it would be worth having them in their own subsection! Just as with images, timeseries can be converted into RPs and then used to train a CNN.
A RP is a visual representation of time patterns of dynamical systems (for example, timeseries). RPs were introduced by Eckmann, Kamphorst, and Ruelle ([1987](#ref-eckmann1987recurrence)) and they depict all the times when a trajectory is roughly in the same state. They are visual representations of the dynamics of a system. Biological systems possess behavioral patterns and activity dynamics that can be captured with RPs, for example, the dynamics of ant colonies ([Neves 2017](#ref-Neves2017)).
At this point, you may be curious about what an RP looks like. So let me begin by showing a picture[18](#fn18) of \\(4\\) timeseries with their respective RPs (Figure [7\.8](representations.html#fig:rpExamples)).
FIGURE 7\.8: Four timeseries (top) with their respective RPs (bottom). (Author: Norbert Marwan/Pucicu at German Wikipedia. Source: Wikipedia (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
The first RP (leftmost) does not seem to have a clear pattern (white noise) whereas the other three show some patterns like diagonals of different sizes, some square and circular shapes, and so on. RPs can be characterized by small\-scale and large\-scale patterns. Examples of small\-scale patterns are diagonals, horizontal/vertical lines, dots, etc. Large\-scale patterns are called *typology* and they depict the global characteristics of the dynamic system [19](#fn19).
The visual interpretation of RPs requires some experience and is out of the scope of this book. However, they can be used as a visual pattern extraction tool to represent the data and then, in conjunction with machine learning methods like CNNs, used to solve classification problems.
There is an objective way to analyze RPs known as **recurrence quantification analysis (RQA)** ([Zbilut and Webber 1992](#ref-ZBILUT1992)). It introduces several measures like percentage of recurrence points (recurrence rate), percentage of points that form vertical lines (laminarity), average length of diagonal lines, length of the longest diagonal line, etc. Those measures can then be used as features to train classification models.
But how are RPs computed? Well, that is the topic of the next section.
### 7\.5\.1 Computing Recurrence Plots
It’s time to delve into the details of how these mysterious plots are computed. Suppose there is a timeseries with \\(n\\) elements (points). To compute its RP we need to compute the distance between each pair of points. We can store this information in an \\(n \\times n\\) matrix. Let’s call this a *distance matrix* \\(D\\). Then, we need to define a threshold \\(\\epsilon\\). Each entry in \\(D\\) that is less than or equal to the threshold \\(\\epsilon\\) is set to \\(1\\), and to \\(0\\) otherwise.
Formally, a recurrence of a state at time \\(i\\) at a different time \\(j\\) is marked within a two\-dimensional squared matrix with ones and zeros where both axes represent time:
\\\[\\begin{equation}
R\_{i,j} \\left( x \\right) \=
\\begin{cases}
1 \& \\textbf{if } \\lvert\\lvert \\vec{x}\_i \- \\vec{x}\_j \\rvert \\rvert \\leq \\epsilon \\\\
0 \& \\textbf{otherwise},
\\end{cases}
\\tag{7\.1}
\\end{equation}\\]
where \\(\\vec{x}\\) are the states and \\(\\lvert\\lvert \\cdot \\rvert \\rvert\\) is a norm (for example Euclidean distance). \\(R\_{i,j}\\) is the square matrix and will be \\(1\\) if \\(\\vec{x}\_i \\approx \\vec{x}\_j\\) up to an error \\(\\epsilon\\). The \\(\\epsilon\\) is important since systems often do not recur exactly to a previously visited state.
The threshold \\(\\epsilon\\) needs to be set manually, which can be difficult in some situations. If not set properly, the RP can end up having excessive ones or zeros. If you plan to use RPs as part of an automated process and feed them to a classifier, you can use the distance matrix instead. The advantage is that you don’t need to specify any parameter except for the distance function. The distance matrix can be defined as:
\\\[\\begin{equation} \\label{eq:distance\_matrix}
D\_{i,j} \\left( x \\right) \= \\lvert\\lvert \\vec{x}\_i \- \\vec{x}\_j \\rvert \\rvert
\\end{equation}\\]
which is similar to Equation [(7\.1\)](representations.html#eq:rp) but without the extra step of applying a threshold.
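As a quick, self\-contained illustration of both definitions, here is a vectorized sketch on a toy signal (the timeseries and \\(\\epsilon\\) are arbitrary); the step\-by\-step implementation used in this book is presented in the next section:

```
# Toy timeseries: build the distance matrix, then threshold it to get the RP.
x <- sin(seq(0, 4 * pi, length.out = 50))
D <- abs(outer(x, x, "-"))   # pairwise distances |x_i - x_j|
eps <- 0.5
RP <- (D <= eps) * 1         # 1 where states recur, 0 otherwise
mean(RP)                     # recurrence rate: fraction of recurrent points
```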
**Advantages:**
* RPs capture dynamic patterns of a system.
* They can be used to extract small and large scale patterns.
* Timeseries can be easily encoded as RPs.
* Can be used as input to CNNs for supervised learning tasks.
**Limitations:**
* Computationally intensive since all pairs of distances need to be calculated.
* Their visual interpretation requires experience.
* A threshold needs to be defined and it is not always easy to find the correct value. However, the distance matrix can be used instead.
### 7\.5\.2 Recurrence Plots of Hand Gestures
`recurrence_plots.R`
In this section, I am going to show you how to compute recurrence plots in R using the *HAND GESTURES* dataset. The code can be found in the script `recurrence_plots.R`. First, we need a norm (distance function), for example the Euclidean distance:
```
# Computes Euclidean distance between x and y.
norm2 <- function(x, y){
return(sqrt((x - y)^2))
}
```
The following function computes a distance matrix and a recurrence plot and returns both of them. The first argument `x` is a vector representing a timeseries, `e` is the threshold and `f` is a distance function.
```
rp <- function(x, e, f=norm2){
  # x: vector
  # e: threshold
  # f: norm (distance function)
  N <- length(x)
  # This will store the recurrence plot.
  M <- matrix(nrow=N, ncol=N)
  # This will store the distance matrix.
  D <- matrix(nrow=N, ncol=N)
  for(i in 1:N){
    for(j in 1:N){
      # Compute the distance between a pair of points.
      d <- f(x[i], x[j])
      # Store result in D.
      # Start filling values from bottom left.
      D[N - (i-1), j] <- d
      if(d <= e){
        M[N - (i-1), j] <- 1
      } else {
        M[N - (i-1), j] <- 0
      }
    }
  }
  return(list(D=D, RP=M))
}
```
This function first defines two square matrices `M` and `D` to store the recurrence plot and the distance matrix, respectively. Then, it iterates the matrices from bottom left to top right and fills the corresponding values for `M` and `D`. The distance between elements `i` and `j` from the vector is computed. That distance is directly stored in `D`. To generate the RP we check if the distance is less or equal to the threshold. If that is the case the corresponding entry in `M` is set to \\(1\\). Finally, both matrices are returned by the function.
Now, we can try our `rp()` function on the *HAND GESTURES* dataset to convert one of the timeseries into an RP. First, we read one of the gesture files, for example, the first gesture *‘1’* from user \\(1\\). We only extract the acceleration of the \\(x\\) axis and store it in the variable `x`.
```
df <- read.csv(file.path(datasets_path,
                         "hand_gestures/1/1_20130703-120056.txt"),
               header = F)
x <- df$V1
```
If we plot vector `x` we get something like in Figure [7\.9](representations.html#fig:gesture1X).
```
# Plot vector x.
plot(x, type="l", main="Hand gesture 1", xlab = "time", ylab = "")
```
FIGURE 7\.9: Acceleration of x of gesture 1\.
Now the `rp()` function that we just defined is used to calculate the RP and distance matrix of vector `x`. We set a threshold of \\(0\.5\\) and store the result in `res`.
```
# Compute RP and distance matrix.
res <- rp(x, 0.5, norm2)
```
Let’s first plot the distance matrix stored in `res$D`. The `pheatmap()` function can be used to generate the plot.
```
library(pheatmap)
pheatmap(res$D, main="Distance matrix of gesture 1", cluster_row = FALSE,
         cluster_col = FALSE,
         legend = F,
         color = colorRampPalette(c("white", "black"))(50))
```
FIGURE 7\.10: Distance matrix of gesture 1\.
From Figure [7\.10](representations.html#fig:gesture1D) we can see that the diagonal cells are all white. Those represent values of \\(0\\): the distance between a point and itself. Apart from that, there are no other patterns that are intuitive to the human eye. Now, let’s see what the recurrence plot stored in `res$RP` looks like (Figure [7\.11](representations.html#fig:gesture1rp5)).
```
pheatmap(res$RP, main="RP with threshold = 0.5", cluster_row = FALSE,
         cluster_col = FALSE,
         legend = F,
         color = colorRampPalette(c("white", "black"))(50))
```
FIGURE 7\.11: RP of gesture 1 with a threshold of 0\.5\.
Here, we see that this is kind of an inverted version of the distance matrix. Now, the diagonal is black because small distances are encoded as ones. There are also some clusters of points and vertical and horizontal line patterns. If we wanted to build a classifier, we would not need to interpret those extraterrestrial images. We could just treat each distance matrix or RP as an image and feed them directly to a CNN (CNNs will be covered in chapter [8](deeplearning.html#deeplearning)).
Finally, we can try to see what happens if we change the threshold. Figure [7\.12](representations.html#fig:rpComp) shows two RPs. In the left one, a small threshold of \\(0\.01\\) was used. Here, many details were lost and only very small distances show up. In the plot to the right, a threshold of \\(1\.5\\) was used. Here, the plot is cluttered with black pixels which makes it difficult to see any patterns. On the other hand, a distance matrix will remain the same regardless of the threshold selection.
FIGURE 7\.12: RP of gesture 1 with two different thresholds.
`shiny_rp.R` This shiny app allows you to select hand gestures, plot their corresponding distance matrix and recurrence plot, and see how the threshold affects the final result.
7\.6 Bag\-of\-Words
-------------------
The main idea of the Bag\-of\-Words (BoW) encoding is to represent a complex entity as a set of its constituent parts. It is called Bag\-of\-Words because one of the first applications was in natural language processing. Say there is a set of documents about different topics such as medicine, arts, engineering, etc., and you would like to classify them automatically based on their words. In BoW, each document is represented as a table that contains the unique words across all documents and their respective counts for each document. With this representation, one may see that documents about medicine will contain higher counts of words like *treatment*, *diagnosis*, *health*, etc., compared to documents about art or engineering. Figures [7\.13](representations.html#fig:bowExample) and [7\.14](representations.html#fig:bowTab) show the conceptual view and the table view, respectively.
FIGURE 7\.13: Conceptual view of two documents as BoW.
FIGURE 7\.14: Table view of two documents as BoW.
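A minimal sketch of building such a word\-count table in R, with a made\-up two\-document corpus:

```
# Word counts for two tiny made-up 'documents'.
docs <- c(doc1 = "treatment diagnosis health treatment",
          doc2 = "sculpture painting art art")
words <- strsplit(docs, " ")
vocab <- unique(unlist(words))
# Rows are documents, columns are words, entries are counts.
t(sapply(words, function(w) table(factor(w, levels = vocab))))
```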
From these representations, it is now easy to build a document classifier. The word\-counts table can be used as an input feature vector. That is, each position in the feature vector represents a word and its value is an integer representing the total count for that word.
Note that in practice documents will differ in length, thus it is a good idea to use percentages instead of total counts. This can be achieved by dividing each word count by the total number of counts in the document. Also note that some very frequent words like ‘the’, ‘is’, and ‘it’ (so\-called stop words) can cause problems, so some extra preprocessing is needed. This was a simple example, but if you are interested in more advanced text processing techniques I refer you to the book “Text Mining with R: A Tidy Approach” by Silge and Robinson ([2017](#ref-silge2017)).
BoW can also be used for image classification in complex scenarios. For example when dealing with composed scenes like classrooms, parks, shops, and streets. First, the scene (document) can be decomposed into smaller elements (words) by identifying objects like trees, chairs, cars, cashiers, etc. In this case, instead of bags of words we have bags of objects but the idea is the same. The object identification part can be done in a *supervised* manner where there is already a classifier that assigns labels to objects.
Using a supervised approach can work in some simple cases but is not scalable for more complex ones. *Why?* Because the classifier would need to be trained for each type of object. Furthermore, those types of objects need to be manually defined beforehand. If we want to apply this method on scenes where most of their elements do not have a corresponding label in the object classifier we will be missing a lot of information and will end up having incomplete word count tables.
A possible solution is to use an *unsupervised* approach instead. The image scene can be divided into patches (often, but not necessarily, square). Conceptually, each patch may represent an independent object (a tree, a chair, etc.). Then, feature extraction can be performed on each patch so that, ultimately, patches are encoded as feature vectors. Again, each feature vector represents an individual possible object inside the complex scene. At this point, those feature vectors do not have a label, so we can’t build the BoW (table counts) for the whole scene. Then, how are those *unlabeled* feature vectors useful? We could use a pre\-trained classifier to assign them labels, but we would be relying on the supervised approach along with its aforementioned limitations. Instead, we can use an *unsupervised* method, for example *k\-means*, which was presented in chapter [6](unsupervised.html#unsupervised)!
We can cluster all the *unlabeled* feature vectors into \\(k\\) groups, where \\(k\\) is the number of possible unique labels. After the clustering, we can compute the centroid of each group. To assign a label to an *unlabeled feature vector*, we find the closest centroid and use its id as the label. The id of each centroid can be an integer. Intuitively, similar feature vectors will end up in the same group. For example, there could be a group of objects that look like *chairs*, another for objects that look like *cars*, and so on. It may happen that elements in the same group do not look similar to the human eye, or do not correspond to any recognizable object shape, but they are still similar in the feature space. If the objective is to classify the complex scene, then we do not necessarily need to understand the individual objects, nor do they need to have a corresponding mapping into a real\-world object.
Once the feature vectors are labeled, we can build the word\-count table, but instead of having ‘meaningful’ words, the entries will be ids with their corresponding counts. As you might have guessed, one limitation is that we do not know how many clusters (labels) there should be for a given problem. One approach is to try out different values of \\(k\\) and use the one that optimizes your performance metric of interest.
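To make the pipeline concrete, here is a minimal sketch on made\-up data; the feature matrix, the choice of \\(k\\), and the variable names are all illustrative:

```
# Cluster unlabeled feature vectors with k-means; cluster ids become 'words'.
set.seed(1234)
feats <- matrix(rnorm(200 * 5), ncol = 5)   # 200 made-up feature vectors
k <- 15                                     # assumed number of words
clusters <- kmeans(feats, centers = k)
words <- clusters$cluster                   # a label (1..k) per feature vector
# The word-count table (BoW), normalized to relative frequencies.
prop.table(table(factor(words, levels = 1:k)))
```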
But what does this BoW thing have to do with behavior? Well, we can use this method to decompose complex behaviors into simpler ones and encode them as BoW, as we will see in the next subsection for complex activity analysis.
**Advantages**
* Able to represent complex situations/objects/etc., by decomposing them into simpler elements.
* The resulting BoW can be very efficient and effective for classification tasks.
* Can be used in several domains including text, computer vision, sensor data, and so on.
* The BoW can be constructed in an unsupervised manner.
**Limitations**
* Temporal and spatial information is not preserved.
* It may require some effort to define how to generate the words.
* There are cases where one needs to find the optimal number of words.
### 7\.6\.1 BoW for Complex Activities.
`bagwords/bow_functions.R` `bagwords/bow_run.R`
So far, I have been talking about BoW applications for text and images. In this section, I will show you how to decompose **complex activities** from accelerometer data into simpler activities and encode them as BoW. In chapters [2](classification.html#classification) and [3](ensemble.html#ensemble), we trained supervised models for *simple* activity recognition. Those activities included *walking*, *jogging*, *standing*, etc. For those, it is sufficient to divide the data into windows a couple of seconds long in order to infer their labels. On the other hand, the duration of *complex* activities is longer, and they are composed of many simple activities. One example is the activity **shopping**. When we are shopping we perform many different activities, including *walking*, *taking groceries*, *paying*, *standing while looking at the stands*, and so on. Another example is **commuting**. When we commute, we need to walk but also take the train, drive, or cycle.
Using the same approach for simple activity classification on complex ones may not work. Representing a complex activity using fixed\-size windows can cause some conflicts. For example, a window may be covering the time span when the user was *walking*, but *walking* can be present in different types of complex activities. If a window happens to be part of a segment when the person was walking, there is not enough information to know which was the complex activity at that time. This is where BoW comes into play. If we represent a complex activity as a bag of *simple activities* then, a classifier will have an easier time differentiating between classes. For instance, when **exercising**, the frequencies (counts) of high\-intensity activities (like running or jogging) will be higher compared to when someone is shopping.
In practice, it would be very tedious to manually label all possible simple activities to form the BoW. Instead, we will use the unsupervised approach discussed in the previous section to automatically label the simple activities so we only need to manually label the complex ones.
Here, I will use the *COMPLEX ACTIVITIES* dataset, which consists of five complex activities: *‘commuting’*, *‘working’*, *‘being at home’*, *‘shopping’* and *‘exercising’*. The duration of the activities varies from a few minutes to a couple of hours. Accelerometer data at \\(50\\) Hz were collected with a cellphone placed on the user’s belt. The dataset has \\(80\\) accelerometer files, each representing a complex activity.
The task is to go from the raw accelerometer data of the complex activity to a BoW representation where each word will represent a simple activity. The overall steps are as follows:
1. Divide the raw data into small fixed\-length windows and generate feature vectors from them. Intuitively, these are the simple activities.
2. Cluster the feature vectors.
3. Label the vectors by assigning them to the closest centroid.
4. Build the word\-count table.
FIGURE 7\.15: BoW steps. From raw signal to BoW table.
Figure [7\.15](representations.html#fig:bowProcess) shows the overall steps graphically. All the functions to perform the above steps are implemented in `bow_functions.R`. The functions are called in the appropriate order in `bow_run.R`.
First of all, and to avoid overfitting, we need to hold out an independent set of instances. These instances will be used to generate the clusters and their respective centroids. The dataset is already divided into a train and test set. The train set contains \\(13\\) instances out of the \\(80\\). The remaining \\(67\\) are assigned to the test set.
In the first step, we need to extract the feature vectors from the raw data. This is implemented in the function `extractSimpleActivities()`. This function divides the raw data of each file into fixed\-length windows of size \\(150\\) which corresponds to \\(3\\) seconds. Each window can be thought of as a simple activity. For each window, it extracts \\(14\\) features like mean, standard deviation, correlation between axes, etc. The output is stored in the folder `simple_activities/`. Each file corresponds to one of the complex activities and each row in a file is a feature vector (simple activity). **At this time the feature vectors (simple activities) are unlabeled.** Notice that in the script `bow_run.R` the function is called twice:
```
# Extract simple activities for train set.
extractSimpleActivities(train = TRUE)
# Extract simple activities for test set (may take some minutes).
extractSimpleActivities(train = FALSE)
```
This is because we divided the data into train and test sets. So we need to extract the features from both sets by setting the `train` parameter accordingly.
The second step consists of clustering the extracted feature vectors. To avoid overfitting, this step is only performed on the train set. The function `clusterSimpleActivities()` implements this step. The feature vectors are grouped into \\(15\\) groups. This can be changed by setting `constants$wordsize <- 15` to some other value. The function stores all feature vectors from all files in a single data frame and runs \\(k\\)\-means. Finally, the resulting centroids are saved in the text file `clustering/centroids.txt` inside the train set directory.
The next step is to label each feature vector (simple activity) by assigning it to its closest centroid. The function `assignSimpleActivitiesToCluster()` reads the centroids from the text file, and for each simple activity in the test set it finds the closest centroid using the Euclidean distance. The label (an integer from \\(1\\) to \\(15\\)) of the closest centroid is assigned and the resulting files are saved in the `labeled_activities/` directory. Each file contains the assigned labels (integers) for the corresponding feature vectors file in the `simple_activities/` directory. Thus, if a file inside `simple_activities/` has \\(100\\) feature vectors then, its corresponding file in `labeled_activities/` should have \\(100\\) labels.
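The core of this assignment step can be sketched in a few lines. The names below are illustrative and not the actual implementation in `bow_functions.R`:

```
# Assign each feature vector (row of newfeats) to its closest centroid.
assign.to.centroid <- function(v, centroids) {
  d <- apply(centroids, 1, function(centroid) sqrt(sum((v - centroid)^2)))
  which.min(d)   # the id of the closest centroid becomes the label
}
newfeats <- matrix(rnorm(10 * 5), ncol = 5)    # made-up unlabeled vectors
centroids <- matrix(rnorm(15 * 5), ncol = 5)   # e.g., the k-means centers
labels <- apply(newfeats, 1, assign.to.centroid, centroids = centroids)
```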
In the last step, the function `convertToHistogram()` generates the bag of words from the labeled activities. The BoWs are stored as histograms (encoded as vectors), with each element representing a label and its corresponding count. In this case, the labels are \\(w1\..w15\\). The \\(w\\) stands for word and was appended only for clarity, to show that this is a label. The function converts the counts into percentages (normalization), that is, the percentage of time that each word (simple activity) occurred during the entire complex activity; this is useful for classification since complex activities differ in duration. The resulting `histograms/histograms.csv` file contains the BoW as one histogram per row, one per complex activity. The first column is the complex activity’s label in text format.
Figures [7\.16](representations.html#fig:complexWorking) and [7\.17](representations.html#fig:complexExercising) show the histogram for one instance of *‘working’* and *‘exercising’*. The x\-axis shows the labels of the simple activities and the y\-axis their relative frequencies.
FIGURE 7\.16: Histogram of working activity.
FIGURE 7\.17: Histogram of exercising activity.
Here, we can see that the *‘working’* activity is composed mainly of the simple activities *w1*, *w3*, and *w12*. The *‘exercising’* activity is mainly composed of *w15* and *w14*, which are perhaps high\-intensity movements like jogging or running.
Once the complex activities are encoded as BoW (histograms), one could train a classifier using the histogram frequencies as features.
### 7\.6\.1 BoW for Complex Activities.
`bagwords/bow_functions.R` `bagwords/bow_run.R`
So far, I have been talking about BoW applications for text and images. In this section, I will show you how to decompose **complex activities** from accelerometer data into simpler activities and encode them as BoW. In chapters [2](classification.html#classification) and [3](ensemble.html#ensemble), we trained supervised models for *simple* activity recognition. Those activities included *walking*, *jogging*, *standing*, etc. For those, it is sufficient to divide them into windows a couple of seconds long in order to infer their labels. On the other hand, the duration of a *complex* activity is longer and it is composed of many simple activities. One example is the activity **shopping**. When we are shopping we perform many different activities including *walking*, *taking groceries*, *paying*, *standing while looking at the stands*, and so on. Another example is **commuting**. When we commute, we need to walk but also take the train, or drive, or cycle.
Using the same approach for simple activity classification on complex activities may not work. Representing a complex activity using fixed\-size windows causes a problem: a window may cover a time span when the user was *walking*, but *walking* can be present in many different types of complex activities, so the window alone does not carry enough information to identify the complex activity at that time. This is where BoW comes into play. If we represent a complex activity as a bag of *simple activities*, then a classifier will have an easier time differentiating between classes. For instance, when **exercising**, the frequencies (counts) of high\-intensity activities (like running or jogging) will be higher compared to when someone is shopping.
In practice, it would be very tedious to manually label all possible simple activities to form the BoW. Instead, we will use the unsupervised approach discussed in the previous section to automatically label the simple activities so we only need to manually label the complex ones.
Here, I will use the *COMPLEX ACTIVITIES* dataset which consists of five complex activities: *‘commuting’*, *‘working’*, *‘being at home’*, *‘shopping’* and *‘exercising’*. The duration of the activities varies from a few minutes to a couple of hours. Accelerometer data were collected at \\(50\\) Hz with a cellphone placed on the user’s belt. The dataset has \\(80\\) accelerometer files, each representing a complex activity.
The task is to go from the raw accelerometer data of the complex activity to a BoW representation where each word will represent a simple activity. The overall steps are as follows:
1. Divide the raw data into small fixed\-length windows and generate feature vectors from them. Intuitively, these are the simple activities.
2. Cluster the feature vectors.
3. Label the vectors by assigning them to the closest centroid.
4. Build the word\-count table.
FIGURE 7\.15: BoW steps. From raw signal to BoW table.
Figure [7\.15](representations.html#fig:bowProcess) shows the overall steps graphically. All the functions to perform the above steps are implemented in `bow_functions.R`. The functions are called in the appropriate order in `bow_run.R`.
First, and to avoid overfitting, we need to hold out an independent set of instances. These instances will be used to generate the clusters and their respective centroids. The dataset is already divided into a train and a test set. The train set contains \\(13\\) of the \\(80\\) instances. The remaining \\(67\\) are assigned to the test set.
In the first step, we need to extract the feature vectors from the raw data. This is implemented in the function `extractSimpleActivities()`. This function divides the raw data of each file into fixed\-length windows of size \\(150\\), which corresponds to \\(3\\) seconds. Each window can be thought of as a simple activity. For each window, it extracts \\(14\\) features such as the mean, standard deviation, correlation between axes, etc. The output is stored in the folder `simple_activities/`. Each file corresponds to one of the complex activities and each row in a file is a feature vector (simple activity). **At this time the feature vectors (simple activities) are unlabeled.** Notice that in the script `bow_run.R` the function is called twice:
```
# Extract simple activities for train set.
extractSimpleActivities(train = TRUE)
# Extract simple activities for test set (may take some minutes).
extractSimpleActivities(train = FALSE)
```
This is because the data are divided into train and test sets, so we need to extract the features from both sets by setting the `train` parameter accordingly.
The second step consists of clustering the extracted feature vectors. To avoid overfitting, this step is performed only on the train set. The function `clusterSimpleActivities()` implements this step. The feature vectors are clustered into \\(15\\) groups. This number can be changed by setting `constants$wordsize <- 15` to some other value. The function stores all feature vectors from all files in a single data frame and runs \\(k\\)\-means. Finally, the resulting centroids are saved in the text file `clustering/centroids.txt` inside the train set directory.
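The following is a minimal sketch of this step, not the actual implementation of `clusterSimpleActivities()`; the data frame name `features` is an assumption:
```
# Assume `features` (hypothetical name) holds the feature vectors
# from all train files, one vector per row.
set.seed(1234)
km <- kmeans(features, centers = 15) # k-means from base R (stats).
# Save the 15 centroids (one per row), as in clustering/centroids.txt.
write.table(km$centers, "clustering/centroids.txt",
            row.names = FALSE, col.names = FALSE)
```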
The next step is to label each feature vector (simple activity) by assigning it to its closest centroid. The function `assignSimpleActivitiesToCluster()` reads the centroids from the text file and, for each simple activity in the test set, finds the closest centroid using the Euclidean distance. The label (an integer from \\(1\\) to \\(15\\)) of the closest centroid is assigned and the resulting files are saved in the `labeled_activities/` directory. Each file contains the assigned labels (integers) for the corresponding feature vectors file in the `simple_activities/` directory. Thus, if a file inside `simple_activities/` has \\(100\\) feature vectors, then its corresponding file in `labeled_activities/` should have \\(100\\) labels.
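A minimal sketch of the assignment; the function name and arguments are illustrative, not the book’s actual implementation:
```
# Return the id (row number) of the centroid closest to the feature
# vector v. `centroids` is a matrix with one centroid per row.
closest.centroid <- function(v, centroids){
  dists <- apply(centroids, 1, function(centroid){
    sqrt(sum((v - centroid)^2)) # Euclidean distance.
  })
  return(which.min(dists))
}
```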
In the last step, the function `convertToHistogram()` generates the bag of words from the labeled activities. The BoW are stored as histograms (encoded as vectors) with each element representing a label and its corresponding count. In this case, the labels are \\(w1\..w15\\). The \\(w\\) stands for word and was appended only for clarity, to show that this is a label. In case we want to perform classification, this function converts the counts into percentages (normalization), that is, the percentage of time that each word (simple activity) occurred during the entire complex activity. The resulting `histograms/histograms.csv` file contains the BoW as one histogram per row, one for each complex activity. The first column is the complex activity’s label in text format.
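For instance, a sketch of the normalization with made\-up counts:
```
# Made-up counts of four words observed during a complex activity.
counts <- c(w1 = 40, w3 = 25, w12 = 30, w15 = 5)
# Normalize into relative frequencies (percentage of time).
histogram <- counts / sum(counts) # w1=0.40, w3=0.25, w12=0.30, w15=0.05
```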
Figures [7\.16](representations.html#fig:complexWorking) and [7\.17](representations.html#fig:complexExercising) show the histogram for one instance of *‘working’* and *‘exercising’*. The x\-axis shows the labels of the simple activities and the y\-axis their relative frequencies.
FIGURE 7\.16: Histogram of working activity.
FIGURE 7\.17: Histogram of exercising activity.
Here, we can see that the *‘working’* activity is composed mainly of the simple activities *w1*, *w3*, and *w12*. The *exercising* activity is composed mainly of *w15* and *w14*, which perhaps are high\-intensity movements like jogging or running.
Once the complex activities are encoded as BoW (histograms), one could train a classifier using the histogram frequencies as features.
7\.7 Graphs
-----------
Graphs are one of the most general data structures (and my favorite one). The two basic components of a graph are its **vertices** and **edges**. Vertices are also called **nodes** and edges are also called **arcs**. Vertices are connected by edges. Figure [7\.18](representations.html#fig:graphTypes) shows three different types of graphs. Graph (a) is an undirected graph that consists of \\(3\\) vertices and \\(3\\) edges. Graph (b) is a directed graph, that is, its edges have a direction. Graph (c) is a weighted directed graph because its edges have a direction and they also have an associated weight.
FIGURE 7\.18: Three different types of graphs.
Weights can represent anything, for example, distances between cities or the number of messages sent between devices. In the previous graph, the vertices also have a label (integers here, but they could be strings). In general, vertices and edges can have any number of attributes, not just a weight and/or a label. Many data structures like binary trees and lists are graphs *with constraints*. For example, a list is a graph in which all vertices are connected as a sequence: a\-\>b\-\>c. Trees are also graphs with the constraint that there is only one root node and nodes can only have edges to their children. Graphs are very useful for representing many types of real\-world things like interactions, social relationships, geographical locations, the world wide web, and so on.
There are two main ways to encode a graph. The first one is as an **adjacency list**. An adjacency list consists of a list of tuples per node. The tuples represent edges. The first element of a tuple indicates the target node and the second element the weight of the edge. Figure [7\.19](representations.html#fig:graphOptions)\-b shows the adjacency list representation of the corresponding weighted directed graph in the same figure.
The second main way to encode a graph is as an **adjacency matrix**. This is a square \\(n\\times n\\) matrix where \\(n\\) is the number of nodes. Edges are represented as entries in the matrix. If there is an edge from node \\(a\\) to node \\(b\\), the corresponding cell contains the edge’s weight, where rows represent the source nodes and columns the destination nodes. Otherwise, it contains a \\(0\\) or just an empty value. Figure [7\.19](representations.html#fig:graphOptions)\-c shows the corresponding adjacency matrix. The disadvantage of the adjacency matrix is that for sparse graphs (many nodes and few edges), a lot of space is wasted. In practice, this can be overcome by using a sparse matrix implementation.
FIGURE 7\.19: Different ways to store a graph.
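To make the two encodings concrete, the sketch below builds both for a small made\-up weighted directed graph (an edge from node \\(1\\) to node \\(2\\) with weight \\(3\\), and one from node \\(2\\) to node \\(3\\) with weight \\(1\\)); this is not necessarily the graph of Figure [7\.19](representations.html#fig:graphOptions):
```
# Adjacency list: one list of (target, weight) tuples per node.
adj.list <- list("1" = list(c(target = 2, weight = 3)),
                 "2" = list(c(target = 3, weight = 1)),
                 "3" = list())
# Adjacency matrix: rows are sources and columns are destinations.
adj.matrix <- matrix(0, nrow = 3, ncol = 3,
                     dimnames = list(c("1","2","3"), c("1","2","3")))
adj.matrix["1", "2"] <- 3
adj.matrix["2", "3"] <- 1
```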
**Advantages:**
* Many real\-world situations can be naturally represented as graphs.
* Some partial order is preserved.
* Specialized graph analytics can be performed to gain insights and understand the data. See for example the book by Samatova et al. ([2013](#ref-samatova2013)).
* Can be plotted and different visual properties can be tuned to convey information such as edge width and colors, vertex size and color, distance between nodes, etc.
**Limitations:**
* Some graph analytic algorithms are computationally demanding.
* It can be difficult to use graphs to solve classification problems.
* It is not always clear if the data can be represented as a graph.
### 7\.7\.1 Complex Activities as Graphs
`plot_graphs.R`
In the previous section, it was shown how complex activities can be represented as Bag\-of\-Words. This was done by decomposing the complex activities into simpler ones. The BoW is composed of the simple activities counts (frequencies). In the process of building the BoW in the previous section, some intermediate text files stored in `labeled_activities/` were generated. These files contain the sequence of simple activities (their ids as integers) that constitute the complex activity. From these sequences, histograms were generated and in doing so, the order was lost.
One thing we can do is build a graph where vertices represent simple activities and edges represent the interactions between them. For instance, if we have a sequence of simple activity ids like \\(3,2,2,4\\), we can represent it as a graph with \\(3\\) vertices (one per distinct activity) and \\(3\\) edges. The first edge goes from vertex \\(3\\) to vertex \\(2\\), the next one from vertex \\(2\\) to vertex \\(2\\), and so on. In this way, a graph captures the interactions between simple activities.
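A quick sketch of how such transition counts can be obtained from a sequence of ids:
```
# Count the transitions in the example sequence 3,2,2,4. Each cell
# (a,b) of the resulting table is the number of transitions a->b.
ids <- c(3, 2, 2, 4)
transitions <- table(from = head(ids, -1), to = tail(ids, -1))
```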
The script `plot_graphs.R` implements a function named `ids.to.graph()` that reads the sequence files from `labeled_activities/` and converts them into weighted directed graphs. The weight of the edge \\((a,b)\\) is equal to the total number of transitions from vertex \\(a\\) to vertex \\(b\\). The script uses the `igraph` package ([Csardi and Nepusz 2006](#ref-igraph)) to store and plot the resulting graphs. The `ids.to.graph()` function receives as its first argument the sequence of ids. Its second argument indicates whether the edge weights should be normalized or not. If normalized, the sum of all weights will be \\(1\\).
The following code snippet reads one of the sequence files, converts it into a graph, and plots the graph.
```
datapath <- "../labeled_activities/"
# Select one of the 'work' complex activities.
filename <- "2_20120606-111732.txt"
# Read it as a data frame.
df <- read.csv(paste0(datapath, filename), header = F)
# Convert the sequence of ids into an igraph graph.
g <- ids.to.graph(df$V1, relative.weights = T)
# Plot the result.
set.seed(12345)
plot(g, vertex.label.cex = 0.7,
edge.arrow.size = 0.2,
edge.arrow.width = 1,
edge.curved = 0.1,
edge.width = E(g)$weight * 8,
edge.label = round(E(g)$weight, digits = 3),
edge.label.cex = 0.4,
edge.color = "orange",
edge.label.color = "black",
vertex.color = "skyblue"
)
```
FIGURE 7\.20: Complex activity ‘working’ plotted as a graph. Nodes are simple activities and edges transitions between them.
Figure [7\.20](representations.html#fig:graphActivity) shows the resulting plot. The plot can be customized to change the vertex and edge color, size, curvature, etc. For more details please read the `igraph` package documentation.
The width of each edge is proportional to its weight. For instance, transitions from simple activity \\(3\\) to itself are very frequent (\\(53\.2\\%\\) of the time) for the *‘work’* complex activity, but transitions from \\(8\\) to \\(4\\) are very infrequent. Note that with this graph representation, some temporal dependencies are preserved but the complete sequence order is lost. Still, this captures more information than BoW: the relationships between consecutive simple activities are preserved.
It is also possible to get the adjacency matrix with the method `as_adjacency_matrix()`.
```
as_adjacency_matrix(g)
#> 6 x 6 sparse Matrix of class "dgCMatrix"
#> 1 11 12 3 4 8
#> 1 1 1 . 1 . .
#> 11 . 1 1 1 1 .
#> 12 . 1 . . . .
#> 3 1 1 . 1 . 1
#> 4 . . . 1 1 .
#> 8 . . . 1 1 .
```
In this matrix, there is a \\(1\\) if the edge is present and a ‘.’ if there is no edge. However, this adjacency matrix does not contain information about the weights. We can print the adjacency matrix with weights by specifying `attr = "weight"`.
```
as_adjacency_matrix(g, attr = "weight")
#> 6 x 6 sparse Matrix of class "dgCMatrix"
#> 1 11 12 3 4 8
#> 1 0.06066946 0.001046025 . 0.023012552 . .
#> 11 . 0.309623431 0.00209205 0.017782427 0.001046025 .
#> 12 . 0.002092050 . . . .
#> 3 0.02405858 0.017782427 . 0.532426778 . 0.00209205
#> 4 . . . 0.002092050 0.002092050 .
#> 8 . . . 0.001046025 0.001046025 .
```
The adjacency matrices can then be used to train a classifier. Since many classifiers expect one\-dimensional vectors and not matrices, we can flatten the matrix. This is left as an exercise for the reader to try. Which representation produces better classification results (adjacency matrix or BoW)?
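A possible sketch of the flattening step, assuming `g` is the igraph object from the previous code:
```
# Convert the sparse weighted adjacency matrix into a regular matrix
# and flatten it (column-major order) into a one-dimensional vector.
m <- as.matrix(as_adjacency_matrix(g, attr = "weight"))
feature.vector <- as.vector(m)
```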
The book “Practical graph mining with R” ([Samatova et al. 2013](#ref-samatova2013)) is a good source to learn more about graph analytics with R.
7\.8 Summary
------------
Depending on the problem at hand, the data can be encoded in different forms. Representing data in a particular way can simplify the problem\-solving process and the application of specialized algorithms. This chapter presented different ways in which data can be encoded along with some of their advantages and disadvantages.
* **Feature vectors** are fixed\-size arrays that capture the properties of an instance. This is the most common form of data representation in machine learning.
* Most machine learning algorithms expect their inputs to be encoded as feature vectors.
* **Transactions** are another way in which data can be encoded. This representation is appropriate for association rule mining algorithms.
* Data can also be represented as **images**. Algorithms like CNNs (covered in chapter [8](deeplearning.html#deeplearning)) can work directly on images.
* The **Bag\-of\-Words** representation is useful when we want to model a complex behavior as a composition of simpler ones.
* A **graph** is a general data structure composed of *vertices* and *edges* and is used to model relationships between entities.
* Sometimes it is possible to convert data into multiple representations. For example, timeseries can be converted into images, recurrence plots, etc.
Chapter 8 Predicting Behavior with Deep Learning
================================================
Deep learning (DL) consists of a set of model architectures and algorithms with applications in supervised, semi\-supervised, unsupervised and reinforcement learning. Deep learning is mainly based on artificial neural networks (ANNs). One of the main characteristics of DL is that the models are composed of several levels. Each level transforms its input into more abstract representations. For example, for an image recognition task, the first level corresponds to raw pixels, the next level transforms pixels into simple shapes like horizontal/vertical lines, diagonals, etc. The next level may abstract more complex shapes like wheels, windows, and so on; and the final level could detect if the image contains a car or a human, or maybe both.
Examples of DL architectures include deep neural networks (DNNs), Convolutional Neural Networks (CNNs), recurrent neural networks (RNNs), and autoencoders, to name a few. One of the reasons for the success of DL is its flexibility to deal with different types of data and problems. For example, CNNs can be used for image classification, RNNs can be used for timeseries data, and autoencoders can be used to generate new data and perform anomaly detection. Another advantage of DL is that feature engineering, that is, extracting different features depending on the problem domain, is not always required. Depending on the problem and the DL architecture, it is possible to feed the raw data (with some preprocessing) to the model. The model will then automatically extract features at each level with an increasing level of abstraction. DL has achieved state\-of\-the\-art results in many different tasks including speech recognition, image recognition, and translation. It has also been successfully applied to different types of behavior prediction.
In this chapter, an introduction to artificial neural networks will be presented. Next, I will explain how to train deep models in R using Keras and TensorFlow. The models will be applied to behavior prediction tasks. This chapter also includes a section on Convolutional Neural Networks and their application to behavior prediction.
8\.1 Introduction to Artificial Neural Networks
-----------------------------------------------
Artificial neural networks (ANNs) are mathematical models *inspired* by the brain. Here, I would like to emphasize the word *inspired* because ANNs do not model how a biological brain actually works. In fact, there is little knowledge about how a biological brain works. ANNs are composed of **units** (also called **neurons** or **nodes**) and connections between units. Each unit can receive inputs from other units. Those inputs are processed inside the unit and produce an output. Typically, units are arranged into layers (as we will see later) and connections between units have an associated weight. Those weights are learned during training and they are the core elements that make a network behave in a certain way.
For the rest of the chapter I will mostly use the term **units** to refer to neurons/nodes. I will also use the term **network** to refer to artificial neural networks.
Before going into details of how multi\-layer ANNs work, let’s start with a very simple neural network consisting of a **single unit**. See Figure [8\.1](deeplearning.html#fig:nnPerceptron). Even though this network only has one node, it is already composed of several interesting elements which are the basis of more complex networks. First, it has \\(n\\) input variables \\(x\_1 \\ldots x\_n\\) which are real numbers. Second, the unit has a set of \\(n\\) weights \\(w\_1 \\ldots w\_n\\) associated with each input. These weights can take real numbers as values. Finally, there is an output \\(y'\\) which is binary (it can take two values: \\(1\\) or \\(0\\)).
FIGURE 8\.1: A neural network composed of a single unit (perceptron).
This simple network consisting of one unit with a binary output is called a **perceptron** and was proposed by Rosenblatt ([1958](#ref-rosenblatt1958)).
A perceptron is capable of making binary decisions based on the inputs and the weights. To get the final decision \\(y'\\), the inputs are multiplied by their corresponding weights and the results are summed. If the sum is greater than a given threshold, the output is \\(1\\), and \\(0\\) otherwise. Formally:
\\\[\\begin{equation}
y' \=
\\begin{cases}
1 \& \\textit{if } \\sum\_{i}{w\_i x\_i \> t}, \\\\
0 \& \\textit{if } \\sum\_{i}{w\_i x\_i \\leq t}
\\end{cases}
\\tag{8\.1}
\\end{equation}\\]
where \\(t\\) is a threshold. We can use a perceptron to make important decisions in life. For example, suppose you need to decide whether or not to go to the movies. Assume this decision is based on two pieces of information:
1. You have money to pay the entrance (or not) and,
2. it is a horror movie (or not).
There are two additional assumptions as well:
1. The movie theater only projects \\(1\\) film.
2. You don’t like horror movies.
This decision\-making process can be modeled with the perceptron of Figure [8\.2](deeplearning.html#fig:nnMovies). This perceptron has two binary input variables: *money* and *horror*. Each variable has an associated weight. Suppose there is a decision threshold of \\(t\=3\\). Finally, there is a binary output: \\(1\\) means you should go to the movies and \\(0\\) indicates that you should not go.
FIGURE 8\.2: Perceptron to decide whether or not to go to the movies based on two input variables.
In this example, the weights (\\(5\\) and \\(\-3\\)) and the threshold \\(t\=3\\) were already provided. The weights and the threshold are called the *parameters* of the network. Later, we will see how the parameters can be learned automatically from data.
Suppose that today was payday and the theater is projecting an action movie. Then, we can set the input variables \\(money\=1\\) and \\(horror\=0\\). Now we want to decide if we should go to the movie theater or not. To get the final answer we can use Equation [(8\.1\)](deeplearning.html#eq:perceptron). This formula tells us that we need to multiply each input variable with their corresponding weights and add them:
\\\[\\begin{align\*}
(money)(5\) \+ (horror)(\-3\)
\\end{align\*}\\]
Substituting *money* and *horror* with their corresponding values:
\\\[\\begin{align\*}
(1\)(5\) \+ (0\)(\-3\) \= 5
\\end{align\*}\\]
Since \\(5 \> t\\) (remember the threshold \\(t\=3\\)), the final output will be \\(1\\), thus, the advice is to go to the movies. Let’s try the scenario when you have money but they are projecting a horror movie: \\(money\=1\\), \\(horror\=1\\).
\\\[\\begin{align\*}
(1\)(5\) \+ (1\)(\-3\) \= 2
\\end{align\*}\\]
In this case, \\(2 \< t\\) and the final output is \\(0\\). Even if you have money, you should not waste it on a movie that you know you most likely will not like. This process of applying operations to the inputs and obtaining the final result is called **forward propagation** because the inputs are ‘pushed’ all the way through the network (a single perceptron in this case). For bigger networks, the outputs of the current layer become the inputs of the next layer, and so on.
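The decision process above can be written as a short R sketch, using the weights and threshold from the example:
```
# Perceptron: output 1 if the weighted sum exceeds the threshold t.
perceptron <- function(x, w, t){
  if(sum(w * x) > t) return(1) else return(0)
}
w <- c(5, -3) # Weights for money and horror.
perceptron(c(1, 0), w, t = 3) # Money, not a horror movie -> 1 (go).
perceptron(c(1, 1), w, t = 3) # Money, horror movie -> 0 (don't go).
```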
For convenience, a simplified version of Equation [(8\.1\)](deeplearning.html#eq:perceptron) can be used. This alternative representation is useful because it provides flexibility to change the internals of the units (neurons) as we will see. The first simplification consists of representing the inputs and weights as vectors:
\\\[\\begin{equation}
\\sum\_{i}{w\_i x\_i} \= \\boldsymbol{w} \\cdot \\boldsymbol{x}
\\end{equation}\\]
The summation becomes a dot product between \\(\\boldsymbol{w}\\) and \\(\\boldsymbol{x}\\). Next, the threshold \\(t\\) can be moved to the left and renamed to \\(b\\) which stands for **bias**. This is only for notation but you can still think of the *bias* as a threshold.
\\\[\\begin{equation}
y' \= f(\\boldsymbol{x}) \=
\\begin{cases}
1 \& \\textit{if } \\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b \> 0, \\\\
0 \& \\textit{otherwise}
\\end{cases}
\\end{equation}\\]
The output \\(y'\\) is a function of \\(\\boldsymbol{x}\\) with \\(\\boldsymbol{w}\\) and \\(b\\) as fixed parameters. One thing to note is that first, we are performing the operation \\(\\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b\\) and then, another operation is applied to the result. In this case, it is a comparison. If the result is greater than \\(0\\) the final output is \\(1\\). You can think of this second operation as another function. Call it \\(g(x)\\).
\\\[\\begin{equation}
f(\\boldsymbol{x}) \= g(\\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b)
\\tag{8\.2}
\\end{equation}\\]
In neural networks terminology, this \\(g(x)\\) is known as the **activation function**. Its result indicates how active this unit is based on its inputs. If the result is \\(1\\), the unit is active. If the result is \\(0\\), the unit is inactive.
This new notation allows us to use different activation functions by substituting \\(g(x)\\) with some other function in Equation [(8\.2\)](deeplearning.html#eq:nnUnit). In the case of the perceptron, the activation function \\(g(x)\\) is the threshold function, which is known as the *step function*:
\\\[\\begin{equation}
g(x) \= step(x) \=
\\begin{cases}
1 \& \\textit{if } x \> 0 \\\\
0 \& \\textit{if } x \\leq 0
\\end{cases}
\\tag{8\.3}
\\end{equation}\\]
Figure [8\.3](deeplearning.html#fig:nnStep) shows the plot of the step function.
FIGURE 8\.3: The step function.
It is worth noting that perceptrons have two major limitations:
1. The output is binary.
2. Perceptrons are linear functions.
The first limitation imposes some restrictions on its applicability. For example, a perceptron cannot be used to predict real\-valued outputs which is a fundamental aspect for regression problems. The second limitation makes the perceptron only capable of solving linear problems. Figure [8\.4](deeplearning.html#fig:nnLinearity) graphically shows this limitation. In the first case, the outputs of the OR logical operator can be classified (separated) using a line. On the other hand, it is not possible to classify the output of the XOR function using a single line.
FIGURE 8\.4: The OR and the XOR logical operators.
To overcome those limitations, several modifications to the perceptron were introduced. This allows us to build models capable of solving more complex non\-linear problems. One such modification is to change the activation function. Another improvement is to add the ability to have several layers of interconnected units. In the next section, two new types of units will be presented. Then, the following section will introduce neural networks also known as multilayer perceptrons which are more complex models built by connecting many units and arranging them into layers.
### 8\.1\.1 Sigmoid and ReLU Units
As previously mentioned, perceptrons have some limitations that restrict their applicability including the fact that they are linear models. In practice, problems are complex and most of them are non\-linear. One way to overcome this limitation is to introduce non\-linearities and this can be done by using a different type of activation function. Remember that a unit can be modeled as \\(f(x) \= g(wx\+b)\\) where \\(g(x)\\) is some activation function. For the perceptron, \\(g(x)\\) is the *step function*. However, another practical limitation not mentioned before is that the step function can change abruptly from \\(0\\) to \\(1\\) and vice versa. Small changes in \\(x\\), \\(w\\), or \\(b\\) can completely change the output. This is a problem during learning and inference time. Instead, we would prefer a smooth version of the step function, for example, the **sigmoid function** which is also known as the **logistic function**:
\\\[\\begin{equation}
s(x) \= \\frac{1}{1 \+ e^{\-x}}
\\tag{8\.4}
\\end{equation}\\]
This function has an ‘S’ shape (Figure [8\.5](deeplearning.html#fig:nnSigmoid)) and as opposed to a step function, this one is smooth. The range of this function is from \\(0\\) to \\(1\\).
FIGURE 8\.5: Sigmoid function.
If we substitute the activation function in Equation [(8\.2\)](deeplearning.html#eq:nnUnit) with the sigmoid function we get our **sigmoid unit**:
\\\[\\begin{equation}
f(x) \= \\frac{1}{1 \+ e^{\-(w \\cdot x \+ b)}}
\\tag{8\.5}
\\end{equation}\\]
Sigmoid units have been one of the most commonly used types of units when building bigger neural networks. Another advantage is that the outputs are real values that can be interpreted as probabilities. For instance, if we want to make binary decisions we can set a threshold. For example, if the output of the sigmoid unit is \\(\> 0\.5\\) then return a \\(1\\). Of course, that threshold would depend on the application. If we need more confidence about the result we can set a higher threshold.
In recent years, another type of unit has been successfully applied to train neural networks: the **rectified linear unit**, or **ReLU** for short (Figure [8\.6](deeplearning.html#fig:nnRectified)).
FIGURE 8\.6: Rectifier function.
The activation function of this unit is the rectifier function:
\\\[\\begin{equation}
rectifier(x) \=
\\begin{cases}
0 \& \\textit{if } x \< 0, \\\\
x \& \\textit{if } x \\geq 0
\\end{cases}
\\tag{8\.6}
\\end{equation}\\]
This function is also called the *ramp function*. It is one of the simplest non\-linear functions and probably the most common one used in modern large neural networks. These units have several advantages, among them efficiency during training and inference time.
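For reference, minimal R sketches of the three activation functions discussed so far:
```
step <- function(x) ifelse(x > 0, 1, 0) # Step function (perceptron).
sigmoid <- function(x) 1 / (1 + exp(-x)) # Sigmoid (logistic) function.
rectifier <- function(x) pmax(0, x) # Rectifier (ReLU).
sigmoid(0) # 0.5
rectifier(c(-2, 3)) # 0 3
```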
In practice, many other activation functions are used but the most common ones are sigmoid and ReLU units. In the following link, you can find an extensive list of activation functions: <https://en.wikipedia.org/wiki/Activation_function>
So far, we have been talking about **single units**. In the next section, we will see how these single units can be assembled to build bigger artificial neural networks.
### 8\.1\.2 Assembling Units into Layers
Perceptrons, sigmoid, and ReLU units can be thought of as very simple neural networks. By connecting several units, one can build more complex neural networks. For historical reasons, neural networks are also called **multilayer perceptrons**, regardless of whether the units are perceptrons or not. Typically, units are grouped into layers. Figure [8\.7](deeplearning.html#fig:nnExampleNN) shows an example neural network with \\(3\\) layers: an **input layer** with \\(3\\) nodes, a **hidden layer** with \\(2\\) nodes, and an **output layer** with \\(1\\) node.
FIGURE 8\.7: Example neural network.
In this type of diagram, the nodes represent units (perceptrons, sigmoids, ReLUs, etc.) except for the input layer. In the input layer, nodes represent input variables (input features). In the above example, the \\(3\\) nodes in the input layer simply indicate that the network takes \\(3\\) variables as input. In this layer, no operations are performed; the input values are passed to the next layer after multiplying them by their corresponding edge weights.
This network has only one hidden layer. Hidden layers are so called because they do not have direct contact with the external world. Finally, there is an output layer with a single unit. We could also have an output layer with more than one unit. Most of the time, we will have **fully connected** neural networks, that is, all units have incoming connections from all nodes in the previous layer (as in the previous example).
For each specific problem, we need to define several building blocks for the network. For example, the number of layers, the number of units in each layer, the type of units (sigmoid, ReLU, etc.), and so on. This is known as the **architecture** of the network. Choosing a good architecture for a given problem is not a trivial task. It is advised to start with an architecture that was used to solve a similar problem and then fine\-tune it for your specific problem. There exist some automatic ways to optimize the network architecture but those methods are out of the scope of this book.
We already saw how a unit can produce a result based on the inputs by using *forward propagation*. For more complex networks the process is the same! Consider the network shown in Figure [8\.8](deeplearning.html#fig:nnForward). It consists of two inputs and one output. It also has one hidden layer with \\(2\\) units.
FIGURE 8\.8: Example of forward propagation.
Each node is labeled as \\(n\_{l,n}\\) where \\(l\\) is the layer and \\(n\\) is the unit number.
The two input values are \\(1\\) and \\(0\.5\\). They could be temperature measurements, for example. Each edge has an associated weight. For simplicity, let’s assume that the activation function of the units is the identity function \\(g(x)\=x\\). The bold underlined numbers inside the nodes of the hidden and output layers are the biases. Here we assume that the network is already trained (later we will see how those weights and biases are learned). To get the final result, for each node, its inputs are multiplied by their corresponding weights and added. Then, the bias is added. Next, the activation function is applied. In this case, it is just the identity function (it returns the same value). The outputs of the nodes in the hidden layer become the inputs of the next layer and so on.
In this example, first we need to compute the outputs of nodes \\(n\_{2,1}\\) and \\(n\_{2,2}\\):
output of \\(n\_{2,1} \= (1\)(2\) \+ (0\.5\)(1\) \+ 1 \= 3\.5\\)
output of \\(n\_{2,2} \= (1\)(\-3\) \+ (0\.5\)(5\) \+ 0 \= \-0\.5\\)
Finally, we can compute the output of the last node using the outputs of the previous nodes:
output of \\(n\_{3,1} \= (3\.5\)(1\) \+ (\-0\.5\)(\-1\) \+ 3 \= 7\\).
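These computations can be verified with a few lines of R; the weights and biases are the ones used above:
```
x <- c(1, 0.5) # Input values.
n21 <- sum(x * c(2, 1)) + 1 # Weights (2, 1), bias 1 -> 3.5
n22 <- sum(x * c(-3, 5)) + 0 # Weights (-3, 5), bias 0 -> -0.5
n31 <- sum(c(n21, n22) * c(1, -1)) + 3 # Weights (1, -1), bias 3 -> 7
```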
### 8\.1\.3 Deep Neural Networks
By increasing the number of layers and the number of units in each layer, one can build more complex networks. But what is a deep neural network (DNN)? There is not a strict rule but some people say that a network with more than \\(2\\) hidden layers is a deep network. Yes, that’s all it takes to build a DNN! Figure [8\.9](deeplearning.html#fig:nnDNN) shows an example of a deep neural network.
FIGURE 8\.9: Example of a deep neural network.
A DNN has nothing special compared to a traditional neural network except that it has many layers. One of the reasons why they did not become popular until recent years is that, before, it was not possible to train them efficiently. With the advent of specialized hardware like graphics processing units (GPUs), it is now possible to efficiently train big DNNs. The introduction of ReLU units was also a key factor that allowed the training of even bigger networks. The availability of big quantities of data was another key factor that allowed the development of deep learning technologies. Note that deep learning is not limited to DNNs but it also encompasses other types of architectures like convolutional networks and recurrent neural networks, to name a few. Convolutional layers will be covered later in this chapter.
### 8\.1\.4 Learning the Parameters
We have seen how *forward propagation* can be used at inference time to compute the output of the network based on the input values. In the previous examples, we assumed that the network’s parameters (weights and biases) were already learned. In practice, you most likely will use libraries and frameworks to build and train neural networks. Later in this chapter, I will show you how to use TensorFlow and Keras within R. But, before that, I will explain how the networks’ parameters are learned and how to code and train a very simple network from scratch.
Back to the problem: the objective is to find the parameters’ values based on training data such that the predicted result for any input data point is as close as possible to the true value. In other words, we want to find the parameters’ values that reduce the network’s prediction error.
One way to estimate the network’s error is by computing the squared difference between the prediction \\(y'\\) and the real value \\(y\\), that is, \\(error \= (y' \- y)^2\\). This is how the error can be computed for a single training data point. The error function is typically called the **loss function** and denoted by \\(L(\\theta)\\) where \\(\\theta\\) represents the parameters of the network (weights and biases). In this example the loss function is \\(L(\\theta)\=(y'\- y)^2\\).
If there is more than one training data point (which is often the case), the loss function is just the average of the individual squared differences which is known as the **mean squared error (MSE)**:
\\\[\\begin{equation}
L(\\theta) \= \\frac{1}{N} \\sum\_{n\=1}^N{(y'\_n \- y\_n)^2}
\\tag{8\.7}
\\end{equation}\\]
The mean squared error (MSE) loss function is commonly used for regression problems. For classification problems, the average cross\-entropy loss function is usually preferred (covered later in this chapter).
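In R, the MSE can be sketched in one line:
```
# MSE between a vector of predictions and the true values.
mse <- function(y_pred, y_true) mean((y_pred - y_true)^2)
mse(c(1.2, 0.8), c(1.0, 1.0)) # (0.04 + 0.04) / 2 = 0.04
```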
The problem of finding the best parameters can be formulated as an optimization problem, that is, find the optimal parameters such that the loss function is minimized. This is the learning/training phase of a neural network. Formally, this can be stated as:
\\\[\\begin{equation}
\\operatorname\*{arg min}\_{\\theta} L(\\theta)
\\tag{8\.8}
\\end{equation}\\]
This notation means: find and return the weights and biases that make the loss function be as small as possible.
The most common method to train neural networks is called **gradient descent**. The algorithm updates the parameters in an iterative fashion based on the loss. This algorithm is suitable for complex functions with millions of parameters.
Suppose there is a network with only \\(1\\) weight and no bias with MSE as loss function (Equation [(8\.7\)](deeplearning.html#eq:lossMSE)). Figure [8\.10](deeplearning.html#fig:nnGD) shows a plot of the loss function. This is a quadratic function that only depends on the value of \\(w\\). The task is to find the \\(w\\) where the function is at its minimum.
FIGURE 8\.10: Gradient descent in action.
Gradient descent starts by assigning \\(w\\) a random value. Then, at each step and based on the error, \\(w\\) is updated in the direction that minimizes the loss function. In the previous figure, the **global minimum** is found after \\(5\\) iterations. In practice, loss functions are more complex and have many **local minima** (Figure [8\.11](deeplearning.html#fig:nnLM)). For complex functions, it is difficult to find a global minimum but gradient descent can find a local minimum that is good enough to solve the problem at hand.
FIGURE 8\.11: Function with 1 global minimum and several local minima.
But in what direction and by how much is \\(w\\) moved at each iteration? The direction and magnitude are estimated by computing the derivative of the loss function with respect to the weight, \\(\\frac{\\partial L}{\\partial w}\\). The derivative is also called the gradient and is denoted by \\(\\nabla L\\). The iterative gradient descent procedure is listed below:
**loop** until convergence or max iterations (*epochs*):
  **for each** \\(w\_i\\) in \\(W\\) **do:**
    \\(w\_i \= w\_i \- \\alpha \\frac{\\partial L(W)}{\\partial w\_i}\\)
The outer loop is run until the algorithm converges or until a predefined number of iterations is reached. Each iteration is also called an **epoch**. Each weight is updated with the rule: \\(w\_i \= w\_i \- \\alpha \\frac{\\partial L(W)}{\\partial w\_i}\\). The derivative part will give us the direction and magnitude. The \\(\\alpha\\) is called the **learning rate** and it controls how ‘fast’ we move. The learning rate is a constant defined by the user, thus, it is a **hyperparameter**. A high learning rate can cause the algorithm to miss the local minima and the loss can start to increase. A small learning rate will cause the algorithm to take more time to converge. Figure [8\.12](deeplearning.html#fig:nnLR) illustrates both scenarios.
FIGURE 8\.12: Comparison between high and low learning rates. a) Big learning rate. b) Small learning rate.
Selecting an appropriate learning rate will depend on the application but common values are between \\(0\.0001\\) and \\(0\.05\\).
Let’s see how gradient descent works with a step by step example. Consider a very simple neural network consisting of an input layer with only one input feature and an output layer with one unit and no bias. To make it even simpler, the activation function of the output unit is the identity function \\(f(x)\=x\\). Assume that as training data we have a single data point. Figure [8\.13](deeplearning.html#fig:nnStepExample) shows the simple network and the training data. The training data point only has one input variable (\\(x\\)) and an output (\\(y\\)). We want to train this network such that it can make predictions on new data points. The training point has an input feature of \\(x\=3\\) and the expected output is \\(y\=1\.5\\). For this particular training point, it seems that the output is equal to the input divided by \\(2\\). Thus, based on this single training data point the network should learn how to divide any other input by \\(2\\).
FIGURE 8\.13: a) A simple neural network consisting of one unit. b) The training data with only one row.
Before we start the training we need to define \\(3\\) things:
1. The loss function. This is a regression problem so we can use the MSE. Since there is a single data point our loss function becomes \\(L(w)\=(y' \- y)^2\\). Here, \\(y\\) is the ground truth output value and \\(y'\\) is the predicted value. We know how to make predictions using forward propagation. In this case, it is the product between the input value and the single weight, and the activation function has no effect (it returns the same value as its input). We can rewrite the loss function as \\(L(w)\=(xw \- y)^2\\).
2. We need to define a learning rate. For now, we can set it to \\(\\alpha \= 0\.05\\).
3. The weights need to be initialized at random. Let’s assume the single weight is ‘randomly’ initialized with \\(w\=2\\).
Now we can use gradient descent to iteratively update the weight. Remember that the updating rule is:
\\\[\\begin{equation}
w \= w \- \\alpha \\frac{\\partial L(w)}{\\partial w}
\\end{equation}\\]
The partial derivative of the loss function with respect to \\(w\\) is:
\\\[\\begin{equation}
\\frac{\\partial L(w)}{\\partial w} \= 2x(xw \- y)
\\end{equation}\\]
If we substitute the derivative in the updating rule we get:
\\\[\\begin{equation}
w \= w \- \\alpha 2x(xw \- y)
\\end{equation}\\]
We already know that \\(\\alpha\=0\.05\\), the input value is \\(x\=3\\), the output is \\(y\=1\.5\\) and the initial weight is \\(w\=2\\). So we can start updating \\(w\\). Figure [8\.14](deeplearning.html#fig:nnTrainProgress) shows the initial state (iteration 0\) and \\(3\\) additional iterations. In the initial state, \\(w\=2\\) and with that weight the loss is \\(20\.25\\). In iteration \\(1\\), the weight is updated and now its value is \\(0\.65\\). With this new weight, the loss is \\(0\.2025\\). That was a substantial reduction in the error! After three iterations we see that the final weight is \\(w\=0\.501\\) and the loss is very close to zero.
FIGURE 8\.14: First 3 gradient descent iterations (epochs).
Now, we can start doing predictions with our very simple neural network! To do so, we use forward propagation on the new input data using the learned weight \\(w\=0\.501\\). Figure [8\.15](deeplearning.html#fig:nnExamplePredictions) shows the predictions on new data points that were never seen before by the network.
FIGURE 8\.15: Example predictions on new data points.
Even though the predictions are not perfect, they are very close to the expected value (division by \\(2\\)) considering that the network is very simple and was only trained with a single data point and for only \\(3\\) epochs!
If the training set has more than one data point, then we need to compute the derivative of each point and accumulate them (the derivative of a sum is equal to the sum of the derivatives). In the previous example, the update rule becomes:
\\\[\\begin{equation}
w \= w \- \\alpha \\sum\_{i\=1}^N{2x\_i(x\_i w \- y\_i)}
\\end{equation}\\]
This means that before updating a weight, first, we need to compute the derivative for each point and add them. This needs to be done for every parameter in the network. Thus, one **epoch** is a pass through all training points and all parameters.
### 8\.1\.5 Parameter Learning Example in R
`gradient_descent.R`
In the previous section, we went step by step to train a neural network with a single unit and with a single training data point. Here, we will see how we can implement that simple network in R but when we have more training data. The code can be found in the script `gradient_descent.R`.
This code implements the same network as the previous example. That is, one neuron, one input, no bias, and activation function \\(f(x) \= x\\). We start by creating a sample training set with \\(3\\) points. Again, the output is the input divided by \\(2\\).
```
train_set <- data.frame(x = c(3.0,4.0,1.0), y = c(1.5, 2.0, 0.5))
# Print the train set.
print(train_set)
#> x y
#> 1 3 1.5
#> 2 4 2.0
#> 3 1 0.5
```
Then we need to implement three functions: forward propagation, the loss function, and the derivative of the loss function.
```
# Forward propagation w*x
fp <- function(w, x){
return(w * x)
}
# Loss function (y - y')^2
loss <- function(w, x, y){
predicted <- fp(w, x) # This is y'
return((y - predicted)^2)
}
# Derivative of the loss function. 2x(xw - y)
derivative <- function(w, x, y){
return(2.0 * x * ((x * w) - y))
}
```
Now we are all set to implement the `gradient.descent()` function. The first parameter is the train set, the second parameter is the learning rate \\(\\alpha\\), and the last parameter is the number of epochs. The initial weight is initialized to some ‘random’ number (selected manually here for the sake of the example). The function returns the final learned weight.
```
# Gradient descent.
gradient.descent <- function(train_set, lr = 0.01, epochs = 5){
w = -2.5 # Initialize weight at 'random'
for(i in 1:epochs){
derivative.sum <- 0.0
loss.sum <- 0.0
# Iterate each data point in train_set.
for(j in 1:nrow(train_set)){
point <- train_set[j, ]
derivative.sum <- derivative.sum + derivative(w, point$x, point$y)
loss.sum <- loss.sum + loss(w, point$x, point$y)
}
# Update weight.
w <- w - lr * derivative.sum
# mean squared error (MSE)
mse <- loss.sum / nrow(train_set)
print(paste0("epoch: ", i, " loss: ",
formatC(mse, digits = 8, format = "f"),
" w = ", formatC(w, digits = 5, format = "f")))
}
return(w)
}
```
Now, let’s train the network with a learning rate of \\(0\.01\\) and for \\(10\\) epochs. This function will print for each epoch, the loss and the current weight.
```
#### Train the 1 unit network with gradient descent ####
lr <- 0.01 # set learning rate.
set.seed(123)
# Run gradient descent to find the optimal weight.
learned_w = gradient.descent(train_set, lr, epochs = 10)
#> [1] "epoch: 1 loss: 78.00000000 w = -0.94000"
#> [1] "epoch: 2 loss: 17.97120000 w = -0.19120"
#> [1] "epoch: 3 loss: 4.14056448 w = 0.16822"
#> [1] "epoch: 4 loss: 0.95398606 w = 0.34075"
#> [1] "epoch: 5 loss: 0.21979839 w = 0.42356"
#> [1] "epoch: 6 loss: 0.05064155 w = 0.46331"
#> [1] "epoch: 7 loss: 0.01166781 w = 0.48239"
#> [1] "epoch: 8 loss: 0.00268826 w = 0.49155"
#> [1] "epoch: 9 loss: 0.00061938 w = 0.49594"
#> [1] "epoch: 10 loss: 0.00014270 w = 0.49805"
```
From the output, we can see that the loss decreases as the weight is updated. The final value of the weight at iteration \\(10\\) is \\(0\.49805\\). We can now make predictions on new data.
```
# Make predictions on new data using the learned weight.
fp(learned_w, 7)
#> [1] 3.486366
fp(learned_w, -88)
#> [1] -43.8286
```
Now, you can try to change the training set to make the network learn a different arithmetic operation!
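For example, here is a sketch of a training set where the output is the input multiplied by \\(3\\); with the functions defined above, the network should learn a weight close to \\(3\\):
```
# Train on y = 3x using the same gradient.descent() function.
train_set2 <- data.frame(x = c(1.0, 2.0, 3.0), y = c(3.0, 6.0, 9.0))
learned_w2 <- gradient.descent(train_set2, lr, epochs = 50)
# Predict on a new input; the result should be close to 15.
fp(learned_w2, 5)
```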
In the previous example, we considered a very simple neural network consisting of a single unit. In this case, the partial derivative with respect to the single weight was calculated directly. For bigger networks with more layers and activations, the final output becomes a composition of functions. That is, the activation values of a layer \\(l\\) depend on its weights which are also affected by the previous layer’s \\(l\-1\\) weights and so on. So, the derivatives (gradients) can be computed using the chain rule \\(f(g(x))' \= f'(g(x)) \\cdot g'(x)\\). This can be performed efficiently by an algorithm known as **backpropagation**.
> “What backpropagation actually lets us do is compute the partial derivatives \\(\\partial C\_x / \\partial w\\) and \\(\\partial C\_x / \\partial b\\) for a single training example”. (Michael Nielsen, 2019\)[20](#fn20).
Here, \\(C\\) refers to the loss function which is also called the cost function. In modern deep learning libraries like TensorFlow, this procedure is efficiently implemented with a computational graph. If you want to learn the details about backpropagation I recommend you to check this post by DEEPLIZARD (<https://deeplizard.com/learn/video/XE3krf3CQls>) which consists of \\(5\\) parts including videos.
### 8\.1\.6 Stochastic Gradient Descent
We have seen how gradient descent iterates over all training points before updating each parameter. To recall, an epoch is one pass through all parameters and for each parameter, the derivative with each training point needs to be computed. If the training set consists of thousands or millions of points, this method becomes very time\-consuming. Furthermore, in practice neural networks do not have one or two parameters but thousands or millions. In those cases, the training can be done more efficiently by using **stochastic gradient descent (SGD)**. This method adds two main modifications to the classic gradient descent:
1. At the beginning, the training set is shuffled (this is the stochastic part). This is necessary for the method to work.
2. The training set is divided into \\(b\\) batches with \\(m\\) data points each. This \\(m\\) is known as the **batch size** and is a hyperparameter that we need to define.
Then, at each epoch all batches are iterated and the parameters are updated based on each batch and not the entire training set, for example:
\\\[\\begin{equation}
w \= w \- \\alpha \\sum\_{i\=1}^m{2x\_i(x\_i w \- y\_i)}
\\end{equation}\\]
Again, an epoch is one pass through all parameters and all batches. Now you may be wondering why this method is more efficient if an epoch still involves the same number of operations, only split into chunks. Part of the reason is that, since the parameter updates are more frequent, the loss also improves more quickly. Another reason is that the operations within each batch can be optimized and performed in parallel, for example, by using a GPU. One thing to note is that each update is based on less information, since it only uses \\(m\\) points instead of the entire data set. This can introduce some noise in the learning but at the same time it can help to get out of local minima. In practice, SGD needs more epochs to converge compared to gradient descent but overall, it will take less time. From now on, this is the method we will use to train our networks.
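A sketch of the two modifications; the variable names are hypothetical and `train_set` is any data frame with one training point per row:
```
m <- 32 # Batch size (hyperparameter).
n <- nrow(train_set)
# 1. Shuffle the training set (the stochastic part).
shuffled <- train_set[sample(n), ]
# 2. Split it into batches of m points each.
batches <- split(shuffled, ceiling(seq_len(n) / m))
```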
Typical batch sizes are \\(4\\), \\(8\\), \\(16\\), \\(32\\), \\(64\\), \\(128\\), etc. Opinions are divided in this respect: some say it's better to choose small batch sizes while others say the bigger the better. For any particular problem, it is difficult to say which batch size is optimal. Usually, one needs to choose the batch size empirically by trying different ones.
Be aware that when using GPUs, a big batch size can cause out of memory errors since the GPU may not have enough memory to allocate the batch.
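To see these ideas in code, below is a minimal sketch of mini\-batch SGD for the single\-weight network from the previous section. The data, the forward function `fp()`, and all names here are illustrative re\-creations, not the book's original script:
```
# Forward function of the single-weight network: w * x.
fp <- function(w, x) w * x

set.seed(1234)
x <- c(3.0, 4.0, 1.0, 5.0, 2.0, 0.5)
y <- 0.5 * x      # True relationship to be learned.

w <- runif(1)     # Random initial weight.
alpha <- 0.01     # Learning rate.
m <- 2            # Batch size.

for (epoch in 1:50) {
  idx <- sample(length(x))  # Shuffle (the stochastic part).
  batches <- split(idx, ceiling(seq_along(idx) / m))
  for (b in batches) {
    # Gradient of the squared error summed over the batch.
    grad <- sum(2 * x[b] * (fp(w, x[b]) - y[b]))
    w <- w - alpha * grad  # Update after each batch.
  }
}
print(w)
#> [1] 0.5
```
Note how the weight is updated after every batch rather than once per pass through the data.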
8\.2 Keras and TensorFlow with R
--------------------------------
TensorFlow[21](#fn21) is an open\-source computational library used mainly for machine learning and more specifically, for deep learning. It has many available tools and extensions to perform a wide variety of tasks such as data pre\-processing, model optimization, reinforcement learning, and probabilistic reasoning, to name a few. TensorFlow is very flexible and is used for research, development, and in production environments. It provides an API that contains the necessary building blocks to build different types of neural networks including CNNs, autoencoders, Recurrent Neural Networks, etc. It has two main versions: a CPU version and a GPU version. The latter allows the execution of programs by taking advantage of the computational power of graphic processing units, which makes training models much faster. Despite all this flexibility and power, it can take some time to learn the basics. Sometimes you need a way to build and test machine learning models in a simple way, for example, when trying new ideas or prototyping. Fortunately, there exists an interface to TensorFlow called Keras[22](#fn22).
Keras offers an API that abstracts many of TensorFlow's details, making it easier to build and train machine learning models. Keras is what I will use when building deep learning models in this book. Keras not only provides an interface to TensorFlow but also to other deep learning engines such as Theano[23](#fn23), Microsoft Cognitive Toolkit[24](#fn24), etc. Keras was developed by François Chollet and was later integrated with TensorFlow.
Most of the time its API should be enough to do common tasks and it provides ways to add extensions in case that is not enough. In this book, we will only use a subset of the available Keras functions but that will be enough for our purposes of building models to predict behaviors. If you want to learn more about Keras, I recommend the book *“Deep Learning with R”* by Chollet and Allaire ([2018](#ref-Chollet2018)).
Examples in this book will use Keras with TensorFlow as the backend. In R, we can access Keras through the `keras` package ([Allaire and Chollet 2019](#ref-keras)).
Instructions on how to install Keras and TensorFlow can be found in Appendix [A](appendixInstall.html#appendixInstall). At this point, I would recommend you to install them since the next section will make use of Keras.
In the next section, we will start with a simple model built with Keras and the following examples will introduce more functions. By the end of this chapter you will be able to build and train efficient deep neural networks including Convolutional Neural Networks.
### 8\.2\.1 Keras Example
`keras_simple_network.R`
If you haven’t already installed Keras and TensorFlow, I would recommend you to do so at this point. Instructions on how to install the required software can be found in Appendix [A](appendixInstall.html#appendixInstall).
In the previous section, I showed how to implement gradient descent in R (see `gradient_descent.R`). Now, I will show how to implement the same simple network using Keras. Recall that our network has one unit, one input, one output, and no bias. The code can be found in the script `keras_simple_network.R`. First, the `keras` library is loaded and a sample training set is created. Then, the function `keras_model_sequential()` is used to instantiate a new empty model. It is called sequential because it consists of a sequence of layers. At this point it does not have any layers yet.
```
library(keras)
# Generate a train set.
# First element is the input x and
# the second element is the output y.
train_set <- data.frame(x = c(3.0, 4.0, 1.0),
                        y = c(1.5, 2.0, 0.5))
# Instantiate a sequential model.
model <- keras_model_sequential()
```
We can now start adding layers (only one in this example). To do so, the `layer_dense()` method can be used. The *dense* name means that this will be a densely (fully) connected layer. This layer will be the output layer with a single unit.
```
model %>%
  layer_dense(units = 1,
              use_bias = FALSE,
              activation = 'linear',
              input_shape = 1)
```
The first argument `units = 1` specifies the number of units in this layer. By default, a bias is added in each layer. To make it the same as in the previous example, we will not use a bias, so `use_bias` is set to `FALSE`. The `activation` specifies the activation function. Here it is set to `'linear'` which means that no activation function is applied, i.e., the identity \\(f(x)\=x\\). Finally, we need to specify the number of inputs with `input_shape`. In this case, there is only one feature.
Before training the network we need to compile the model and specify the learning algorithm. In this case, stochastic gradient descent with a learning rate of \\(\\alpha\=0\.01\\). We also need to specify which loss function to use (we’ll use mean squared error). At every epoch, some performance metrics can be computed. Here, we specify that we want the mean squared error and mean absolute error. These metrics are computed on the train data. After compiling the model, the `summary()` method can be used to print a textual description of it. Figure [8\.16](deeplearning.html#fig:simpleSummary) shows the output of the `summary()` function.
```
model %>% compile(
  optimizer = optimizer_sgd(lr = 0.01),
  loss = 'mse',
  metrics = list('mse', 'mae')
)
summary(model)
```
FIGURE 8\.16: Summary of the simple neural network.
From this output, we see that the network consists of a single dense layer with \\(1\\) unit.
To start the actual training procedure we need to call the `fit()` function. Its first argument is the input training data (features) as a matrix. The second argument specifies the corresponding true outputs. We let the algorithm run for \\(30\\) epochs. The batch size is set to \\(3\\) which is also the total number of data points in our data. In this example the dataset is very small so we set the batch size equal to the total number of instances. In practice, datasets can contain thousands of instances but the batch size will be relatively small (e.g., \\(8\\), \\(16\\), \\(32\\), etc.).
Additionally, there is a `validation_split` parameter that specifies the fraction of the train data to be used for validation. This is set to \\(0\\) (the default) since the dataset is very small. If the validation split is greater than \\(0\\), its performance metrics will also be computed. The `verbose` parameter sets the amount of information to be printed during training. A \\(0\\) will not print anything. A \\(2\\) will print one line of information per epoch. The last parameter `view_metrics` specifies if you want the progress of the loss and performance metrics to be plotted. The `fit()` function returns an object with summary statistics collected during training and is saved in the variable `history`.
```
history <- model %>% fit(
  as.matrix(train_set$x), as.matrix(train_set$y),
  epochs = 30,
  batch_size = 3,
  validation_split = 0,
  verbose = 2,
  view_metrics = TRUE
)
```
Figure [8\.17](deeplearning.html#fig:nnEpochs) presents the output of the `fit()` function in RStudio. In the console, the training loss, mean squared error, and mean absolute error are printed during each epoch. In the viewer pane, plots of the same metrics are shown. Here, we can see that the loss is nicely decreasing over time. The loss at epoch \\(30\\) should be close to \\(0\\).
FIGURE 8\.17: fit() function output.
The information saved in the `history` variable can be plotted with `plot(history)`. This will generate plots for the *loss*, *MSE*, and *MAE*.
The results can slightly differ every time the training is run due to random weight initializations performed by the back end.
Once the model is trained, we can perform inference on new data points with the `predict_on_batch()` function. Here we are passing three data points.
```
model %>% predict_on_batch(c(7, 50, -220))
#> [,1]
#> [1,] 3.465378
#> [2,] 24.752701
#> [3,] -108.911880
```
Now, try setting a higher learning rate, for example, \\(0\.05\\). With this learning rate, the algorithm will converge much faster. On my computer, at epoch \\(11\\) the loss was already \\(0\\).
One practical thing to note is that if you make any changes in the `compile()` or `fit()` functions, you will have to rerun the code that instantiates and defines the network. This is because the model object saves the current state including the learned weights. If you rerun the `fit()` function on a previously trained model, it will start with the previously learned weights.
8\.3 Classification with Neural Networks
----------------------------------------
Neural networks are trained iteratively by modifying their weights while aiming to minimize the loss function. When the network predicts real numbers, the MSE loss function is normally used. For classification problems, the network should predict the most likely class out of \\(k\\) possible categories. To make a neural network work for classification problems, we need to introduce new elements to its architecture:
1. Add more units to the output layer.
2. Use a **softmax** activation function in the output layer.
3. Use **average cross\-entropy** as the loss function.
Let’s start with point number \\(1\\) (add more units to the output layer). This means that if the number of classes is \\(k\\), then the last layer needs to have \\(k\\) units, one for each class. That’s it! Figure [8\.18](deeplearning.html#fig:nnCrossEntropy) shows an example of a neural network with an output layer having \\(3\\) units. Each unit predicts a score for each of the \\(3\\) classes. Let’s call the vector of predicted scores \\(y'\\).
FIGURE 8\.18: Neural network with 3 output scores. Softmax is applied to the scores and the cross\-entropy with the true scores is calculated. This gives us an estimate of the similarity between the network’s predictions and the true values.
Point number \\(2\\) says that a **softmax** activation function should be used in the output layer. When training the network, just as with regression, we need a way to compute the error between the predicted values \\(y'\\) and the true values \\(y\\). In this case, \\(y\\) is a one\-hot encoded vector with a \\(1\\) at the position of the true class and \\(0s\\) elsewhere. If you are not familiar with one\-hot encoding, you can check the topic in chapter [5](preprocessing.html#preprocessing). As opposed to other classifiers like decision trees, \\(k\\)\-NN, etc., neural networks need the classes to be one\-hot encoded.
With regression problems, one way to compare the prediction with the true value is by using the squared difference: \\((y' \- y)^2\\). With classification, \\(y\\) and \\(y'\\) are vectors so we need another way to compare them. The true values \\(y\\) are represented as a vector of probabilities with a \\(1\\) at the position of the true class. The output scores \\(y'\\) do not necessarily sum up to \\(1\\) thus, they are not proper probabilities. Before comparing \\(y\\) and \\(y'\\) we need both to be probabilities. The **softmax** activation function is used to convert \\(y'\\) into a vector of probabilities. The softmax function is applied individually to each element of a vector:
\\\[\\begin{equation}
softmax(\\boldsymbol{x},i) \= \\frac{e^{\\boldsymbol{x}\_i}}{\\sum\_{j}{e^{\\boldsymbol{x}\_j}}}
\\tag{8\.9}
\\end{equation}\\]
where \\(\\boldsymbol{x}\\) is a vector and \\(i\\) is an index pointing to a particular element in the vector. Thus, to convert \\(y'\\) into a vector of probabilities we need to apply softmax to each of its elements. One thing to note is that this activation function depends on all the values in the vector (the output values of all units). Figure [8\.18](deeplearning.html#fig:nnCrossEntropy) shows the resulting vector of probabilities after applying softmax to each element of \\(y'\\). In R this can be implemented like the following:
```
# Scores from the figure.
scores <- c(3.0, 0.03, 1.2)
# Softmax function.
softmax <- function(scores){
  exp(scores) / sum(exp(scores))
}
probabilities <- softmax(scores)
print(probabilities)
#> [1] 0.82196 0.04217 0.13587
print(sum(probabilities)) # Should sum up to 1.
#> [1] 1
```
We used R’s vectorization capabilities to compute the final vector of probabilities within the same function without having to iterate through each element. When using Keras, these operations are efficiently computed by the backend (for example, TensorFlow).
Finally, point \\(3\\) states that we need to use **average cross\-entropy** as the **loss function**. Now that we have converted \\(y'\\) into probabilities, we can compute its dissimilarity with \\(y\\). The distance (dissimilarity) between two vectors (\\(A\\),\\(B\\)) of probabilities can be computed using **cross\-entropy**:
\\\[\\begin{equation}
CE(A,B) \= \- \\sum\_{i}{B\_i log(A\_i)}
\\tag{8\.10}
\\end{equation}\\]
Thus, to get the dissimilarity between \\(y'\\) and \\(y\\) first we apply softmax to \\(y'\\) (to transform it into proper probabilities) and then, we compute the cross entropy between the resulting vector of probabilities and \\(y\\):
\\\[\\begin{equation}
CE(softmax(y'),y).
\\end{equation}\\]
In R this can be implemented with the following:
```
# Cross-entropy
CE <- function(A, B){
  - sum(B * log(A))
}
y <- c(1, 0, 0)
print(CE(softmax(scores), y))
#> [1] 0.1961
```
Be aware that when computing the cross\-entropy with equation [(8\.10\)](deeplearning.html#eq:crossentropy) **order matters**. The first argument should be the predicted probabilities derived from \\(y'\\) and the second argument should be the true one\-hot encoded vector \\(y\\). This is because the log function is applied to the first argument, and we don't want to apply it to a vector containing \\(0s\\) (like the one\-hot vector) since \\(log(0)\\) is undefined. Most of the time, the predicted scores \\(y'\\) will be different from \\(0\\), which is why we prefer to apply the log function to them. In the very rare case when a predicted score is exactly \\(0\\), we can add a very small number to it. In practice, this is taken care of by the backend (e.g., TensorFlow).
Now we know how to compute the cross\-entropy for each training instance. The total loss function is then the **average cross\-entropy across the training points**. Using the `softmax()` and `CE()` functions defined above, this can be sketched as follows (the scores and labels here are made up for illustration):
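```
# Predicted scores for 3 training instances (one row each).
scores <- rbind(c(3.0, 0.03, 1.2),
                c(0.2, 2.5, 0.1),
                c(1.1, 0.4, 2.0))
# One-hot encoded true classes.
y <- rbind(c(1, 0, 0),
           c(0, 1, 0),
           c(0, 0, 1))
# Average cross-entropy across the training points.
losses <- sapply(1:nrow(scores), function(i){
  CE(softmax(scores[i, ]), y[i, ])
})
mean(losses) # Average loss over the batch.
```
The next section shows how to build a neural network for classification using Keras.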
### 8\.3\.1 Classification of Electromyography Signals
`keras_electromyography.R`
In this example, we will train a neural network with Keras to classify hand gestures based on muscle electrical activity. The *ELECTROMYOGRAPHY* dataset will be used here. The electrical activity was recorded with an electromyography (EMG) sensor worn as an armband. The data were collected and made available by Yashuk ([2019](#ref-kirill)). The armband device has \\(8\\) sensors which are placed on the skin surface and measure electrical activity from the right forearm at a sampling rate of \\(200\\) Hz. A video of the device can be found here: <https://youtu.be/OuwDHfY2Awg>
The data contains \\(4\\) different gestures: 0\-rock, 1\-scissors, 2\-paper, 3\-OK, and has \\(65\\) columns. The last column is the class label from \\(0\\) to \\(3\\). The first \\(64\\) columns are electrical measurements: \\(8\\) consecutive readings for each of the \\(8\\) sensors. The objective is to use the first \\(64\\) variables to predict the class.
The script `keras_electromyography.R` has the full code. We start by splitting the `dataset` into train (\\(60\\%\\)), validation (\\(10\\%\\)) and test (\\(30\\%\\)) sets. We will use the validation set to monitor the performance during each epoch. We also need to normalize the three sets but only learn the normalization parameters from the train set. The `normalize()` function included in the script will do the job.
One last thing we need to do is to format the data as matrices and one\-hot encode the class. The following code defines a function that takes as input a data frame and the expected number of classes. It assumes that the first columns are the features and the last column contains the class. First, it converts the features into a matrix and stores them in `x`. Then, it converts the class into an array and one\-hot encodes it using the `to_categorical()` function from Keras. The classes are stored in `y` and the function returns a list with the features and one\-hot encoded classes. Then, we can call the function with the train, validation, and test sets.
```
# Define a function to format features and one-hot encode the class.
format.to.array <- function(data, numclasses = 4){
  x <- as.matrix(data[, 1:(ncol(data)-1)])
  y <- as.array(data[, ncol(data)])
  y <- to_categorical(y, num_classes = numclasses)
  l <- list(x = x, y = y)
  return(l)
}
# Format data
trainset <- format.to.array(trainset, numclasses = 4)
valset <- format.to.array(valset, numclasses = 4)
testset <- format.to.array(testset, numclasses = 4)
```
Let’s print the first one\-hot encoded classes from the train set:
```
head(trainset$y)
#> [,1] [,2] [,3] [,4]
#> [1,] 0 0 1 0
#> [2,] 0 0 1 0
#> [3,] 0 0 1 0
#> [4,] 0 0 0 1
#> [5,] 1 0 0 0
#> [6,] 0 0 0 1
```
The first three instances belong to the class *‘paper’* because the \\(1s\\) are in the third position. The corresponding integers are 0\-rock, 1\-scissors, 2\-paper, 3\-OK. So *‘paper’* comes in the third position. The fourth instance belongs to the class *‘OK’*, the fifth to *‘rock’*, and so on.
Now it’s time to define the neural network architecture! We will do so inside a function:
```
# Define the network's architecture.
get.nn <- function(ninputs = 64, nclasses = 4, lr = 0.01){
  model <- keras_model_sequential()
  model %>%
    layer_dense(units = 32, activation = 'relu',
                input_shape = ninputs) %>%
    layer_dense(units = 16, activation = 'relu') %>%
    layer_dense(units = nclasses, activation = 'softmax')
  model %>% compile(
    loss = 'categorical_crossentropy',
    optimizer = optimizer_sgd(lr = lr),
    metrics = c('accuracy')
  )
  return(model)
}
```
The first argument takes the number of inputs (features), the second argument specifies the number of classes and the last argument is the learning rate \\(\\alpha\\). The first line instantiates an empty keras sequential model. Then we add three layers. The first two are hidden layers and the last one will be the output layer. The input layer is implicitly defined when setting the `input_shape` parameter in the first layer. The first hidden layer has \\(32\\) units with a ReLU activation function. Since this is the first hidden layer, we also need to specify what is the expected input by setting the `input_shape`. In this case, the number of input features is \\(64\\). The next hidden layer has \\(16\\) ReLU units. For the output layer, the number of units needs to be equal to the number of classes (\\(4\\), in this case). Since this is a classification problem we also set the activation function to `softmax`.
Then, the model is compiled and the loss function is set to `categorical_crossentropy` because this is a classification problem. Stochastic gradient descent is used with a learning rate passed as a parameter. During training, we want to monitor the *accuracy*. Finally, the function returns the compiled model.
Now we can call our function to create the model. This one will have \\(64\\) inputs and \\(4\\) outputs and the learning rate is set to \\(0\.01\\). It is always useful to print a summary of the model with the `summary()` function.
```
model <- get.nn(64, 4, lr = 0.01)
summary(model)
```
FIGURE 8\.19: Summary of the network.
From the summary, we can see that the network has \\(3\\) layers. The second column shows the output shape which in this case corresponds to the number of units in each layer. The last column shows the number of parameters of each layer. For example, the first layer has \\(2080\\) parameters! Those come from the weights and biases. There are \\(64\\) (inputs) \* \\(32\\) (units) \= \\(2048\\) weights plus the \\(32\\) biases (one for each unit). The biases are included by default on each layer unless otherwise specified.
The second layer receives \\(32\\) inputs on each of its \\(16\\) units. Thus \\(32\\) \* \\(16\\) \+ \\(16\\) (biases) \= \\(528\\). The last layer has \\(16\\) inputs from the previous layer on each of its \\(4\\) units plus \\(4\\) biases, giving a total of \\(68\\) parameters. In total, the network has \\(2676\\) parameters. Here, we see how fast the number of parameters grows when adding more layers and units. As a quick sanity check, these counts can be reproduced with a simple computation (this snippet is mine, not part of the original script):
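```
# Parameters per dense layer: inputs * units + biases.
c(layer1 = 64*32 + 32, layer2 = 32*16 + 16, output = 16*4 + 4)
#> layer1 layer2 output
#>   2080    528     68
```
Now, we use the `fit()` function to train the model.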
```
history <- model %>% fit(
  trainset$x, trainset$y,
  epochs = 300,
  batch_size = 8,
  validation_data = list(valset$x, valset$y),
  verbose = 1,
  view_metrics = TRUE
)
```
The model is trained for \\(300\\) epochs with a batch size of \\(8\\). We used the `validation_data` parameter to specify the validation set to compute the performance on unseen data. The training will take some minutes to complete. Bigger models can take hours or even several days. Thus, it is a good idea to save a model once it is trained. You can do so with the `save_model_hdf5()` or `save_model_tf()` methods. The former saves the model in `hdf5` format while the latter saves it in TensorFlow's `SavedModel` format. The `SavedModel` is stored as a directory containing the necessary serialized files to restore the model's state.
```
# Save model as hdf5.
save_model_hdf5(model, "electromyography.hdf5")
# Alternatively, save model as SavedModel.
save_model_tf(model, "electromyography_tf")
```
We can load a previously saved model with:
```
# Load model.
model <- load_model_hdf5("electromyography.hdf5")
# Or alternatively if the model is in SavedModel format.
model <- load_model_tf("electromyography_tf")
```
The source code files include the trained models used in this book in case you want to reproduce the results. Both the `hdf5` and `SavedModel` versions are included.
Due to some version incompatibilities with the h5py underlying library, you may get the following error when trying to load the `hdf5` files. `AttributeError: 'str' object has no attribute 'decode'`. If you encounter this error, load the models in `SavedModel` format using the `load_model_tf()` method instead.
Figure [8\.20](deeplearning.html#fig:nnEMGloss) shows the train and validation loss and accuracy as produced by `plot(history)`. We see that both the training and validation loss are decreasing over time. The accuracy increases over time.
FIGURE 8\.20: Loss and accuracy of the electromyography model.
Now, we evaluate the performance of the trained model with the test set using the `evaluate()` function.
```
# Evaluate model.
model %>% evaluate(testset$x, testset$y)
#> loss accuracy
#> 0.4045424 0.8474576
```
The accuracy was pretty decent (\\(\\approx 84\\%\\)). To get the actual class predictions you can use the `predict_classes()` function.
```
# Predict classes.
classes <- model %>% predict_classes(testset$x)
head(classes)
#> [1] 2 2 1 3 0 1
```
Note that this function returns the classes with numbers starting with \\(0\\) just as in the original dataset.
Sometimes it is useful to access the actual predicted scores for each class. This can be done with the `predict_on_batch()` function.
```
# Make predictions on the test set.
predictions <- model %>% predict_on_batch(testset$x)
head(predictions)
#> [,1] [,2] [,3] [,4]
#> [1,] 1.957638e-05 8.726048e-02 7.708290e-01 1.418910e-01
#> [2,] 3.937355e-05 2.571992e-04 9.965665e-01 3.136863e-03
#> [3,] 4.261451e-03 7.343097e-01 7.226156e-02 1.891673e-01
#> [4,] 8.669784e-06 2.088269e-04 1.339851e-01 8.657974e-01
#> [5,] 9.999956e-01 7.354113e-26 1.299388e-08 4.451362e-06
#> [6,] 2.513005e-05 9.914154e-01 7.252949e-03 1.306421e-03
```
To obtain the actual classes from the scores, we can compute the index of the maximum column. Then we subtract \\(1\\) so the classes start at \\(0\\).
```
classes <- max.col(predictions) - 1
head(classes)
#> [1] 2 2 1 3 0 1
```
Since the true classes are also one\-hot encoded we need to do the same to get the ground truth.
```
groundTruth <- max.col(testset$y) - 1
# Compute accuracy.
sum(classes == groundTruth) / length(classes)
#> [1] 0.8474576
```
The integers are mapped to class strings. Then, a confusion matrix is generated.
```
# Convert classes to strings.
# Class mapping by index: rock 0, scissors 1, paper 2, ok 3.
mapping <- c("rock", "scissors", "paper", "ok")
# Need to add 1 because indices in R start at 1.
str.predictions <- mapping[classes+1]
str.groundTruth <- mapping[groundTruth+1]
library(caret)
cm <- confusionMatrix(as.factor(str.predictions),
                      as.factor(str.groundTruth))
cm$table
#> Reference
#> Prediction ok paper rock scissors
#> ok 681 118 24 27
#> paper 54 681 47 12
#> rock 29 18 771 1
#> scissors 134 68 8 867
```
Now, try to modify the network by making it deeper (adding more layers) and fine\-tune the hyperparameters like the learning rate, batch size, etc., to increase the performance.
8\.4 Overfitting
----------------
One important thing to look at when training a network is **overfitting**. That is, when the model memorizes instead of learning (see chapter [1](intro.html#intro)). Overfitting means that the model becomes very specialized at mapping inputs to outputs from the *train set* but fails to do so with new *test samples*. One reason is that a model can become so complex, with so many parameters, that it perfectly adapts to its training data but misses more general patterns, preventing it from performing well on unseen instances. To diagnose this, one can plot loss/accuracy curves during training epochs.
FIGURE 8\.21: Loss and accuracy curves.
In Figure [8\.21](deeplearning.html#fig:lossAccuracy) we can see that after some epochs the *validation loss* starts to increase even though the *train loss* is still decreasing. This is because the model is getting better on reducing the error on the train set but its performance starts to decrease when presented with new instances. Conversely, one can observe a similar effect with the accuracy. The model keeps improving its performance on the train set but at some point, the accuracy on the validation set starts to decrease. Usually, one stops the training before overfitting starts to occur. In the following, I will introduce you to two common techniques to combat overfitting in neural networks.
### 8\.4\.1 Early Stopping
`keras_electromyography_earlystopping.R`
Neural networks are trained for several epochs using gradient descent. But the question is: *For how many epochs?* As can be seen in Figure [8\.21](deeplearning.html#fig:lossAccuracy), too many epochs can lead to overfitting and too few can cause underfitting. *Early stopping* is a simple but effective method to reduce the risk of overfitting. The method consists of setting a large number of epochs and stopping the update of the network's parameters when a condition is met. For example, one condition can be to stop when there is no performance improvement on the validation set after \\(n\\) epochs or when there is a decrease of some percent in accuracy.
Keras provides some mechanisms to implement early stopping and this is accomplished via **callbacks**. A callback is a function that is run at different stages during training such as at the beginning or end of an epoch or at the beginning or end of a batch operation. Callbacks are passed as a list to the `fit()` function. You can define custom callbacks or use some of the built\-in ones including `callback_early_stopping()`. This callback will cause the training to stop when a metric stops improving. The metric can be *accuracy*, *loss*, etc. The following callback will stop the training if after \\(10\\) epochs (`patience`) there is no improvement of at least \\(1\\%\\) (`min_delta`) in accuracy on the validation set.
```
callback_early_stopping(monitor = "val_acc",
                        min_delta = 0.01,
                        patience = 10,
                        verbose = 1,
                        mode = "max")
```
The `min_delta` parameter specifies the minimum change in the monitored metric to qualify as an improvement. The `mode` specifies the direction of improvement: if it is set to `"min"`, training stops when the monitored metric has stopped decreasing; if it is set to `"max"`, training stops when the metric has stopped increasing.
It may be the case that the best validation performance was achieved not in the last epoch but at some previous point. By setting the `restore_best_weights` parameter to `TRUE` the model weights from the epoch with the best value of the monitored metric will be restored.
The script `keras_electromyography_earlystopping.R` shows how to use the early stopping callback in Keras with the electromyography dataset. The following code is an extract that shows how to define the callback and pass it to the `fit()` function.
```
# Define early stopping callback.
my_callback <- callback_early_stopping(monitor = "val_acc",
                                       min_delta = 0.01,
                                       patience = 50,
                                       verbose = 1,
                                       mode = "max",
                                       restore_best_weights = TRUE)

history <- model %>% fit(
  trainset$x, trainset$y,
  epochs = 500,
  batch_size = 8,
  validation_data = list(valset$x, valset$y),
  verbose = 1,
  view_metrics = TRUE,
  callbacks = list(my_callback)
)
```
This code will cause the training to stop if after \\(50\\) epochs there is no improvement in accuracy of at least \\(1\\%\\) and will restore the model’s weights to the ones during the epoch with the highest accuracy. Figure [8\.22](deeplearning.html#fig:earlyStopping) shows how the training stopped at epoch \\(241\\).
FIGURE 8\.22: Early stopping example.
If we evaluate the final model on the test set, we see that the accuracy is \\(86\.4\\%\\), a noticeable increase compared to the \\(84\.7\\%\\) that we got when training for \\(300\\) epochs without early stopping.
```
# Evaluate model.
model %>% evaluate(testset$x, testset$y)
#> $loss
#> [1] 0.3777530
#> $acc
#> [1] 0.8641243
```
### 8\.4\.2 Dropout
Dropout is another technique to reduce overfitting proposed by Srivastava et al. ([2014](#ref-srivastava14)). It consists of ‘dropping’ some of the units from a hidden layer for each sample during training. In theory, it can also be applied to input and output layers but that is not very common. The incoming and outgoing connections of a dropped unit are discarded. Figure [8\.23](deeplearning.html#fig:imgDropout) shows an example of applying dropout to a network. In Figure [8\.23](deeplearning.html#fig:imgDropout) b, the middle unit was removed from the network whereas in Figure [8\.23](deeplearning.html#fig:imgDropout) c, the top and bottom units were removed.
FIGURE 8\.23: Dropout example.
Each unit has an associated probability \\(p\\) (independent of other units) of being dropped. This probability is another hyperparameter, but typically it is set to \\(0\.5\\). Thus, during each iteration and for each sample, half of the units are discarded. The effect of this is having simpler networks (see Figure [8\.23](deeplearning.html#fig:imgDropout)) that are thus less prone to overfitting. Intuitively, you can also think of dropout as training an **ensemble of neural networks**, each having a slightly different structure.
From the perspective of one unit that receives inputs from the previous hidden layer with dropout, approximately half of its incoming connections will be gone (if \\(p\=0\.5\\)). See Figure [8\.24](deeplearning.html#fig:dropoutUnit).
FIGURE 8\.24: Incoming connections to one unit when the previous layer has dropout.
Dropout has the effect of making units not rely on any single incoming connection. This makes the whole network able to compensate for the lack of connections by learning alternative paths. In practice and for many applications, this results in a more robust model. A side effect of applying dropout is that the expected value of the activation function of a unit will be diminished because some of the previous activations will be \\(0\\). Recall that the output of a neuron is computed as:
\\\[\\begin{equation}
f(\\boldsymbol{x}) \= g(\\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b)
\\end{equation}\\]
where \\(\\boldsymbol{x}\\) contains the input values from the previous layer, \\(\\boldsymbol{w}\\) the corresponding weights and \\(g()\\) is the activation function. With dropout, approximately half of the values of \\(\\boldsymbol{x}\\) will be \\(0\\) (if \\(p\=0\.5\\)). To compensate for that, the input values need to be scaled, in this case, by a factor of \\(2\\).
\\\[\\begin{equation}
f(\\boldsymbol{x}) \= g(\\boldsymbol{w} \\cdot 2 \\boldsymbol{x} \+ b)
\\end{equation}\\]
In modern implementations, this scaling is done during training so at inference time there is no need to apply dropout; the predictions are done as usual. In Keras, the `layer_dropout()` can be used to add dropout to any layer. Its parameter `rate` is a float between \\(0\\) and \\(1\\) that specifies the fraction of units to drop. The following code snippet builds a neural network with \\(2\\) hidden layers. Then, dropout with a rate of \\(0\.5\\) is applied to both of them.
```
model <- keras_model_sequential()
model %>%
  layer_dense(units = 256, activation = 'relu', input_shape = 1000) %>%
  layer_dropout(0.5) %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(0.5) %>%
  layer_dense(units = 2, activation = 'softmax')
```
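To see why this scaling preserves the expected activation, here is a small simulation in plain R (a sketch of the idea, not how Keras implements it internally):
```
set.seed(123)
x <- c(1.0, 2.0, 3.0, 4.0)  # Activations from the previous layer.
p <- 0.5                    # Dropout probability.

# Simulate many training iterations with dropout and scaling.
scaled.sums <- replicate(10000, {
  mask <- rbinom(length(x), 1, 1 - p)  # 1 = keep, 0 = drop.
  sum(mask * x) / (1 - p)              # Scale by 1/(1-p), i.e., by 2.
})
mean(scaled.sums)  # Close to sum(x) = 10, the value without dropout.
```
On average, the scaled sum matches what the unit would receive without dropout, which is why no special treatment is needed at inference time.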
It is very common to apply dropout to networks in computer vision because the inputs are images or videos containing a lot of input values (pixels) but the number of samples is often very limited causing overfitting. In section [8\.6](deeplearning.html#cnns) Convolutional Neural Networks (CNNs) will be introduced. They are suitable for computer vision problems. In the corresponding smile detection example (section [8\.8](deeplearning.html#cnnSmile)), we will use dropout. When building CNNs, dropout is almost always added to the different layers.
8\.5 Fine\-tuning a Neural Network
----------------------------------
When deciding on a neural network's architecture, no formula will tell you how many hidden layers or how many units each layer should have. There is also no formula for determining the batch size, the learning rate, the type of activation function, for how many epochs we should train the network, and so on. All of those are called the **hyperparameters** of the network. Hyperparameter tuning is a complex optimization problem and there is a lot of research going on that tackles the issue from different angles. My suggestion is to start with a simple architecture that has been used before to solve a similar problem and then fine\-tune it for your specific task. If you are not aware of such a network, there are some guidelines (described below) to get you started. Always keep in mind that those are only recommendations, so you do not need to abide by them and you should feel free to try configurations that deviate from them depending on your problem at hand.
Training neural networks is a time\-consuming process, especially in deep networks. Training a network can take from several minutes to weeks. In many cases, performing cross\-validation is not feasible. A common practice is to divide the data into train/validation/test sets. The training data is used to train a network with a given architecture and a set of hyperparameters. The validation set is used to evaluate the generalization performance of the network. Then, you can try different architectures and hyperparameters and evaluate the performance again and again with the validation set. Typically, the network’s performance is monitored during training epochs by plotting the loss and accuracy of the train and validation sets. Once you are happy with your model, you test its performance on the test set **only once** and that is the result that is reported.
Here are some starting\-point guidelines. However, also take into consideration that these hyperparameters can be dependent on each other: if you modify one of them, it may impact others.
**Number of hidden layers.**
Most of the time, one or two hidden layers are enough to solve problems that are not too complex. A good approach is to start with one hidden layer and, if that one is not enough to capture the complexity of the problem, add another layer, and so on.
**Number of units.**
If a network has too few units it can underfit, that is, the model will be too simple to capture the underlying data patterns. If the network has too many units this can result in overfitting. Also, it will take more time to learn the parameters. Some guidelines mention that the number of units should be somewhere between the number of input features and the number of units in the output layer[25](#fn25). Guang\-Bin Huang ([2003](#ref-huang2003)) has even proposed a formula for the two\-hidden layer case to calculate the number of units that are enough to learn \\(N\\) samples: \\(2\\sqrt{(m\+2\)N}\\) where \\(m\\) is the number of output units.
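For instance, under this formula, a two\-hidden\-layer network for \\(N\=1000\\) training samples and \\(m\=4\\) output units would need roughly \\(155\\) units:
```
# Huang (2003): units sufficient to learn N samples
# with m output units in a two-hidden-layer network.
N <- 1000; m <- 4
2 * sqrt((m + 2) * N)
#> [1] 154.9193
```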
My suggestion is to first gain some practice and intuition with simple problems. A good way to do so is with the TensorFlow playground (<https://playground.tensorflow.org/>) created by Daniel Smilkov and Shan Carter. This is a web\-based implementation of a neural network that you can fine\-tune to solve a predefined set of classification and regression problems. For example, Figure [8\.25](deeplearning.html#fig:playground) shows how I tried to solve the XOR problem with a neural network with \\(1\\) hidden layer and \\(1\\) unit with a sigmoid activation function. After more than \\(1,000\\) epochs the loss is still quite high (\\(0\.38\\)). Try to add more neurons and/or hidden layers and see if you can solve the XOR problem with fewer epochs.
FIGURE 8\.25: Screenshot of the TensorFlow playground. (Daniel Smilkov and Shan Carter, <https://github.com/tensorflow/playground> (Apache License 2\.0\)).
**Batch size.**
Batch sizes typically range between \\(4\\) and \\(512\\). Big batch sizes provide a better estimate of the gradient but are more computationally expensive. On the other hand, small batch sizes are faster to compute but introduce more noise into the gradient estimation, requiring more epochs to converge. When using a GPU or other specialized hardware, the computations can be performed in parallel, thus allowing bigger batch sizes to be computed in a reasonable time. Some people argue that the noise introduced with small batch sizes is good to escape from local minima. Keskar et al. ([2016](#ref-keskar2016)) showed that in practice, big batch sizes can result in degraded models. A good starting point is \\(32\\), which is the default in Keras.
**Learning rate.**
This is one of the most important hyperparameters. The learning rate specifies how fast gradient descent ‘moves’ when trying to find an optimal minimum. However, this doesn't mean that the algorithm will *learn* faster if the learning rate is set to a high value. If it is too high, the loss can start oscillating. If it is too low, the learning will take a lot of time. One way to fine\-tune it is to start with the default one. In Keras, the default learning rate for stochastic gradient descent is \\(0\.01\\). Then, based on the loss plot across epochs, you can decrease/increase it. If learning is taking long, try to increase it. If the loss seems to be oscillating or stuck, try reducing it. Typical values are \\(0\.1\\), \\(0\.01\\), \\(0\.001\\), \\(0\.0001\\), \\(0\.00001\\). In addition to stochastic gradient descent, Keras provides implementations of other optimizers[26](#fn26) like Adam[27](#fn27) which have adaptive learning rates, but still, one needs to specify an initial one.
Before training a network it is a good practice to shuffle the rows of the train set if the data points are independent. Neural networks tend to ‘forget’ patterns learned from previous points during training as the weights are updated. For example, if the train set happens to be ordered by class labels, the network may ‘forget’ how to identify the first classes and will put more emphasis on the last ones.
It is also a good practice to normalize the input features before training a network. This will make the training process more efficient.
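For example, a minimal sketch of min\-max normalization that learns its parameters from the train set only could look like the following (the variable names are illustrative, not from the book's scripts):
```
# Learn the normalization parameters from the train set only.
# Assumes train.features and test.features are numeric matrices
# with no constant columns.
mins <- apply(train.features, 2, min)
maxs <- apply(train.features, 2, max)

# Apply the same parameters to any set (train, validation, or test).
normalize.with <- function(data, mins, maxs){
  sweep(sweep(data, 2, mins, "-"), 2, maxs - mins, "/")
}
train.norm <- normalize.with(train.features, mins, maxs)
test.norm <- normalize.with(test.features, mins, maxs)
```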
8\.6 Convolutional Neural Networks
----------------------------------
Convolutional Neural Networks, or CNNs for short, have become extremely popular due to their capacity to solve computer vision problems. Most of the time they are used for image classification tasks but they can also be used for regression and for time series data. If we wanted to perform image classification with a traditional neural network, we would first need to build a feature vector by either:
1. extracting features from the image or,
2. flattening the image pixels into a 1D array.
The first solution requires a lot of image processing expertise and domain knowledge. Extracting features from images is not a trivial task and requires a lot of preprocessing to reduce noise, artifacts, segment the objects of interest, remove background, etc. Additionally, considerable effort is spent on feature engineering. The drawback of the second solution is that spatial information is lost, that is, the relationship between neighboring pixels. CNNs solve the two previous problems by automatically extracting features while preserving spatial information. As opposed to traditional networks, CNNs can take as input \\(n\\)\-dimensional images and process them efficiently. The main building blocks of a CNN are:
1. **Convolution layers**
2. **Pooling operations**
3. **Traditional fully connected layers**
Figure [8\.26](deeplearning.html#fig:cnnArchitecture) shows a simple CNN and its basic components. First, the input image goes through a convolution layer with \\(4\\) kernels (details about the convolution operation are described in the next subsection). This layer is in charge of extracting features by applying the kernels on top of the image. The result of this operation is a convolved image, also known as **feature maps**. The number of feature maps is equal to the number of kernels, in this case, \\(4\\). Then, a **pooling operation** is applied on top of the feature maps. This operation reduces the size of the feature maps by downsampling them (details on this in a following subsection). The output of the pooling operation is a set of feature maps with reduced size. Here, the outputs are \\(4\\) reduced feature maps since the pooling operation is applied to each feature map independently of the others. Then, the feature maps are flattened into a one\-dimensional array. Conceptually, this array represents all the features extracted from the previous steps. These features are then used as inputs to a neural network with its respective input, hidden, and output layers. In the figure, an ‘\*’ and underlined text indicate that parameter learning occurs in that layer. For example, in the convolution layer, the parameters of the kernels need to be learned. On the other hand, the pooling operation does not require parameter learning since it is a fixed operation. Finally, the parameters of the neural network are learned too, including the hidden layers and the output layer.
FIGURE 8\.26: Simple CNN architecture. An ‘\*’ indicates that parameter learning occurs.
One can build more complex CNNs by stacking more convolution layers and pooling operations. By doing so, the level of abstraction increases. For example, the first convolution extracts simple features like horizontal, vertical, and diagonal lines, etc. The next convolution could extract more complex features like squares, triangles, and so on. The parameter learning of all layers (including the convolution layers) occurs during the same forward and backpropagation step just as with a normal neural network. Both the features and the classification task are learned at the same time! During learning, batches of images are forward propagated and the parameters are adjusted accordingly to minimize the error (for example, the average cross\-entropy for classification). The same methods for training normal neural networks are used for CNNs, for example, stochastic gradient descent.
Each kernel in a convolution layer can have an associated bias which is also a parameter to be learned. By default, Keras uses a bias for each kernel. Furthermore, an activation function can be applied to the outputs of the convolution layer. This is applied element\-wise. ReLU is the most common one.
At inference time, the convolution layers and pooling operations act as feature extractors by generating feature maps that are ultimately flattened and passed to a normal neural network. It is also common to use the first layers as feature extractors and then replace the neural network with another model (Random Forest, SVM, etc.). In the following sections, details about the convolution and pooling operations are presented.
### 8\.6\.1 Convolutions
Convolutions are used to automatically extract feature maps from images. A convolution operation consists of a **kernel** also known as a **filter** which is a matrix with real values. Kernels are usually much smaller than the original image. For example, for a grayscale image of height and width of \\(100\\)x\\(100\\) a typical kernel size would be \\(3\\)x\\(3\\). The size of the kernel is a hyperparameter. The convolution operation consists of applying the kernel over the image starting at the upper left corner and moving forward row by row until reaching the bottom right corner. The **stride** controls how many elements the kernel is moved at a time and this is also a hyperparameter. A typical value for the stride is \\(1\\).
The convolution operation computes the sum of the element\-wise product between the kernel and the image region it is covering. The output of this operation is used to generate the convolved image (feature map). Figure [8\.27](deeplearning.html#fig:cnnConv) shows the first two iterations and the final iteration of the convolution operation on an image. In this case, the kernel is a \\(3\\)x\\(3\\) matrix with \\(1\\)s in its first row and \\(0\\)s elsewhere. The original image has a size of \\(5\\)x\\(5\\)x\\(1\\) (height, width, depth) and it seems to be a number \\(7\\).
FIGURE 8\.27: Convolution operation with a kernel of size 3x3 and stride\=1\. Iterations 1, 2, and 9\.
In the first iteration, the kernel is aligned with the upper left corner of the original image. An element\-wise multiplication is performed and the results are summed. The operation is shown at the top of the figure. In the first iteration, the result was \\(3\\) and it is set at the corresponding position of the final convolved image (feature map). In the next iteration, the kernel is moved one position to the right and again, the final result is \\(3\\) which is set in the next position of the convolved image. The process continues until the kernel reaches the bottom right corner. At the last iteration (9\), the result is \\(1\\).
Now, the convolved image (feature map) represents the features extracted by this particular kernel. Also, note that the feature map is a \\(3\\)x\\(3\\) matrix which is smaller than the original image. It is also possible to force the feature map to have the same size as the original image by padding it with zeros.
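To make the operation concrete, here is a didactic implementation of the convolution in plain R (deep learning libraries use much more efficient routines). The image is a re\-creation of a \\(7\\)\-like \\(5\\)x\\(5\\) image similar to the one in Figure [8\.27](deeplearning.html#fig:cnnConv):
```
# Didactic 2D convolution (stride = 1, no padding).
convolve2d <- function(img, kernel){
  kh <- nrow(kernel); kw <- ncol(kernel)
  oh <- nrow(img) - kh + 1; ow <- ncol(img) - kw + 1
  out <- matrix(0, oh, ow)
  for (i in 1:oh) {
    for (j in 1:ow) {
      region <- img[i:(i + kh - 1), j:(j + kw - 1)]
      out[i, j] <- sum(region * kernel)  # Element-wise product, then sum.
    }
  }
  out
}

# A 7-like 5x5 image and the kernel with 1s in its first row.
img <- rbind(c(1, 1, 1, 1, 1),
             c(0, 0, 0, 1, 0),
             c(0, 0, 1, 0, 0),
             c(0, 1, 0, 0, 0),
             c(1, 0, 0, 0, 0))
kernel <- rbind(c(1, 1, 1),
                c(0, 0, 0),
                c(0, 0, 0))
convolve2d(img, kernel)  # The resulting 3x3 feature map.
#>      [,1] [,2] [,3]
#> [1,]    3    3    3
#> [2,]    0    1    1
#> [3,]    1    1    1
```
Note how the first row of the feature map, where the horizontal line is, produces the highest responses.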
Before learning starts, the kernel values are initialized at random. In this example, the kernel has \\(1\\)s in the first row, so it has \\(3\\)x\\(3\=9\\) parameters. The same kernel is applied across the entire image, which is known as ‘parameter sharing’ and is what makes CNNs so efficient. Since our kernel has \\(1\\)s at the top and \\(0\\)s elsewhere, it seems that this kernel learned to detect horizontal lines. If we look at the final convolved image, we see that the horizontal lines were emphasized by this kernel. This would be a good candidate kernel to differentiate between \\(7\\)s and \\(0\\)s, for example, since \\(0\\)s do not have long horizontal lines. But maybe it will have difficulties discriminating between \\(7\\)s and \\(5\\)s since both have horizontal lines at the top.
In this example, only \\(1\\) kernel was used but in practice, you may want more kernels, each in charge of identifying the best features for the given problem. For example, another kernel could learn to identify diagonal lines which would be useful to differentiate between \\(7\\)s and \\(5\\)s. The number of kernels per convolution layer is a hyperparameter. In the previous example, we could have defined to have \\(4\\) kernels instead of one. In that case, the output of that layer would have been \\(4\\) feature maps of size \\(3\\)x\\(3\\) each (Figure [8\.28](deeplearning.html#fig:cnn4kernels)).
FIGURE 8\.28: A convolution with 4 kernels. The output is 4 feature maps.
What would be the output of a convolution layer with \\(4\\) kernels of size \\(3\\)x\\(3\\) if it is applied to an RGB color image of size \\(5\\)x\\(5\\)x\\(3\\)? In that case, the output will be the same (\\(4\\) feature maps of size \\(3\\)x\\(3\\)) as if the image were in grayscale (\\(5\\)x\\(5\\)x\\(1\\)). Remember that the number of output feature maps is equal to the number of kernels regardless of the depth of the image. However, in this case, each kernel will have a depth of \\(3\\). Each depth is applied independently to the corresponding R, G, and B image channels. Thus, each kernel has \\(3\\)x\\(3\\)x\\(3\=27\\) parameters that need to be learned. After applying each kernel to each image channel (in this example, \\(3\\) channels), **the results of each channel are added** and this is why we end up with one feature map per kernel. The following course website has a nice interactive animation of how convolutions are applied to an image with \\(3\\) channels: [https://cs231n.github.io/convolutional\-networks/](https://cs231n.github.io/convolutional-networks/). In the next section (‘CNNs with Keras’), a couple of examples that demonstrate how to calculate the number of parameters and the outputs’ shape will be presented as well.
### 8\.6\.2 Pooling Operations
Pooling operations are typically applied after convolution layers. Their purpose is to reduce the size of the data and to emphasize important regions. These operations perform a fixed computation on the image and do not have learnable parameters. Similar to kernels, we need to define a window size. Then, this window is moved throughout the image and a computation is performed on the pixels covered by the window. The difference with kernels is that this window is just a guide and does not have parameters to be learned. The most common pooling operation is **max pooling**, which consists of selecting the highest value.
Figure [8\.29](deeplearning.html#fig:cnnMaxPooling) shows an example of a max pooling operation on a \\(4\\)x\\(4\\) image. The window size is \\(2\\)x\\(2\\) and the stride is \\(2\\). The latter means that the window moves \\(2\\) places at a time.
FIGURE 8\.29: Max pooling with a window of size 2x2 and stride \= 2\.
The result of this operation is an image of size \\(2\\)x\\(2\\) which is half of the original one. Aside from max pooling, average pooling can be applied instead. In that case, it computes the mean value across all values covered by the window.
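Max pooling can be sketched in a similar way. The following toy implementation uses a \\(2\\)x\\(2\\) window and a stride of \\(2\\) (the input values are made up for illustration):
```
# Didactic max pooling (square window, stride = window size by default).
maxpool2d <- function(img, size = 2, stride = 2){
  oh <- floor((nrow(img) - size) / stride) + 1
  ow <- floor((ncol(img) - size) / stride) + 1
  out <- matrix(0, oh, ow)
  for (i in 1:oh) {
    for (j in 1:ow) {
      r0 <- (i - 1) * stride + 1
      c0 <- (j - 1) * stride + 1
      out[i, j] <- max(img[r0:(r0 + size - 1), c0:(c0 + size - 1)])
    }
  }
  out
}

img <- rbind(c(1, 3, 2, 0),
             c(4, 2, 1, 1),
             c(0, 1, 5, 2),
             c(2, 0, 3, 4))
maxpool2d(img)  # The 4x4 image is reduced to 2x2.
#>      [,1] [,2]
#> [1,]    4    2
#> [2,]    2    5
```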
8\.7 CNNs with Keras
--------------------
`keras_cnns.R`
Keras provides several functions to define convolution layers and pooling operations. In TensorFlow, image dimensions are specified with the following order: height, width, and depth. In Keras, the `layer_conv_2d()` function is used to add a convolution layer to a sequential model. This function has several arguments but the \\(6\\) most common ones are: `filters`, `kernel_size`, `strides`, `padding`, `activation`, and `input_shape`.
```
# Convolution layer.
layer_conv_2d(filters = 4,            # Number of kernels.
              kernel_size = c(3,3),   # Kernel size.
              strides = c(1,1),       # Stride.
              padding = "same",       # Type of padding.
              activation = 'relu',    # Activation function.
              input_shape = c(5,5,1)) # Input image dimensions.
                                      # Only specified in first layer.
```
The `filters` parameter specifies the number of kernels. The `kernel_size` specifies the kernel size (height, width). The `strides` is an integer or list of \\(2\\) integers, specifying the strides of the convolution along the width and height (the default is \\(1\\)). The `padding` can take two possible strings: `"same"` or `"valid"`. If `padding="same"` the input image will be padded with zeros based on the kernel size and strides such that the convolved image has the same size as the original one. If `padding="valid"` it means no padding is applied. The default is `"valid"`. The `activation` parameter takes as input a string with the name of the activation function to use. The `input_shape` parameter is required when this layer is the first one and specifies the dimensions of the input image.
To add a max pooling operation you can use the `layer_max_pooling_2d()` function. Its most important parameter is `pool_size`.
```
layer_max_pooling_2d(pool_size = c(2, 2))
```
The `pool_size` specifies the window size (height, width). By default, the strides will be equal to `pool_size` but if desired, this can be changed with the `strides` parameter. This function also accepts a `padding` parameter similar to the one for `layer_conv_2d()`.
In Keras, if the stride is not specified, it defaults to the window size (`pool_size` parameter).
To illustrate these convolution and pooling operations, I will use two simple examples. The complete code for the two examples can be found in the script `keras_cnns.R`.
### 8\.7\.1 Example 1
Let’s create our first CNN in Keras. For now, this CNN will not be trained but only its architecture will be defined. The objective is to understand the building blocks of the network. In the next section, we will build and train a CNN that detects smiles from image faces.
Our network will consist of **\\(1\\) convolution layer**, **\\(1\\) max pooling layer**, **\\(1\\) fully connected hidden layer**, and **\\(1\\) output layer** as if this were a classification problem. The code to build such a network is shown below and the output of the `summary()` function in Figure [8\.30](deeplearning.html#fig:cnnEx1).
```
library(keras)
model <- keras_model_sequential()
model %>%
  layer_conv_2d(filters = 4,
                kernel_size = c(3,3),
                padding = "valid",
                activation = 'relu',
                input_shape = c(10,10,1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 32, activation = 'relu') %>%
  layer_dense(units = 2, activation = 'softmax')
summary(model)
```
FIGURE 8\.30: Output of summary().
The first convolution layer has \\(4\\) kernels of size \\(3\\)x\\(3\\) and a ReLU as the activation function. The padding is set to `"valid"` so no padding will be performed. The input image is of size \\(10\\)x\\(10\\)x\\(1\\) (height, width, depth). Then, we apply max pooling with a window size of \\(2\\)x\\(2\\). Later, the output is flattened and fed into a fully connected layer with \\(32\\) units. Finally, the output layer has \\(2\\) units with a softmax activation function for classification.
From the summary, the output of the first Conv2D layer is (None, 8, 8, 4\). The ‘None’ means that the number of input images is not fixed and depends on the batch size. The next two numbers correspond to the height and width, which are both \\(8\\). This is because the image was not padded and after applying the convolution operation on the original \\(10\\)x\\(10\\) height and width image, its dimensions are reduced to \\(8\\). The last number (\\(4\\)) is the number of feature maps, which is equal to the number of kernels (`filters=4`). The number of parameters is \\(40\\) (last column). This is because there are \\(4\\) kernels with \\(3\\)x\\(3\=9\\) parameters each, and there is one bias per kernel included by default: \\(4 \\times 3 \\times 3 \+ 4 \= 40\\).
The output of MaxPooling2D is (None, 4, 4, 4\). The height and width are \\(4\\) because the pool size was \\(2\\) and the stride was \\(2\\). This had the effect of reducing to half the height and width of the output of the previous layer. Max pooling preserves the number of feature maps, thus, the last number is \\(4\\) (the number of feature maps from the previous layer). Max pooling does not have any learnable parameters since it applies a fixed operation every time.
Before passing the downsampled feature maps to the next fully connected layer they need to be **flattened** into a \\(1\\)\-dimensional array. This is done with the `layer_flatten()` function. Its output has a shape of (None, 64\) which corresponds to the \\(4 \\times 4 \\times 4 \=64\\) features of the previous layer. The next fully connected layer has \\(32\\) units each with a connection with every one of the \\(64\\) input features. Each unit has a bias. Thus the number of parameters is \\(64 \\times 32 \+ 32 \= 2080\\).
Finally, the output layer has \\(32 \\times 2 \+ 2 \= 66\\) parameters, and the entire network has \\(2,186\\) parameters in total! Now, you can try to modify the kernel size, the strides, the padding, and the input shape, and see how the output dimensions and the number of parameters vary, as in the sketch below.
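If you want to experiment with the strides, the following sketch builds the same network as Example 1 but with a stride of \\(2\\) in the convolution layer (the model name `model_strides` is illustrative). With a \\(10\\)x\\(10\\) input, `"valid"` padding and a \\(3\\)x\\(3\\) kernel, a stride of \\(2\\) gives \\((10\-3)/2 \+ 1 \= 4\\) (rounding down), so the Conv2D output shape becomes (None, 4, 4, 4\).

```
# Sketch: same network as Example 1 but with strides of 2 in the
# convolution layer (model name is illustrative).
model_strides <- keras_model_sequential()
model_strides %>%
  layer_conv_2d(filters = 4,
                kernel_size = c(3,3),
                strides = c(2,2),
                padding = "valid",
                activation = 'relu',
                input_shape = c(10,10,1)) %>%
  layer_flatten() %>%
  layer_dense(units = 2, activation = 'softmax')
summary(model_strides)
```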
### 8\.7\.2 Example 2
Now let’s try another example, but this time the input image will have a depth of \\(3\\) simulating an RGB image.
```
model2 <- keras_model_sequential()
model2 %>%
layer_conv_2d(filters = 16,
kernel_size = c(3,3),
padding = "same",
activation = 'relu',
input_shape = c(28,28,3)) %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_flatten() %>%
layer_dense(units = 64, activation = 'relu') %>%
layer_dense(units = 5, activation = 'softmax')
summary(model2)
```
FIGURE 8\.31: Output of summary().
Figure [8\.31](deeplearning.html#fig:cnnEx2) shows that the output height and width of the first Conv2D layer is \\(28\\) which is the same as the input image size. This is because this time we set `padding = "same"` and the image dimensions were preserved. The \\(16\\) corresponds to the number of feature maps which was set with `filters = 16`.
The total parameter count for this layer is \\(448\\). Each kernel has \\(3 \\times 3 \= 9\\) parameters per channel. There are \\(16\\) kernels, and each kernel has a depth of \\(3\\) because the input image is RGB: \\(9 \\times 16\[kernels] \\times 3\[depth] \+ 16\[biases] \= 448\\). Notice that even though each kernel has a depth of \\(3\\), the output number of feature maps of this layer is \\(16\\) and not \\(16 \\times 3 \= 48\\). This is because, as mentioned before, each kernel produces a single feature map regardless of its depth, since the values are summed depth\-wise. The rest of the layers are similar to the previous example. The small helper below can be used to double\-check these parameter counts.
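As a small sanity check, the parameter count of a convolution layer can be computed from the kernel size, the input depth, and the number of filters. The following helper (the name `conv_params` is illustrative) reproduces the counts of both examples:

```
# Weights per kernel are kernel_h * kernel_w * in_depth,
# plus one bias per kernel.
conv_params <- function(kernel_h, kernel_w, in_depth, n_filters){
  kernel_h * kernel_w * in_depth * n_filters + n_filters
}
conv_params(3, 3, 3, 16) # 448 (this example).
conv_params(3, 3, 1, 4)  # 40 (Example 1).
```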
8\.8 Smiles Detection with a CNN
--------------------------------
`keras_smile_detection.R`
In this section, we will build a CNN that detects smiling and non\-smiling faces from pictures from the *SMILES* dataset. This information could be used, for example, to analyze smiling patterns during job interviews, exams, etc. For this task, we will use a cropped ([Sanderson and Lovell 2009](#ref-sanderson2009multi)) version of the Labeled Faces in the Wild (LFW) database ([Gary B. Huang et al. 2008](#ref-huang2008labeled)). A subset of the database was labeled by O. A. Arigbabu et al. ([2016](#ref-arigbabu2016smile)), O. Arigbabu ([2017](#ref-olasimbo)). The labels are provided as two text files, each containing the list of files that correspond to smiling and non\-smiling faces. The dataset can be downloaded from: <http://conradsanderson.id.au/lfwcrop/> and the labels list from: <https://data.mendeley.com/datasets/yz4v8tb3tp/5>. See Appendix [B](appendixDatasets.html#appendixDatasets) for instructions on how to set up the dataset.
The smiling set has \\(600\\) pictures and the non\-smiling has \\(603\\) pictures. Figure [8\.32](deeplearning.html#fig:cnnSmileNotSmile) shows an example of one image from each of the sets.
FIGURE 8\.32: Example of a smiling and a non\-smiling face. (Adapted from the LFWcrop Face Dataset: C. Sanderson, B.C. Lovell. “Multi\-Region Probabilistic Histograms for Robust and Scalable Identity Inference.” *Lecture Notes in Computer Science (LNCS)*, Vol. 5558, pp. 199\-208, 2009\. doi: [https://doi.org/10\.1007/978\-3\-642\-01793\-3\_21](https://doi.org/10.1007/978-3-642-01793-3_21)).
The script `keras_smile_detection.R` has the full code of the analysis. First, we load the list of smiling pictures.
```
datapath <- file.path(datasets_path,"smiles")
smile.list <- read.table(file.path(datapath, "SMILE_list.txt"))
head(smile.list)
#> V1
#> 1 James_Jones_0001.jpg
#> 2 James_Kelly_0009.jpg
#> 3 James_McPherson_0001.jpg
#> 4 James_Watt_0001.jpg
#> 5 Jamie_Carey_0001.jpg
#> 6 Jamie_King_0001.jpg
# Substitute jpg with ppm.
smile.list <- gsub("jpg", "ppm", smile.list$V1)
```
The file SMILE\_list.txt lists the names of the pictures with a *jpg* extension, but they are actually stored as *ppm* files. Thus, the *jpg* extension is replaced by *ppm* with the `gsub()` function. Since the images are in *ppm* format, we can use the `pixmap` library ([Bivand, Leisch, and Maechler 2011](#ref-pixmap)) to read and plot them. The `print()` function can be used to display the image properties. Here, we see that these are RGB images of \\(64\\)x\\(64\\) pixels.
```
library(pixmap)
# Read a smiling face.
img <- read.pnm(file.path(datapath, "faces", smile.list[10]), cellres = 1)
# Plot the image.
plot(img)
# Print its properties.
print(img)
#> Pixmap image
#> Type : pixmapRGB
#> Size : 64x64
#> Resolution : 1x1
#> Bounding box : 0 0 64 64
```
Then, we load the images into two arrays `smiling.images` and `nonsmiling.images` (code omitted here). If we print the array dimensions we see that there are \\(600\\) smiling images of size \\(64 \\times 64 \\times 3\\).
```
# Print dimensions.
dim(smiling.images)
#> [1] 600 64 64 3
```
If we print the minimum and maximum values we see that they are \\(0\\) and \\(1\\) so there is no need for normalization.
```
max(smiling.images)
#> [1] 1
min(smiling.images)
#> [1] 0
```
The next step is to randomly split the dataset into train and test sets. We will use \\(85\\%\\) for the train set and \\(15\\%\\) for the test set. We set the `validation_split` parameter of the `fit()` function to choose a small percent (\\(10\\%\\)) of the train set as the validation set during training. A minimal sketch of this split is shown below.
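The following is a minimal sketch of the \\(85\\%\\)/\\(15\\%\\) split and the one\-hot encoding (the object names `dataX` and `labels` and the use of the `abind` package are assumptions for illustration; the actual code is in `keras_smile_detection.R`).

```
library(keras)
library(abind)
set.seed(1234)
# Stack smiling and non-smiling images into a single array.
dataX <- abind(smiling.images, nonsmiling.images, along = 1)
labels <- c(rep(1, dim(smiling.images)[1]),    # 1 = smiling.
            rep(0, dim(nonsmiling.images)[1])) # 0 = non-smiling.
n <- dim(dataX)[1]
# Sample 85% of the indices for the train set.
train.idx <- sample(n, size = floor(n * 0.85))
trainX <- dataX[train.idx,,,]
testX  <- dataX[-train.idx,,,]
# One-hot encode the labels.
trainY <- to_categorical(labels[train.idx], num_classes = 2)
testY  <- to_categorical(labels[-train.idx], num_classes = 2)
```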
After creating the train and test sets, the train set images and labels are stored in `trainX` and `trainY`, respectively, and the test set data is stored in `testX` and `testY`. The labels in `trainY` and `testY` were one\-hot encoded. Now that the data is in place, let’s build the CNN.
```
model <- keras_model_sequential()
model %>%
layer_conv_2d(filters = 8,
kernel_size = c(3,3),
activation = 'relu',
input_shape = c(64,64,3)) %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_dropout(0.25) %>%
layer_conv_2d(filters = 16,
kernel_size = c(3,3),
activation = 'relu') %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_dropout(0.25) %>%
layer_flatten() %>%
layer_dense(units = 32, activation = 'relu') %>%
layer_dropout(0.5) %>%
layer_dense(units = 2, activation = 'softmax')
```
Our CNN consists of two convolution layers each followed by a max pooling operation and dropout. The feature maps are then flattened and passed to a fully connected layer with \\(32\\) units followed by a dropout. Since this is a binary classification problem (*‘smile’* vs. *‘non\-smile’*) the output layer has \\(2\\) units with a softmax activation function. Now the model can be compiled and the `fit()` function used to begin the training!
```
# Compile model.
model %>% compile(
loss = 'categorical_crossentropy',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c("accuracy")
)
# Fit model.
history <- model %>% fit(
trainX, trainY,
epochs = 50,
batch_size = 8,
validation_split = 0.10,
verbose = 1,
view_metrics = TRUE
)
```
We are using a stochastic gradient descent optimizer with a learning rate of \\(0\.01\\) and cross\-entropy as the loss function. We can use \\(10\\%\\) of the train set as the validation set by setting `validation_split = 0.10`. Once the training is done, we can plot the *loss* and *accuracy* of each epoch.
```
plot(history)
```
FIGURE 8\.33: Train/test loss and accuracy.
After epoch \\(25\\) (see Figure [8\.33](deeplearning.html#fig:cnnSmilesLoss)) it looks like the training loss is decreasing faster than the validation loss. After epoch \\(40\\) it seems that the model starts to overfit (the validation loss is increasing a bit). If we look at the validation accuracy, it seems that it starts to flatten after epoch \\(30\\). Now we evaluate the model on the test set:
```
# Evaluate model on test set.
model %>% evaluate(testX, testY)
#> $loss
#> [1] 0.1862139
#> $acc
#> [1] 0.9222222
```
An accuracy of \\(92\\%\\) is pretty decent if we take into account that we didn’t have to do any image preprocessing or feature extraction! We can print the predictions of the first \\(16\\) test images (see Figure [8\.34](deeplearning.html#fig:cnnSmileResults)).
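As a sketch, those class predictions can be obtained along the following lines (which column corresponds to which class depends on how the labels were one\-hot encoded):

```
# Predicted class probabilities for the first 16 test images.
probs <- model %>% predict(testX[1:16,,,])
# Take the most probable class for each image.
predicted.class <- apply(probs, 1, which.max) - 1
head(predicted.class)
```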
FIGURE 8\.34: Predictions of the first \\(16\\) test set images. Correct predictions are in green and incorrect ones in red. (Adapted from the LFWcrop Face Dataset: C. Sanderson, B.C. Lovell. “Multi\-Region Probabilistic Histograms for Robust and Scalable Identity Inference.” *Lecture Notes in Computer Science (LNCS)*, Vol. 5558, pp. 199\-208, 2009\. doi: [https://doi.org/10\.1007/978\-3\-642\-01793\-3\_21](https://doi.org/10.1007/978-3-642-01793-3_21)).
From those \\(16\\), all but one were correctly classified. The correct ones are shown in green and the incorrect one in red. Some faces seem to be smiling (last row, third image) even though the mouth is closed. It seems that this CNN classifies images as *‘smiling’* only when the mouth is open, which may be the way the train labels were defined.
8\.9 Summary
------------
**Deep learning (DL)** consists of a set of different architectures and algorithms. As of now, it mainly focuses on artificial neural networks (ANNs). This chapter introduced two main types of DL models (ANNs and CNNs) and their application to behavior analysis.
* Artificial neural networks (ANNs) are mathematical models inspired by the brain. But that does not mean they work the same as the brain.
* The **perceptron** is one of the simplest ANNs.
* ANNs consist of an input layer, hidden layer(s) and an output layer.
* Deep networks have many hidden layers.
* **Gradient descent** can be used to learn the parameters of a network.
* Overfitting is a recurring problem in ANNs. Some methods like **dropout** and **early stopping** can be used to reduce the effect of overfitting.
* A Convolutional Neural Network (CNN) is a type of ANN that can process \\(N\\)\-dimensional arrays very efficiently. They are used mainly for computer vision tasks.
* CNNs consist of **convolution** and **pooling** layers.
8\.1 Introduction to Artificial Neural Networks
-----------------------------------------------
Artificial neural networks (ANNs) are mathematical models *inspired* by the brain. Here, I would like to emphasize the word *inspired* because ANNs do not model how a biological brain actually works. In fact, there is little knowledge about how a biological brain works. ANNs are composed of **units** (also called **neurons** or **nodes**) and connections between units. Each unit can receive inputs from other units. Those inputs are processed inside the unit and produce an output. Typically, units are arranged into layers (as we will see later) and connections between units have an associated weight. Those weights are learned during training and they are the core elements that make a network behave in a certain way.
For the rest of the chapter I will mostly use the term **units** to refer to neurons/nodes. I will also use the term **network** to refer to artificial neural networks.
Before going into details of how multi\-layer ANNs work, let’s start with a very simple neural network consisting of a **single unit**. See Figure [8\.1](deeplearning.html#fig:nnPerceptron). Even though this network only has one node, it is already composed of several interesting elements which are the basis of more complex networks. First, it has \\(n\\) input variables \\(x\_1 \\ldots x\_n\\) which are real numbers. Second, the unit has a set of \\(n\\) weights \\(w\_1 \\ldots w\_n\\) associated with each input. These weights can take real numbers as values. Finally, there is an output \\(y'\\) which is binary (it can take two values: \\(1\\) or \\(0\\)).
FIGURE 8\.1: A neural network composed of a single unit (perceptron).
This simple network consisting of one unit with a binary output is called a **perceptron** and was proposed by Rosenblatt ([1958](#ref-rosenblatt1958)).
This single unit, also known as a *perceptron*, is capable of making binary decisions based on the input and the weights. To get the final decision \\(y'\\) the inputs are multiplied by their corresponding weights and the results are summed. If the sum is greater than a given threshold, then the output is \\(1\\) and \\(0\\) otherwise. Formally:
\\\[\\begin{equation}
y' \=
\\begin{cases}
1 \& \\textit{if } \\sum\_{i}{w\_i x\_i \> t}, \\\\
0 \& \\textit{if } \\sum\_{i}{w\_i x\_i \\leq t}
\\end{cases}
\\tag{8\.1}
\\end{equation}\\]
where \\(t\\) is a threshold. We can use a perceptron to make important decisions in life. For example, suppose you need to decide whether or not to go to the movies. Assume this decision is based on two pieces of information:
1. You have money to pay the entrance (or not) and,
2. it is a horror movie (or not).
There are two additional assumptions as well:
1. The movie theater only projects \\(1\\) film.
2. You don’t like horror movies.
This decision\-making process can be modeled with the perceptron of Figure [8\.2](deeplearning.html#fig:nnMovies). This perceptron has two binary input variables: *money* and *horror*. Each variable has an associated weight. Suppose there is a decision threshold of \\(t\=3\\). Finally, there is a binary output: \\(1\\) means you should go to the movies and \\(0\\) indicates that you should not go.
FIGURE 8\.2: Perceptron to decide whether or not to go to the movies based on two input variables.
In this example, the weights (\\(5\\) and \\(\-3\\)) and the threshold \\(t\=3\\) were already provided. The weights and the threshold are called the *parameters* of the network. Later, we will see how the parameters can be learned automatically from data.
Suppose that today was payday and the theater is projecting an action movie. Then, we can set the input variables \\(money\=1\\) and \\(horror\=0\\). Now we want to decide if we should go to the movie theater or not. To get the final answer we can use Equation [(8\.1\)](deeplearning.html#eq:perceptron). This formula tells us that we need to multiply each input variable with their corresponding weights and add them:
\\\[\\begin{align\*}
(money)(5\) \+ (horror)(\-3\)
\\end{align\*}\\]
Substituting *money* and *horror* with their corresponding values:
\\\[\\begin{align\*}
(1\)(5\) \+ (0\)(\-3\) \= 5
\\end{align\*}\\]
Since \\(5 \> t\\) (remember the threshold \\(t\=3\\)), the final output will be \\(1\\), thus, the advice is to go to the movies. Let’s try the scenario when you have money but they are projecting a horror movie: \\(money\=1\\), \\(horror\=1\\).
\\\[\\begin{align\*}
(1\)(5\) \+ (1\)(\-3\) \= 2
\\end{align\*}\\]
In this case, \\(2 \< t\\) and the final output is \\(0\\). Even if you have money, you should not waste it on a movie that you know you most likely will not like. This process of applying operations to the inputs and obtaining the final result is called **forward propagation** because the inputs are ‘pushed’ all the way through the network (a single perceptron in this case). For bigger networks, the outputs of the current layer become the inputs of the next layer, and so on.
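As a quick sketch, this perceptron can be implemented in a few lines of R (the weights and threshold are taken from Figure [8\.2](deeplearning.html#fig:nnMovies); the function name is illustrative):

```
# A perceptron implementing the movies example.
perceptron <- function(x, w, t){
  if(sum(w * x) > t) 1 else 0
}
w <- c(5, -3) # Weights for money and horror.
t <- 3        # Decision threshold.
perceptron(c(1, 0), w, t) # money=1, horror=0 -> 1 (go).
perceptron(c(1, 1), w, t) # money=1, horror=1 -> 0 (stay home).
```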
For convenience, a simplified version of Equation [(8\.1\)](deeplearning.html#eq:perceptron) can be used. This alternative representation is useful because it provides flexibility to change the internals of the units (neurons) as we will see. The first simplification consists of representing the inputs and weights as vectors:
\\\[\\begin{equation}
\\sum\_{i}{w\_i x\_i} \= \\boldsymbol{w} \\cdot \\boldsymbol{x}
\\end{equation}\\]
The summation becomes a dot product between \\(\\boldsymbol{w}\\) and \\(\\boldsymbol{x}\\). Next, the threshold \\(t\\) can be moved to the left side of the inequality and renamed to \\(b\\) (so \\(b \= \-t\\)), which stands for **bias**. This is only a change of notation but you can still think of the *bias* as a threshold.
\\\[\\begin{equation}
y' \= f(\\boldsymbol{x}) \=
\\begin{cases}
1 \& \\textit{if } \\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b \> 0, \\\\
0 \& \\textit{otherwise}
\\end{cases}
\\end{equation}\\]
The output \\(y'\\) is a function of \\(\\boldsymbol{x}\\) with \\(\\boldsymbol{w}\\) and \\(b\\) as fixed parameters. One thing to note is that first, we are performing the operation \\(\\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b\\) and then, another operation is applied to the result. In this case, it is a comparison. If the result is greater than \\(0\\) the final output is \\(1\\). You can think of this second operation as another function. Call it \\(g(x)\\).
\\\[\\begin{equation}
f(\\boldsymbol{x}) \= g(\\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b)
\\tag{8\.2}
\\end{equation}\\]
In neural networks terminology, this \\(g(x)\\) is known as the **activation function**. Its result indicates how active this unit is based on its inputs. If the result is \\(1\\), it means that this unit is active. If the result is \\(0\\), it means the unit is inactive.
This new notation allows us to use different activation functions by substituting \\(g(x)\\) with some other function in Equation [(8\.2\)](deeplearning.html#eq:nnUnit). In the case of the perceptron, the activation function \\(g(x)\\) is the threshold function, which is known as the *step function*:
\\\[\\begin{equation}
g(x) \= step(x) \=
\\begin{cases}
1 \& \\textit{if } x \> 0 \\\\
0 \& \\textit{if } x \\leq 0
\\end{cases}
\\tag{8\.3}
\\end{equation}\\]
Figure [8\.3](deeplearning.html#fig:nnStep) shows the plot of the step function.
FIGURE 8\.3: The step function.
It is worth noting that perceptrons have two major limitations:
1. The output is binary.
2. Perceptrons are linear functions.
The first limitation imposes some restrictions on its applicability. For example, a perceptron cannot be used to predict real\-valued outputs which is a fundamental aspect for regression problems. The second limitation makes the perceptron only capable of solving linear problems. Figure [8\.4](deeplearning.html#fig:nnLinearity) graphically shows this limitation. In the first case, the outputs of the OR logical operator can be classified (separated) using a line. On the other hand, it is not possible to classify the output of the XOR function using a single line.
FIGURE 8\.4: The OR and the XOR logical operators.
To overcome those limitations, several modifications to the perceptron were introduced. These modifications allow us to build models capable of solving more complex non\-linear problems. One such modification is to change the activation function. Another improvement is to add the ability to have several layers of interconnected units. In the next section, two new types of units will be presented. Then, the following section will introduce neural networks, also known as multilayer perceptrons, which are more complex models built by connecting many units and arranging them into layers.
### 8\.1\.1 Sigmoid and ReLU Units
As previously mentioned, perceptrons have some limitations that restrict their applicability including the fact that they are linear models. In practice, problems are complex and most of them are non\-linear. One way to overcome this limitation is to introduce non\-linearities and this can be done by using a different type of activation function. Remember that a unit can be modeled as \\(f(x) \= g(wx\+b)\\) where \\(g(x)\\) is some activation function. For the perceptron, \\(g(x)\\) is the *step function*. However, another practical limitation not mentioned before is that the step function can change abruptly from \\(0\\) to \\(1\\) and vice versa. Small changes in \\(x\\), \\(w\\), or \\(b\\) can completely change the output. This is a problem during learning and inference time. Instead, we would prefer a smooth version of the step function, for example, the **sigmoid function** which is also known as the **logistic function**:
\\\[\\begin{equation}
s(x) \= \\frac{1}{1 \+ e^{\-x}}
\\tag{8\.4}
\\end{equation}\\]
This function has an ‘S’ shape (Figure [8\.5](deeplearning.html#fig:nnSigmoid)) and as opposed to a step function, this one is smooth. The range of this function is from \\(0\\) to \\(1\\).
FIGURE 8\.5: Sigmoid function.
If we substitute the activation function in Equation [(8\.2\)](deeplearning.html#eq:nnUnit) with the sigmoid function we get our **sigmoid unit**:
\\\[\\begin{equation}
f(x) \= \\frac{1}{1 \+ e^{\-(w \\cdot x \+ b)}}
\\tag{8\.5}
\\end{equation}\\]
Sigmoid units have been one of the most commonly used types of units when building bigger neural networks. Another advantage is that the outputs are real values that can be interpreted as probabilities. For instance, if we want to make binary decisions we can set a threshold. For example, if the output of the sigmoid unit is \\(\> 0\.5\\) then return a \\(1\\). Of course, that threshold would depend on the application. If we need more confidence about the result we can set a higher threshold.
In recent years, another type of unit has been successfully applied to train neural networks, the **rectified linear unit** or **ReLU** for short (Figure [8\.6](deeplearning.html#fig:nnRectified)).
FIGURE 8\.6: Rectifier function.
The activation function of this unit is the rectifier function:
\\\[\\begin{equation}
rectifier(x) \=
\\begin{cases}
0 \& \\textit{if } x \< 0, \\\\
x \& \\textit{if } x \\geq 0
\\end{cases}
\\tag{8\.6}
\\end{equation}\\]
This one is also called the *ramp function* and is one of the simplest non\-linear functions and probably the most common one used in modern big neural networks. These units present several advantages, among them efficiency during training and inference.
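As a small sketch, both activation functions are one\-liners in R:

```
# Sigmoid (Equation 8.4) and rectifier (Equation 8.6) functions.
sigmoid <- function(x) 1 / (1 + exp(-x))
relu <- function(x) pmax(0, x)
sigmoid(0)          # 0.5
relu(c(-1.5, 0, 2)) # 0.0 0.0 2.0
```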
In practice, many other activation functions are used but the most common ones are sigmoid and ReLU units. In the following link, you can find an extensive list of activation functions: <https://en.wikipedia.org/wiki/Activation_function>
So far, we have been talking about **single units**. In the next section, we will see how these single units can be assembled to build bigger artificial neural networks.
### 8\.1\.2 Assembling Units into Layers
Perceptrons, sigmoid, and ReLU units can be thought of as very simple neural networks. By connecting several units, one can build more complex neural networks. For historical reasons, neural networks are also called **multilayer perceptrons** regardless of whether the units are perceptrons or not. Typically, units are grouped into layers. Figure [8\.7](deeplearning.html#fig:nnExampleNN) shows an example neural network with \\(3\\) layers. An **input layer** with \\(3\\) nodes, a **hidden layer** with \\(2\\) nodes, and an **output layer** with \\(1\\) node.
FIGURE 8\.7: Example neural network.
In this type of diagram, the nodes represent units (perceptrons, sigmoids, ReLUs, etc.) except for the input layer. In the input layer, nodes represent input variables (input features). In the above example, the \\(3\\) nodes in the input layer simply indicate that the network takes as input \\(3\\) variables. In this layer, no operations are performed but the input values are passed to the next layer after multiplying them by their corresponding edge weights.
This network only has one hidden layer. Hidden layers are called that because they do not have direct contact with the external world. Finally, there is an output layer with a single unit. We could also have an output layer with more than one unit. Most of the time, we will have **fully connected** neural networks. That is, all units have incoming connections from all nodes in the previous layer (as in the previous example).
For each specific problem, we need to define several building blocks for the network. For example, the number of layers, the number of units in each layer, the type of units (sigmoid, ReLU, etc.), and so on. This is known as the **architecture** of the network. Choosing a good architecture for a given problem is not a trivial task. It is advised to start with an architecture that was used to solve a similar problem and then fine\-tune it for your specific problem. There exist some automatic ways to optimize the network architecture but those methods are out of the scope of this book.
We already saw how a unit can produce a result based on the inputs by using *forward propagation*. For more complex networks the process is the same! Consider the network shown in Figure [8\.8](deeplearning.html#fig:nnForward). It consists of two inputs and one output. It also has one hidden layer with \\(2\\) units.
FIGURE 8\.8: Example of forward propagation.
Each node is labeled as \\(n\_{l,n}\\) where \\(l\\) is the layer and \\(n\\) is the unit number.
The two input values are \\(1\\) and \\(0\.5\\). They could be temperature measurements, for example. Each edge has an associated weight. For simplicity, let’s assume that the activation function of the units is the identity function \\(g(x)\=x\\). The bold underlined numbers inside the nodes of the hidden and output layers are the biases. Here we assume that the network is already trained (later we will see how those weights and biases are learned). To get the final result, for each node, its inputs are multiplied by their corresponding weights and added. Then, the bias is added. Next, the activation function is applied. In this case, it is just the identity function (it returns the same value). The outputs of the nodes in the hidden layer become the inputs of the next layer and so on.
In this example, first we need to compute the outputs of nodes \\(n\_{2,1}\\) and \\(n\_{2,2}\\):
output of \\(n\_{2,1} \= (1\)(2\) \+ (0\.5\)(1\) \+ 1 \= 3\.5\\)
output of \\(n\_{2,2} \= (1\)(\-3\) \+ (0\.5\)(5\) \+ 0 \= \-0\.5\\)
Finally, we can compute the output of the last node using the outputs of the previous nodes:
output of \\(n\_{3,1} \= (3\.5\)(1\) \+ (\-0\.5\)(\-1\) \+ 3 \= 7\\).
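As a sketch, this forward pass can be reproduced in R (weights and biases taken from Figure [8\.8](deeplearning.html#fig:nnForward)):

```
# Forward pass with identity activations.
x <- c(1, 0.5)                          # Input values.
n21 <- sum(c(2, 1)  * x) + 1            # Hidden unit 1: 3.5
n22 <- sum(c(-3, 5) * x) + 0            # Hidden unit 2: -0.5
n31 <- sum(c(1, -1) * c(n21, n22)) + 3  # Output unit: 7
```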
### 8\.1\.3 Deep Neural Networks
By increasing the number of layers and the number of units in each layer, one can build more complex networks. But what is a deep neural network (DNN)? There is not a strict rule but some people say that a network with more than \\(2\\) hidden layers is a deep network. Yes, that’s all it takes to build a DNN! Figure [8\.9](deeplearning.html#fig:nnDNN) shows an example of a deep neural network.
FIGURE 8\.9: Example of a deep neural network.
A DNN has nothing special compared to a traditional neural network except that it has many layers. One of the reasons they did not become popular until recent years is that it was not possible to train them efficiently before. With the advent of specialized hardware like graphics processing units (GPUs), it is now possible to efficiently train big DNNs. The introduction of ReLU units was also a key factor that allowed the training of even bigger networks. The availability of big quantities of data was another key factor that allowed the development of deep learning technologies. Note that deep learning is not limited to DNNs but it also encompasses other types of architectures like convolutional networks and recurrent neural networks, to name a few. Convolutional layers will be covered later in this chapter.
### 8\.1\.4 Learning the Parameters
We have seen how *forward propagation* can be used at inference time to compute the output of the network based on the input values. In the previous examples, we assumed that the network’s parameters (weights and biases) were already learned. In practice, you most likely will use libraries and frameworks to build and train neural networks. Later in this chapter, I will show you how to use TensorFlow and Keras within R. But, before that, I will explain how the networks’ parameters are learned and how to code and train a very simple network from scratch.
Back to the problem, the objective is to find the parameters’ values based on training data such that the predicted result for any input data point is as close as possible to the true value. In other words, we want to find the parameters’ values that reduce the network’s prediction error.
One way to estimate the network’s error is by computing the squared difference between the prediction \\(y'\\) and the real value \\(y\\), that is, \\(error \= (y' \- y)^2\\). This is how the error can be computed for a single training data point. The error function is typically called the **loss function** and denoted by \\(L(\\theta)\\) where \\(\\theta\\) represents the parameters of the network (weights and biases). In this example the loss function is \\(L(\\theta)\=(y'\- y)^2\\).
If there is more than one training data point (which is often the case), the loss function is just the average of the individual squared differences which is known as the **mean squared error (MSE)**:
\\\[\\begin{equation}
L(\\theta) \= \\frac{1}{N} \\sum\_{n\=1}^N{(y'\_n \- y\_n)^2}
\\tag{8\.7}
\\end{equation}\\]
The mean squared error (MSE) loss function is commonly used for regression problems. For classification problems, the average cross\-entropy loss function is usually preferred (covered later in this chapter).
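As a one\-line sketch in R, the MSE of Equation [(8\.7\)](deeplearning.html#eq:lossMSE) is:

```
# Mean squared error between predictions and true values.
mse <- function(y_pred, y_true) mean((y_pred - y_true)^2)
mse(c(1.2, 2.1), c(1.0, 2.0)) # 0.025
```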
The problem of finding the best parameters can be formulated as an optimization problem, that is, find the optimal parameters such that the loss function is minimized. This is the learning/training phase of a neural network. Formally, this can be stated as:
\\\[\\begin{equation}
\\operatorname\*{arg min}\_{\\theta} L(\\theta)
\\tag{8\.8}
\\end{equation}\\]
This notation means: find and return the weights and biases that make the loss function be as small as possible.
The most common method to train neural networks is called **gradient descent**. The algorithm updates the parameters in an iterative fashion based on the loss. This algorithm is suitable for complex functions with millions of parameters.
Suppose there is a network with only \\(1\\) weight and no bias with MSE as loss function (Equation [(8\.7\)](deeplearning.html#eq:lossMSE)). Figure [8\.10](deeplearning.html#fig:nnGD) shows a plot of the loss function. This is a quadratic function that only depends on the value of \\(w\\). The task is to find the \\(w\\) where the function is at its minimum.
FIGURE 8\.10: Gradient descent in action.
Gradient descent starts by assigning \\(w\\) a random value. Then, at each step and based on the error, \\(w\\) is updated in the direction that minimizes the loss function. In the previous figure, the **global minimum** is found after \\(5\\) iterations. In practice, loss functions are more complex and have many **local minima** (Figure [8\.11](deeplearning.html#fig:nnLM)). For complex functions, it is difficult to find a global minimum but gradient descent can find a local minimum that is good enough to solve the problem at hand.
FIGURE 8\.11: Function with 1 global minimum and several local minima.
But in what direction and how much is \\(w\\) moved in each iteration? The direction and magnitude are estimated by computing the derivative of the loss function with respect to the weight \\(\\frac{\\partial L}{\\partial w}\\). The derivative is also called the gradient and denoted by \\(\\nabla L\\). The iterative gradient descent procedure is listed below:
**loop** until convergence or max iterations (*epochs*)
**for each** \\(w\_i\\) in \\(W\\) **do:**
\\(w\_i \= w\_i \- \\alpha \\frac{\\partial L(W)}{\\partial w\_i}\\)
The outer loop is run until the algorithm converges or until a predefined number of iterations is reached. Each iteration is also called an **epoch**. Each weight is updated with the rule: \\(w\_i \= w\_i \- \\alpha \\frac{\\partial L(W)}{\\partial w\_i}\\). The derivative part will give us the direction and magnitude. The \\(\\alpha\\) is called the **learning rate** and it controls how ‘fast’ we move. The learning rate is a constant defined by the user, thus, it is a **hyperparameter**. A high learning rate can cause the algorithm to overshoot the local minima and the loss can start to increase. A small learning rate will cause the algorithm to take more time to converge. Figure [8\.12](deeplearning.html#fig:nnLR) illustrates both scenarios.
FIGURE 8\.12: Comparison between high and low learning rates. a) Big learning rate. b) Small learning rate.
Selecting an appropriate learning rate will depend on the application but common values are between \\(0\.0001\\) and \\(0\.05\\).
Let’s see how gradient descent works with a step by step example. Consider a very simple neural network consisting of an input layer with only one input feature and an output layer with one unit and no bias. To make it even simpler, the activation function of the output unit is the identity function \\(f(x)\=x\\). Assume that as training data we have a single data point. Figure [8\.13](deeplearning.html#fig:nnStepExample) shows the simple network and the training data. The training data point only has one input variable (\\(x\\)) and an output (\\(y\\)). We want to train this network such that it can make predictions on new data points. The training point has an input feature of \\(x\=3\\) and the expected output is \\(y\=1\.5\\). For this particular training point, it seems that the output is equal to the input divided by \\(2\\). Thus, based on this single training data point the network should learn how to divide any other input by \\(2\\).
FIGURE 8\.13: a) A simple neural network consisting of one unit. b) The training data with only one row.
Before we start the training we need to define \\(3\\) things:
1. The loss function. This is a regression problem so we can use the MSE. Since there is a single data point our loss function becomes \\(L(w)\=(y' \- y)^2\\). Here, \\(y\\) is the ground truth output value and \\(y'\\) is the predicted value. We know how to make predictions using forward propagation. In this case, it is the product between the input value and the single weight, and the activation function has no effect (it returns the same value as its input). We can rewrite the loss function as \\(L(w)\=(xw \- y)^2\\).
2. We need to define a learning rate. For now, we can set it to \\(\\alpha \= 0\.05\\).
3. The weights need to be initialized at random. Let’s assume the single weight is ‘randomly’ initialized with \\(w\=2\\).
Now we can use gradient descent to iteratively update the weight. Remember that the updating rule is:
\\\[\\begin{equation}
w \= w \- \\alpha \\frac{\\partial L(w)}{\\partial w}
\\end{equation}\\]
The partial derivative of the loss function with respect to \\(w\\) is:
\\\[\\begin{equation}
\\frac{\\partial L(w)}{\\partial w} \= 2x(xw \- y)
\\end{equation}\\]
If we substitute the derivative in the updating rule we get:
\\\[\\begin{equation}
w \= w \- \\alpha 2x(xw \- y)
\\end{equation}\\]
We already know that \\(\\alpha\=0\.05\\), the input value is \\(x\=3\\), the output is \\(y\=1\.5\\) and the initial weight is \\(w\=2\\). So we can start updating \\(w\\). Figure [8\.14](deeplearning.html#fig:nnTrainProgress) shows the initial state (iteration 0\) and \\(3\\) additional iterations. In the initial state, \\(w\=2\\) and with that weight the loss is \\(20\.25\\). In iteration \\(1\\), the weight is updated and now its value is \\(0\.65\\). With this new weight, the loss is \\(0\.2025\\). That was a substantial reduction in the error! After three iterations we see that the final weight is \\(w\=0\.501\\) and the loss is very close to zero.
FIGURE 8\.14: First 3 gradient descent iterations (epochs).
Now, we can start doing predictions with our very simple neural network! To do so, we use forward propagation on the new input data using the learned weight \\(w\=0\.501\\). Figure [8\.15](deeplearning.html#fig:nnExamplePredictions) shows the predictions on new data points that were never seen by the network during training.
FIGURE 8\.15: Example predictions on new data points.
Even though the predictions are not perfect, they are very close to the expected value (division by \\(2\\)) considering that the network is very simple and was only trained with a single data point and for only \\(3\\) epochs!
If the training set has more than one data point, then we need to compute the derivative of each point and accumulate them (the derivative of a sum is equal to the sum of the derivatives). In the previous example, the update rule becomes:
\\\[\\begin{equation}
w \= w \- \\alpha \\sum\_{i\=1}^N{2x\_i(x\_i w \- y\_i)}
\\end{equation}\\]
This means that before updating a weight, first, we need to compute the derivative for each point and add them. This needs to be done for every parameter in the network. Thus, one **epoch** is a pass through all training points and all parameters.
### 8\.1\.5 Parameter Learning Example in R
`gradient_descent.R`
In the previous section, we went step by step to train a neural network with a single unit and with a single training data point. Here, we will see how we can implement that simple network in R but when we have more training data. The code can be found in the script `gradient_descent.R`.
This code implements the same network as the previous example. That is, one neuron, one input, no bias, and activation function \\(f(x) \= x\\). We start by creating a sample training set with \\(3\\) points. Again, the output is the input divided by \\(2\\).
```
train_set <- data.frame(x = c(3.0,4.0,1.0), y = c(1.5, 2.0, 0.5))
# Print the train set.
print(train_set)
#> x y
#> 1 3 1.5
#> 2 4 2.0
#> 3 1 0.5
```
Then we need to implement three functions: forward propagation, the loss function, and the derivative of the loss function.
```
# Forward propagation w*x
fp <- function(w, x){
return(w * x)
}
# Loss function (y - y')^2
loss <- function(w, x, y){
predicted <- fp(w, x) # This is y'
return((y - predicted)^2)
}
# Derivative of the loss function. 2x(xw - y)
derivative <- function(w, x, y){
return(2.0 * x * ((x * w) - y))
}
```
Now we are all set to implement the `gradient.descent()` function. The first parameter is the train set, the second parameter is the learning rate \\(\\alpha\\), and the last parameter is the number of epochs. The initial weight is initialized to some ‘random’ number (selected manually here for the sake of the example). The function returns the final learned weight.
```
# Gradient descent.
gradient.descent <- function(train_set, lr = 0.01, epochs = 5){
w = -2.5 # Initialize weight at 'random'
for(i in 1:epochs){
derivative.sum <- 0.0
loss.sum <- 0.0
# Iterate each data point in train_set.
for(j in 1:nrow(train_set)){
point <- train_set[j, ]
derivative.sum <- derivative.sum + derivative(w, point$x, point$y)
loss.sum <- loss.sum + loss(w, point$x, point$y)
}
# Update weight.
w <- w - lr * derivative.sum
# mean squared error (MSE)
mse <- loss.sum / nrow(train_set)
print(paste0("epoch: ", i, " loss: ",
formatC(mse, digits = 8, format = "f"),
" w = ", formatC(w, digits = 5, format = "f")))
}
return(w)
}
```
Now, let’s train the network with a learning rate of \\(0\.01\\) and for \\(10\\) epochs. This function will print, for each epoch, the loss and the current weight.
```
#### Train the 1 unit network with gradient descent ####
lr <- 0.01 # set learning rate.
set.seed(123)
# Run gradient descent to find the optimal weight.
learned_w = gradient.descent(train_set, lr, epochs = 10)
#> [1] "epoch: 1 loss: 78.00000000 w = -0.94000"
#> [1] "epoch: 2 loss: 17.97120000 w = -0.19120"
#> [1] "epoch: 3 loss: 4.14056448 w = 0.16822"
#> [1] "epoch: 4 loss: 0.95398606 w = 0.34075"
#> [1] "epoch: 5 loss: 0.21979839 w = 0.42356"
#> [1] "epoch: 6 loss: 0.05064155 w = 0.46331"
#> [1] "epoch: 7 loss: 0.01166781 w = 0.48239"
#> [1] "epoch: 8 loss: 0.00268826 w = 0.49155"
#> [1] "epoch: 9 loss: 0.00061938 w = 0.49594"
#> [1] "epoch: 10 loss: 0.00014270 w = 0.49805"
```
From the output, we can see that the loss decreases as the weight is updated. The final value of the weight at iteration \\(10\\) is \\(0\.49805\\). We can now make predictions on new data.
```
# Make predictions on new data using the learned weight.
fp(learned_w, 7)
#> [1] 3.486366
fp(learned_w, -88)
#> [1] -43.8286
```
Now, you can try to change the training set to make the network learn a different arithmetic operation!
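For example, here is a sketch (reusing the `gradient.descent()` and `fp()` functions defined above) with a training set where the output is the input multiplied by \\(3\\):

```
# The network should now learn a weight close to 3.
train_set2 <- data.frame(x = c(1.0, 2.0, 3.0), y = c(3.0, 6.0, 9.0))
learned_w2 <- gradient.descent(train_set2, lr = 0.01, epochs = 20)
fp(learned_w2, 5) # Should be close to 15.
```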
In the previous example, we considered a very simple neural network consisting of a single unit. In this case, the partial derivative with respect to the single weight was calculated directly. For bigger networks with more layers and activations, the final output becomes a composition of functions. That is, the activation values of a layer \\(l\\) depend on its weights which are also affected by the previous layer’s \\(l\-1\\) weights and so on. So, the derivatives (gradients) can be computed using the chain rule \\(f(g(x))' \= f'(g(x)) \\cdot g'(x)\\). This can be performed efficiently by an algorithm known as **backpropagation**.
> “What backpropagation actually lets us do is compute the partial derivatives \\(\\partial C\_x / \\partial w\\) and \\(\\partial C\_x / \\partial b\\) for a single training example”. (Michael Nielsen, 2019\)[20](#fn20).
Here, \\(C\\) refers to the loss function which is also called the cost function. In modern deep learning libraries like TensorFlow, this procedure is efficiently implemented with a computational graph. If you want to learn the details about backpropagation I recommend you to check this post by DEEPLIZARD (<https://deeplizard.com/learn/video/XE3krf3CQls>) which consists of \\(5\\) parts including videos.
### 8\.1\.6 Stochastic Gradient Descent
We have seen how gradient descent iterates over all training points before updating each parameter. To recall, an epoch is one pass through all parameters, and for each parameter, the derivative needs to be computed with every training point. If the training set consists of thousands or millions of points, this method becomes very time\-consuming. Furthermore, in practice neural networks do not have one or two parameters but thousands or millions. In those cases, the training can be done more efficiently by using **stochastic gradient descent (SGD)**. This method adds two main modifications to the classic gradient descent:
1. At the beginning, the training set is shuffled (this is the stochastic part). This is necessary for the method to work.
2. The training set is divided into \\(b\\) batches with \\(m\\) data points each. This \\(m\\) is known as the **batch size** and is a hyperparameter that we need to define.
Then, at each epoch all batches are iterated and the parameters are updated based on each batch and not the entire training set, for example:
\\\[\\begin{equation}
w \= w \- \\alpha \\sum\_{i\=1}^m{2x\_i(x\_i w \- y\_i)}
\\end{equation}\\]
Again, an epoch is one pass through all parameters and all batches. Now you may be wondering why this method is more efficient if an epoch still involves the same number of operations but they are split into chunks. Part of the reason is that since the parameter updates are more frequent, the loss also improves more quickly. Another reason is that the operations within each batch can be optimized and performed in parallel, for example, by using a GPU. One thing to note is that each update is based on less information by only using \\(m\\) points instead of the entire data set. This can introduce some noise in the learning but at the same time this can help to get out of local minima. In practice, SGD needs more epochs to converge compared to gradient descent but overall, it will take less time. From now on, this is the method we will use to train our networks. A sketch of the procedure is shown below.
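The following is a minimal sketch of mini\-batch SGD for the single\-unit network from the previous section (the function name and structure are assumptions; it reuses `derivative()` from `gradient_descent.R`):

```
# Stochastic gradient descent with mini-batches.
stochastic.gradient.descent <- function(train_set, lr = 0.01,
                                        epochs = 5, batch_size = 2){
  w <- -2.5 # Initialize weight at 'random'.
  for(i in 1:epochs){
    # Shuffle the training set (the 'stochastic' part).
    train_set <- train_set[sample(nrow(train_set)), ]
    # Split row indices into batches of size batch_size.
    batches <- split(1:nrow(train_set),
                     ceiling(seq_len(nrow(train_set)) / batch_size))
    for(b in batches){
      batch <- train_set[b, ]
      # Update the weight based on this batch only.
      w <- w - lr * sum(derivative(w, batch$x, batch$y))
    }
  }
  return(w)
}
```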
Typical batch sizes are \\(4\\), \\(8\\), \\(16\\), \\(32\\), \\(64\\), \\(128\\), etc. Opinion is divided in this respect. Some say it’s better to choose small batch sizes but others say the bigger the better. For any particular problem, it is difficult to say which batch size is optimal. Usually, one needs to choose the batch size empirically by trying different ones.
Be aware that when using GPUs, a big batch size can cause out of memory errors since the GPU may not have enough memory to allocate the batch.
### 8\.1\.1 Sigmoid and ReLU Units
As previously mentioned, perceptrons have some limitations that restrict their applicability including the fact that they are linear models. In practice, problems are complex and most of them are non\-linear. One way to overcome this limitation is to introduce non\-linearities and this can be done by using a different type of activation function. Remember that a unit can be modeled as \\(f(x) \= g(wx\+b)\\) where \\(g(x)\\) is some activation function. For the perceptron, \\(g(x)\\) is the *step function*. However, another practical limitation not mentioned before is that the step function can change abruptly from \\(0\\) to \\(1\\) and vice versa. Small changes in \\(x\\), \\(w\\), or \\(b\\) can completely change the output. This is a problem during learning and inference time. Instead, we would prefer a smooth version of the step function, for example, the **sigmoid function** which is also known as the **logistic function**:
\\\[\\begin{equation}
s(x) \= \\frac{1}{1 \+ e^{\-x}}
\\tag{8\.4}
\\end{equation}\\]
This function has an ‘S’ shape (Figure [8\.5](deeplearning.html#fig:nnSigmoid)) and as opposed to a step function, this one is smooth. The range of this function is from \\(0\\) to \\(1\\).
FIGURE 8\.5: Sigmoid function.
If we substitute the activation function in Equation [(8\.2\)](deeplearning.html#eq:nnUnit) with the sigmoid function we get our **sigmoid unit**:
\\\[\\begin{equation}
f(x) \= \\frac{1}{1 \+ e^{\-(w \\cdot x \+ b)}}
\\tag{8\.5}
\\end{equation}\\]
Sigmoid units have been one of the most commonly used types of units when building bigger neural networks. Another advantage is that the outputs are real values that can be interpreted as probabilities. For instance, if we want to make binary decisions we can set a threshold. For example, if the output of the sigmoid unit is \\(\> 0\.5\\) then return a \\(1\\). Of course, that threshold would depend on the application. If we need more confidence about the result we can set a higher threshold.
In the last years, another type of unit has been successfully applied to train neural networks, the **rectified linear unit** or **ReLU** for short (Figure [8\.6](deeplearning.html#fig:nnRectified)).
FIGURE 8\.6: Rectifier function.
The activation function of this unit is the rectifier function:
\\\[\\begin{equation}
rectifier(x) \=
\\begin{cases}
0 \& \\textit{if } x \< 0, \\\\
x \& \\textit{if } x \\geq 0
\\end{cases}
\\tag{8\.6}
\\end{equation}\\]
This one is also called the *ramp function* and is one of the simplest non\-linear functions and probably the most common one used in modern big neural networks. These units present several advantages, being among them, efficiency during training and inference time.
In practice, many other activation functions are used but the most common ones are sigmoid and ReLU units. In the following link, you can find an extensive list of activation functions: <https://en.wikipedia.org/wiki/Activation_function>
So far, we have been talking about **single units**. In the next section, we will see how these single units can be assembled to build bigger artificial neural networks.
### 8\.1\.2 Assembling Units into Layers
Perceptrons, sigmoid, and ReLU units can be thought of as very simple neural networks. By connecting several units, one can build more complex neural networks. For historical reasons, neural networks are also called **multilayer perceptrons** regardless whether the units are perceptrons or not. Typically, units are grouped into layers. Figure [8\.7](deeplearning.html#fig:nnExampleNN) shows an example neural network with \\(3\\) layers. An **input layer** with \\(3\\) nodes, a **hidden layer** with \\(2\\) nodes, and an **output layer** with \\(1\\) node.
FIGURE 8\.7: Example neural network.
In this type of diagram, the nodes represent units (perceptrons, sigmoids, ReLUs, etc.) except for the input layer. In the input layer, nodes represent input variables (input features). In the above example, the \\(3\\) nodes in the input layer simply indicate that the network takes as input \\(3\\) variables. In this layer, no operations are performed but the input values are passed to the next layer after multipliying them by their corresponding edge weights.
This network only has one hidden layer. Hidden layers are called like that because they do not have direct contact with the external world. Finally, there is an output layer with a single unit. We could also have an output layer with more than one unit. Most of the time, we will have **fully connected** neural networks. That is, all units have incoming connections from all nodes in the previous layer (as in the previous example).
For each specific problem, we need to define several building blocks for the network. For example, the number of layers, the number of units in each layer, the type of units (sigmoid, ReLU, etc.), and so on. This is known as the **architecture** of the network. Choosing a good architecture for a given problem is not a trivial task. It is advised to start with an architecture that was used to solve a similar problem and then fine\-tune it for your specific problem. There exist some automatic ways to optimize the network architecture but those methods are out of the scope of this book.
We already saw how a unit can produce a result based on the inputs by using *forward propagation*. For more complex networks the process is the same! Consider the network shown in Figure [8\.8](deeplearning.html#fig:nnForward). It consists of two inputs and one output. It also has one hidden layer with \\(2\\) units.
FIGURE 8\.8: Example of forward propagation.
Each node is labeled as \\(n\_{l,n}\\) where \\(l\\) is the layer and \\(n\\) is the unit number.
The two input values are \\(1\\) and \\(0\.5\\). They could be temperature measurements, for example. Each edge has an associated weight. For simplicity, let’s assume that the activation function of the units is the identity function \\(g(x)\=x\\). The bold underlined number inside the nodes of the hidden and output layers are the biases. Here we assume that the network is already trained (later we will see how those weights and biases are learned). To get the final result, for each node, its inputs are multiplied by their corresponding weights and added. Then, the bias is added. Next, the activation function is applied. In this case, it is just the identify function (returns the same value). The outputs of the nodes in the hidden layer become the inputs of the next layer and so on.
In this example, first we need to compute the outputs of nodes \\(n\_{2,1}\\) and \\(n\_{2,2}\\):
output of \\(n\_{2,1} \= (1\)(2\) \+ (0\.5\)(1\) \+ 1 \= 3\.5\\)
output of \\(n\_{2,2} \= (1\)(\-3\) \+ (0\.5\)(5\) \+ 0 \= \-0\.5\\)
Finally, we can compute the output of the last node using the outputs of the previous nodes:
output of \\(n\_{3,1} \= (3\.5\)(1\) \+ (\-0\.5\)(\-1\) \+ 3 \= 7\\).
### 8\.1\.3 Deep Neural Networks
By increasing the number of layers and the number of units in each layer, one can build more complex networks. But what is a deep neural network (DNN)? There is not a strict rule but some people say that a network with more than \\(2\\) hidden layers is a deep network. Yes, that’s all it takes to build a DNN! Figure [8\.9](deeplearning.html#fig:nnDNN) shows an example of a deep neural network.
FIGURE 8\.9: Example of a deep neural network.
A DNN has nothing special compared to a traditional neural network except that it has many layers. One of the reasons why they became so popular until recent years is because before, it was not possible to efficiently train them. With the advent of specialized hardware like graphics processing units (GPUs), it is now possible to efficiently train big DNNs. The introduction of ReLU units was also a key factor that allowed the training of even bigger networks. The availability of big quantities of data was another key factor that allowed the development of deep learning technologies. Note that deep learning is not limited to DNNs but it also encompasses other types of architectures like convolutional networks and recurrent neural networks, to name a few. Convolutional layers will be covered later in this chapter.
### 8\.1\.4 Learning the Parameters
We have seen how *forward propagation* can be used at inference time to compute the output of the network based on the input values. In the previous examples, we assumed that the network’s parameters (weights and biases) were already learned. In practice, you most likely will use libraries and frameworks to build and train neural networks. Later in this chapter, I will show you how to use TensorFlow and Keras within R. But, before that, I will explain how the networks’ parameters are learned and how to code and train a very simple network from scratch.
Back to the problem, the objective is to find the parameters’ values based on training data such that the predicted result for any input data point is as close as possible as the true value. Put in other words, we want to find the parameters’ values that reduce the network’s prediction error.
One way to estimate the network’s error is by computing the squared difference between the prediction \\(y'\\) and the real value \\(y\\), that is, \\(error \= (y' \- y)^2\\). This is how the error can be computed for a single training data point. The error function is typically called the **loss function** and denoted by \\(L(\\theta)\\) where \\(\\theta\\) represents the parameters of the network (weights and biases). In this example the loss function is \\(L(\\theta)\=(y'\- y)^2\\).
If there is more than one training data point (which is often the case), the loss function is just the average of the individual squared differences which is known as the **mean squared error (MSE)**:
\\\[\\begin{equation}
L(\\theta) \= \\frac{1}{N} \\sum\_{n\=1}^N{(y'\_n \- y\_n)^2}
\\tag{8\.7}
\\end{equation}\\]
The mean squared error (MSE) loss function is commonly used for regression problems. For classification problems, the average cross\-entropy loss function is usually preferred (covered later in this chapter).
The problem of finding the best parameters can be formulated as an optimization problem, that is, find the optimal parameters such that the loss function is minimized. This is the learning/training phase of a neural network. Formally, this can be stated as:
\\\[\\begin{equation}
\\operatorname\*{arg min}\_{\\theta} L(\\theta)
\\tag{8\.8}
\\end{equation}\\]
This notation means: find and return the weights and biases that make the loss function be as small as possible.
The most common method to train neural networks is called **gradient descent**. The algorithm updates the parameters in an iterative fashion based on the loss. This algorithm is suitable for complex functions with millions of parameters.
Suppose there is a network with only \\(1\\) weight and no bias with MSE as loss function (Equation [(8\.7\)](deeplearning.html#eq:lossMSE)). Figure [8\.10](deeplearning.html#fig:nnGD) shows a plot of the loss function. This is a quadratic function that only depends on the value of \\(w\\). The task is to find the \\(w\\) where the function is at its minimum.
FIGURE 8\.10: Gradient descent in action.
Gradient descent starts by assigning \\(w\\) a random value. Then, at each step and based on the error, \\(w\\) is updated in the direction that minimizes the loss function. In the previous figure, the **global minimum** is found after \\(5\\) iterations. In practice, loss functions are more complex and have many **local minima** (Figure [8\.11](deeplearning.html#fig:nnLM)). For complex functions, it is difficult to find a global minimum but gradient descent can find a local minimum that is good enough to solve the problem at hand.
FIGURE 8\.11: Function with 1 global minimum and several local minima.
But in what direction and how much is \\(w\\) moved in each iteration? The direction and magnitude are estimated by computing the derivative of the loss function with respect to the weight \\(\\frac{\\partial L}{\\partial w}\\). The derivative is also called the gradient and denoted by \\(\\nabla L\\). The iterative gradient descent procedure is listed below:
**loop** until convergence or max iterations (*epochs*)
**for each** \\(w\_i\\) in \\(W\\) **do:**
\\(w\_i \= w\_i \- \\alpha \\frac{\\partial L(W)}{\\partial w\_i}\\)
The outer loop is run until the algorithm converges or until a predefined number of iterations is reached. Each iteration is also called an **epoch**. Each weight is updated with the rule: \\(w\_i \= w\_i \- \\alpha \\frac{\\partial L(W)}{\\partial w\_i}\\). The derivative part will give us the direction and magnitude. The \\(\\alpha\\) is called the **learning rate** and it controls how ‘fast’ we move. The learning rate is a constant defined by the user, thus, it is a **hyperparameter**. A high learning rate can cause the algorithm to miss the local minima and the loss can start to increase. A small learning rate will cause the algorithm to take more time to converge. Figure [8\.12](deeplearning.html#fig:nnLR) illustrates both scenarios.
FIGURE 8\.12: Comparison between high and low learning rates. a) Big learning rate. b) Small learning rate.
Selecting an appropriate learning rate will depend on the application but common values are between \\(0\.0001\\) and \\(0\.05\\).
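To see both behaviors numerically, here is a toy example (not part of the book’s scripts) using the loss \\(L(w)\=(w\-1)^2\\), whose derivative is \\(2(w\-1)\\):

```
# Toy quadratic loss L(w) = (w - 1)^2 with derivative 2(w - 1).
w_small <- 5; w_big <- 5
for(i in 1:20){
  w_small <- w_small - 0.1 * 2 * (w_small - 1) # Small learning rate.
  w_big   <- w_big   - 1.1 * 2 * (w_big - 1)   # Big learning rate.
}
w_small # Converges towards the minimum at w = 1.
#> [1] 1.046117
w_big   # Overshoots further on every step and diverges.
#> [1] 154.3504
```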
Let’s see how gradient descent works with a step by step example. Consider a very simple neural network consisting of an input layer with only one input feature and an output layer with one unit and no bias. To make it even simpler, the activation function of the output unit is the identity function \\(f(x)\=x\\). Assume that as training data we have a single data point. Figure [8\.13](deeplearning.html#fig:nnStepExample) shows the simple network and the training data. The training data point only has one input variable (\\(x\\)) and an output (\\(y\\)). We want to train this network such that it can make predictions on new data points. The training point has an input feature of \\(x\=3\\) and the expected output is \\(y\=1\.5\\). For this particular training point, it seems that the output is equal to the input divided by \\(2\\). Thus, based on this single training data point the network should learn how to divide any other input by \\(2\\).
FIGURE 8\.13: a) A simple neural network consisting of one unit. b) The training data with only one row.
Before we start the training we need to define \\(3\\) things:
1. The loss function. This is a regression problem so we can use the MSE. Since there is a single data point our loss function becomes \\(L(w)\=(y' \- y)^2\\). Here, \\(y\\) is the ground truth output value and \\(y'\\) is the predicted value. We know how to make predictions using forward propagation. In this case, it is the product between the input value and the single weight, and the activation function has no effect (it returns the same value as its input). We can rewrite the loss function as \\(L(w)\=(xw \- y)^2\\).
2. We need to define a learning rate. For now, we can set it to \\(\\alpha \= 0\.05\\).
3. The weights need to be initialized at random. Let’s assume the single weight is ‘randomly’ initialized with \\(w\=2\\).
Now we can use gradient descent to iteratively update the weight. Remember that the updating rule is:
\\\[\\begin{equation}
w \= w \- \\alpha \\frac{\\partial L(w)}{\\partial w}
\\end{equation}\\]
The partial derivative of the loss function with respect to \\(w\\) is:
\\\[\\begin{equation}
\\frac{\\partial L(w)}{\\partial w} \= 2x(xw \- y)
\\end{equation}\\]
If we substitute the derivative in the updating rule we get:
\\\[\\begin{equation}
w \= w \- \\alpha 2x(xw \- y)
\\end{equation}\\]
We already know that \\(\\alpha\=0\.05\\), the input value is \\(x\=3\\), the output is \\(y\=1\.5\\) and the initial weight is \\(w\=2\\). So we can start updating \\(w\\). Figure [8\.14](deeplearning.html#fig:nnTrainProgress) shows the initial state (iteration 0\) and \\(3\\) additional iterations. In the initial state, \\(w\=2\\) and with that weight the loss is \\(20\.25\\). In iteration \\(1\\), the weight is updated and now its value is \\(0\.65\\). With this new weight, the loss is \\(0\.2025\\). That was a substantial reduction in the error! After three iterations we see that the final weight is \\(w\=0\.501\\) and the loss is very close to zero.
FIGURE 8\.14: First 3 gradient descent iterations (epochs).
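We can double\-check the numbers in Figure [8\.14](deeplearning.html#fig:nnTrainProgress) by replaying the update rule in R (a quick sketch, not part of the book’s scripts):

```
x <- 3; y <- 1.5 # The single training point.
alpha <- 0.05    # Learning rate.
w <- 2           # Initial 'random' weight.
for(i in 1:3){
  w <- w - alpha * 2 * x * (x * w - y) # Gradient descent update.
  print(paste0("iteration: ", i, " w = ", w, " loss = ", (x * w - y)^2))
}
#> [1] "iteration: 1 w = 0.65 loss = 0.2025"
#> [1] "iteration: 2 w = 0.515 loss = 0.002025"
#> [1] "iteration: 3 w = 0.5015 loss = 2.025e-05"
```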
Now, we can start doing predictions with our very simple neural network! To do so, we use forward propagation on the new input data using the learned weight \\(w\=0\.501\\). Figure [8\.15](deeplearning.html#fig:nnExamplePredictions) shows the predictions on new training data points that were never seen before by the network.
FIGURE 8\.15: Example predictions on new data points.
Even though the predictions are not perfect, they are very close to the expected value (division by \\(2\\)) considering that the network is very simple and was only trained with a single data point and for only \\(3\\) epochs!
If the training set has more than one data point, then we need to compute the derivative for each point and accumulate them (the derivative of a sum is equal to the sum of the derivatives). In the previous example, the update rule becomes:
\\\[\\begin{equation}
w \= w \- \\alpha \\sum\_{i\=1}^N{2x\_i(x\_i w \- y\_i)}
\\end{equation}\\]
This means that before updating a weight, first, we need to compute the derivative for each point and add them. This needs to be done for every parameter in the network. Thus, one **epoch** is a pass through all training points and all parameters.
### 8\.1\.5 Parameter Learning Example in R
`gradient_descent.R`
In the previous section, we went step by step through training a neural network with a single unit and a single training data point. Here, we will see how to implement that simple network in R, this time with more training data. The code can be found in the script `gradient_descent.R`.
This code implements the same network as the previous example. That is, one neuron, one input, no bias, and activation function \\(f(x) \= x\\). We start by creating a sample training set with \\(3\\) points. Again, the output is the input divided by \\(2\\).
```
train_set <- data.frame(x = c(3.0,4.0,1.0), y = c(1.5, 2.0, 0.5))
# Print the train set.
print(train_set)
#> x y
#> 1 3 1.5
#> 2 4 2.0
#> 3 1 0.5
```
Then we need to implement three functions: forward propagation, the loss function, and the derivative of the loss function.
```
# Forward propagation w*x
fp <- function(w, x){
return(w * x)
}
# Loss function (y - y')^2
loss <- function(w, x, y){
predicted <- fp(w, x) # This is y'
return((y - predicted)^2)
}
# Derivative of the loss function. 2x(xw - y)
derivative <- function(w, x, y){
return(2.0 * x * ((x * w) - y))
}
```
Now we are all set to implement the `gradient.descent()` function. The first parameter is the train set, the second parameter is the learning rate \\(\\alpha\\), and the last parameter is the number of epochs. The weight is initialized to some ‘random’ number (selected manually here for the sake of the example). The function returns the final learned weight.
```
# Gradient descent.
gradient.descent <- function(train_set, lr = 0.01, epochs = 5){
w = -2.5 # Initialize weight at 'random'
for(i in 1:epochs){
derivative.sum <- 0.0
loss.sum <- 0.0
# Iterate each data point in train_set.
for(j in 1:nrow(train_set)){
point <- train_set[j, ]
derivative.sum <- derivative.sum + derivative(w, point$x, point$y)
loss.sum <- loss.sum + loss(w, point$x, point$y)
}
# Update weight.
w <- w - lr * derivative.sum
# mean squared error (MSE)
mse <- loss.sum / nrow(train_set)
print(paste0("epoch: ", i, " loss: ",
formatC(mse, digits = 8, format = "f"),
" w = ", formatC(w, digits = 5, format = "f")))
}
return(w)
}
```
Now, let’s train the network with a learning rate of \\(0\.01\\) and for \\(10\\) epochs. This function will print, for each epoch, the loss and the current weight.
```
#### Train the 1 unit network with gradient descent ####
lr <- 0.01 # set learning rate.
set.seed(123)
# Run gradient descent to find the optimal weight.
learned_w = gradient.descent(train_set, lr, epochs = 10)
#> [1] "epoch: 1 loss: 78.00000000 w = -0.94000"
#> [1] "epoch: 2 loss: 17.97120000 w = -0.19120"
#> [1] "epoch: 3 loss: 4.14056448 w = 0.16822"
#> [1] "epoch: 4 loss: 0.95398606 w = 0.34075"
#> [1] "epoch: 5 loss: 0.21979839 w = 0.42356"
#> [1] "epoch: 6 loss: 0.05064155 w = 0.46331"
#> [1] "epoch: 7 loss: 0.01166781 w = 0.48239"
#> [1] "epoch: 8 loss: 0.00268826 w = 0.49155"
#> [1] "epoch: 9 loss: 0.00061938 w = 0.49594"
#> [1] "epoch: 10 loss: 0.00014270 w = 0.49805"
```
From the output, we can see that the loss decreases as the weight is updated. The final value of the weight at iteration \\(10\\) is \\(0\.49805\\). We can now make predictions on new data.
```
# Make predictions on new data using the learned weight.
fp(learned_w, 7)
#> [1] 3.486366
fp(learned_w, -88)
#> [1] -43.8286
```
Now, you can try to change the training set to make the network learn a different arithmetic operation!
In the previous example, we considered a very simple neural network consisting of a single unit. In this case, the partial derivative with respect to the single weight was calculated directly. For bigger networks with more layers and activations, the final output becomes a composition of functions. That is, the activation values of a layer \\(l\\) depend on its own weights and on the activations of the previous layer \\(l\-1\\), which in turn depend on that layer’s weights, and so on. So, the derivatives (gradients) can be computed using the chain rule \\(f(g(x))' \= f'(g(x)) \\cdot g'(x)\\). This can be performed efficiently by an algorithm known as **backpropagation**.
> “What backpropagation actually lets us do is compute the partial derivatives \\(\\partial C\_x / \\partial w\\) and \\(\\partial C\_x / \\partial b\\) for a single training example”. (Michael Nielsen, 2019\)[20](#fn20).
Here, \\(C\\) refers to the loss function, which is also called the cost function. In modern deep learning libraries like TensorFlow, this procedure is efficiently implemented with a computational graph. If you want to learn the details about backpropagation, I recommend this post by DEEPLIZARD (<https://deeplizard.com/learn/video/XE3krf3CQls>) which consists of \\(5\\) parts including videos.
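To make the chain rule more concrete, consider a tiny two\-layer network with identity activations, \\(y' \= w\_2(w\_1 x)\\), and loss \\((y'\-y)^2\\). By the chain rule, \\(\\partial L/\\partial w\_1 \= 2(y'\-y) \\cdot w\_2 \\cdot x\\). The following sketch (an illustration, not how TensorFlow implements it) checks the analytic gradient against a numerical approximation:

```
x <- 3; y <- 1.5     # A single training point.
w1 <- 0.8; w2 <- 0.4 # The two weights.
loss <- function(w1, w2) (w2 * w1 * x - y)^2
# Analytic gradient with respect to w1 (chain rule).
yprime <- w2 * w1 * x
analytic <- 2 * (yprime - y) * w2 * x
# Numerical gradient (finite differences) for comparison.
eps <- 1e-6
numerical <- (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
print(c(analytic, numerical))
#> [1] -1.296 -1.296
```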
### 8\.1\.6 Stochastic Gradient Descent
We have seen how gradient descent iterates over all training points before updating each parameter. To recall, an epoch is one pass through all parameters, and for each parameter the derivative at each training point needs to be computed. If the training set consists of thousands or millions of points, this method becomes very time\-consuming. Furthermore, in practice neural networks do not have one or two parameters but thousands or millions. In those cases, the training can be done more efficiently by using **stochastic gradient descent (SGD)**. This method adds two main modifications to the classic gradient descent:
1. At the beginning, the training set is shuffled (this is the stochastic part). This is necessary so that the batches do not follow any particular ordering of the data.
2. The training set is divided into \\(b\\) batches with \\(m\\) data points each. This \\(m\\) is known as the **batch size** and is a hyperparameter that we need to define.
Then, at each epoch all batches are iterated and the parameters are updated based on each batch and not the entire training set, for example:
\\\[\\begin{equation}
w \= w \- \\alpha \\sum\_{i\=1}^m{2x\_i(x\_i w \- y\_i)}
\\end{equation}\\]
Again, an epoch is one pass through all parameters and all batches. Now you may be wondering why this method is more efficient if an epoch still involves the same number of operations, just split into chunks. Part of the reason is that since the parameter updates are more frequent, the loss also improves more quickly. Another reason is that the operations within each batch can be optimized and performed in parallel, for example, by using a GPU. One thing to note is that each update is based on less information, since it uses only \\(m\\) points instead of the entire dataset. This can introduce some noise in the learning, but at the same time it can help to escape local minima. In practice, SGD needs more epochs to converge compared to gradient descent but overall, it will take less time. From now on, this is the method we will use to train our networks.
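As a sketch of these ideas, the `gradient.descent()` function from the previous section could be turned into a minimal SGD version. This reuses the `derivative()` function and the one\-unit network from `gradient_descent.R`; the `sgd()` name and structure here are just for illustration:

```
# Minimal stochastic gradient descent for the one-unit network.
sgd <- function(train_set, lr = 0.01, epochs = 10, batch_size = 1){
  w <- -2.5 # Initialize weight at 'random'.
  n <- nrow(train_set)
  for(i in 1:epochs){
    # Shuffle the training set (the stochastic part).
    shuffled <- train_set[sample(n), ]
    # Iterate the batches and update the weight after each one.
    for(s in seq(1, n, by = batch_size)){
      batch <- shuffled[s:min(s + batch_size - 1, n), ]
      w <- w - lr * sum(derivative(w, batch$x, batch$y))
    }
  }
  return(w)
}
set.seed(123)
# The learned weight should again end up close to the optimal 0.5.
sgd(train_set, lr = 0.01, epochs = 10, batch_size = 1)
```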
Typical batch sizes are powers of two: \\(4\\), \\(8\\), \\(16\\), \\(32\\), \\(64\\), \\(128\\), etc. Opinion is divided in this respect: some say it’s better to choose small batch sizes while others say the bigger the better. For any particular problem, it is difficult to say in advance which batch size is optimal. Usually, one needs to choose the batch size empirically by trying different ones.
Be aware that when using GPUs, a big batch size can cause out\-of\-memory errors since the GPU may not have enough memory to allocate the batch.
8\.2 Keras and TensorFlow with R
--------------------------------
TensorFlow[21](#fn21) is an open\-source computational library used mainly for machine learning and more specifically, for deep learning. It has many available tools and extensions to perform a wide variety of tasks such as data pre\-processing, model optimization, reinforcement learning, and probabilistic reasoning, to name a few. TensorFlow is very flexible and is used for research, development, and in production environments. It provides an API that contains the necessary building blocks to build different types of neural networks including CNNs, autoencoders, Recurrent Neural Networks, etc. It has two main versions: a CPU version and a GPU version. The latter allows the execution of programs by taking advantage of the computational power of graphics processing units, which makes training models much faster. Despite all this flexibility and power, it can take some time to learn the basics. Sometimes you need a way to build and test machine learning models in a simple way, for example, when trying new ideas or prototyping. Fortunately, there exists an interface to TensorFlow called Keras[22](#fn22).
Keras offers an API that abstracts many of TensorFlow’s details, making it easier to build and train machine learning models. Keras is what I will use when building deep learning models in this book. Keras provides an interface not only to TensorFlow but also to other deep learning engines such as Theano[23](#fn23), Microsoft Cognitive Toolkit[24](#fn24), etc. Keras was developed by François Chollet and was later integrated with TensorFlow.
Most of the time its API is sufficient for common tasks, and it provides ways to add extensions when it is not. In this book, we will only use a subset of the available Keras functions but that will be enough for our purposes of building models to predict behaviors. If you want to learn more about Keras, I recommend the book *“Deep Learning with R”* by Chollet and Allaire ([2018](#ref-Chollet2018)).
Examples in this book will use Keras with TensorFlow as the backend. In R, we can access Keras through the `keras` package ([Allaire and Chollet 2019](#ref-keras)).
Instructions on how to install Keras and TensorFlow can be found in Appendix [A](appendixInstall.html#appendixInstall). I recommend installing them at this point since the next section will make use of Keras.
In the next section, we will start with a simple model built with Keras and the following examples will introduce more functions. By the end of this chapter you will be able to build and train efficient deep neural networks including Convolutional Neural Networks.
### 8\.2\.1 Keras Example
`keras_simple_network.R`
If you haven’t already installed Keras and TensorFlow, I recommend doing so at this point. Instructions on how to install the required software can be found in Appendix [A](appendixInstall.html#appendixInstall).
In the previous section, I showed how to implement gradient descent in R (see `gradient_descent.R`). Now, I will show how to implement the same simple network using Keras. Recall that our network has one unit, one input, one output, and no bias. The code can be found in the script `keras_simple_network.R`. First, the `keras` library is loaded and a sample training set is created. Then, the function `keras_model_sequential()` is used to instantiate a new empty model. It is called sequential because it consists of a sequence of layers. At this point it does not have any layers yet.
```
library(keras)
# Generate a train set.
# First element is the input x and
# the second element is the output y.
train_set <- data.frame(x = c(3.0,4.0,1.0),
y = c(1.5, 2.0, 0.5))
# Instantiate a sequential model.
model <- keras_model_sequential()
```
We can now start adding layers (only one in this example). To do so, the `layer_dense()` method can be used. The *dense* name means that this will be a densely (fully) connected layer. This layer will be the output layer with a single unit.
```
model %>%
layer_dense(units = 1,
use_bias = FALSE,
activation = 'linear',
input_shape = 1)
```
The first argument `units = 1` specifies the number of units in this layer. By default, a bias is added in each layer. To make it the same as in the previous example, we will not use a bias so `use_bias` is set to `FALSE`. The `activation` specifies the activation function. Here it is set to `'linear'` which means that no activation function is applied \\(f(x)\=x\\). Finally, we need to specify the number of inputs with `input_shape`. In this case, there is only one feature.
Before training the network we need to compile the model and specify the learning algorithm. In this case, stochastic gradient descent with a learning rate of \\(\\alpha\=0\.01\\). We also need to specify which loss function to use (we’ll use mean squared error). At every epoch, some performance metrics can be computed. Here, we specify that we want the mean squared error and mean absolute error. These metrics are computed on the train data. After compiling the model, the `summary()` method can be used to print a textual description of it. Figure [8\.16](deeplearning.html#fig:simpleSummary) shows the output of the `summary()` function.
```
model %>% compile(
optimizer = optimizer_sgd(lr = 0.01),
loss = 'mse',
metrics = list('mse','mae')
)
summary(model)
```
FIGURE 8\.16: Summary of the simple neural network.
From this output, we see that the network consists of a single dense layer with \\(1\\) unit.
To start the actual training procedure we need to call the `fit()` function. Its first argument is the input training data (features) as a matrix. The second argument specifies the corresponding true outputs. We let the algorithm run for \\(30\\) epochs. The batch size is set to \\(3\\) which is also the total number of data points in our data. In this example the dataset is very small so we set the batch size equal to the total number of instances. In practice, datasets can contain thousands of instances but the batch size will be relatively small (e.g., \\(8\\), \\(16\\), \\(32\\), etc.).
Additionally, there is a `validation_split` parameter that specifies the fraction of the train data to be used for validation. This is set to \\(0\\) (the default) since the dataset is very small. If the validation split is greater than \\(0\\), its performance metrics will also be computed. The `verbose` parameter sets the amount of information to be printed during training. A \\(0\\) will not print anything. A \\(2\\) will print one line of information per epoch. The last parameter `view_metrics` specifies if you want the progress of the loss and performance metrics to be plotted. The `fit()` function returns an object with summary statistics collected during training and is saved in the variable `history`.
```
history <- model %>% fit(
as.matrix(train_set$x), as.matrix(train_set$y),
epochs = 30,
batch_size = 3,
validation_split = 0,
verbose = 2,
view_metrics = TRUE
)
```
Figure [8\.17](deeplearning.html#fig:nnEpochs) presents the output of the `fit()` function in RStudio. In the console, the training loss, mean squared error, and mean absolute error are printed during each epoch. In the viewer pane, plots of the same metrics are shown. Here, we can see that the loss is nicely decreasing over time. The loss at epoch \\(30\\) should be close to \\(0\\).
FIGURE 8\.17: fit() function output.
The information saved in the `history` variable can be plotted with `plot(history)`. This will generate plots for the *loss*, *MSE*, and *MAE*.
The results can differ slightly every time the training is run due to the random weight initialization performed by the backend.
Once the model is trained, we can perform inference on new data points with the `predict_on_batch()` function. Here we are passing three data points.
```
model %>% predict_on_batch(c(7, 50, -220))
#> [,1]
#> [1,] 3.465378
#> [2,] 24.752701
#> [3,] -108.911880
```
Now, try setting a higher learning rate, for example, \\(0\.05\\). With this learning rate, the algorithm will converge much faster. On my computer, at epoch \\(11\\) the loss was already \\(0\\).
One practical thing to note is that if you make any changes in the `compile()` or `fit()` functions, you will have to rerun the code that instantiates and defines the network. This is because the model object saves the current state including the learned weights. If you rerun the `fit()` function on a previously trained model, it will start with the previously learned weights.
8\.3 Classification with Neural Networks
----------------------------------------
Neural networks are trained iteratively by modifying their weights while aiming to minimize the loss function. When the network predicts real numbers, the MSE loss function is normally used. For classification problems, the network should predict the most likely class out of \\(k\\) possible categories. To make a neural network work for classification problems, we need to introduce new elements to its architecture:
1. Add more units to the output layer.
2. Use a **softmax** activation function in the output layer.
3. Use **average cross\-entropy** as the loss function.
Let’s start with point number \\(1\\) (add more units to the output layer). This means that if the number of classes is \\(k\\), then the last layer needs to have \\(k\\) units, one for each class. That’s it! Figure [8\.18](deeplearning.html#fig:nnCrossEntropy) shows an example of a neural network with an output layer having \\(3\\) units. Each unit predicts a score for each of the \\(3\\) classes. Let’s call the vector of predicted scores \\(y'\\).
FIGURE 8\.18: Neural network with 3 output scores. Softmax is applied to the scores and the cross\-entropy with the true scores is calculated. This gives us an estimate of the similarity between the network’s predictions and the true values.
Point number \\(2\\) says that a **softmax** activation function should be used in the output layer. When training the network, just as with regression, we need a way to compute the error between the predicted values \\(y'\\) and the true values \\(y\\). In this case, \\(y\\) is a one\-hot encoded vector with a \\(1\\) at the position of the true class and \\(0s\\) elsewhere. If you are not familiar with one\-hot encoding, you can check the topic in chapter [5](preprocessing.html#preprocessing). As opposed to other classifiers like decision trees, \\(k\\)\-NN, etc., neural networks need the classes to be one\-hot encoded.
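As a quick refresher, one\-hot encoding can be done by hand in base R (later in this chapter we will rely on Keras’ `to_categorical()` function instead):

```
# One-hot encode integer class labels 0..(k-1) by hand.
labels <- c(0, 2, 1) # Example labels for k = 3 classes.
k <- 3
# Pick rows of the identity matrix; +1 because R indices start at 1.
onehot <- diag(k)[labels + 1, ]
print(onehot)
#>      [,1] [,2] [,3]
#> [1,]    1    0    0
#> [2,]    0    0    1
#> [3,]    0    1    0
```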
With regression problems, one way to compare the prediction with the true value is by using the squared difference: \\((y' \- y)^2\\). With classification, \\(y\\) and \\(y'\\) are vectors so we need another way to compare them. The true values \\(y\\) are represented as a vector of probabilities with a \\(1\\) at the position of the true class. The output scores \\(y'\\) do not necessarily sum up to \\(1\\); thus, they are not proper probabilities. Before comparing \\(y\\) and \\(y'\\) we need both to be probabilities. The **softmax** activation function is used to convert \\(y'\\) into a vector of probabilities. The softmax function is applied individually to each element of a vector:
\\\[\\begin{equation}
softmax(\\boldsymbol{x},i) \= \\frac{e^{\\boldsymbol{x}\_i}}{\\sum\_{j}{e^{\\boldsymbol{x}\_j}}}
\\tag{8\.9}
\\end{equation}\\]
where \\(\\boldsymbol{x}\\) is a vector and \\(i\\) is an index pointing to a particular element in the vector. Thus, to convert \\(y'\\) into a vector of probabilities we need to apply softmax to each of its elements. One thing to note is that this activation function depends on all the values in the vector (the output values of all units). Figure [8\.18](deeplearning.html#fig:nnCrossEntropy) shows the resulting vector of probabilities after applying softmax to each element of \\(y'\\). In R this can be implemented like the following:
```
# Scores from the figure.
scores <- c(3.0, 0.03, 1.2)
# Softmax function.
softmax <- function(scores){
exp(scores) / sum(exp(scores))
}
probabilities <- softmax(scores)
print(probabilities)
#> [1] 0.82196 0.04217 0.13587
print(sum(probabilities)) # Should sum up to 1.
#> [1] 1
```
We used R’s vectorization capabilities to compute the final vector of probabilities within the same function without having to iterate through each element. When using Keras, these operations are efficiently computed by the backend (for example, TensorFlow).
Finally, point \\(3\\) states that we need to use **average cross\-entropy** as the **loss function**. Now that we have converted \\(y'\\) into probabilities, we can compute its dissimilarity with \\(y\\). The distance (dissimilarity) between two vectors (\\(A\\),\\(B\\)) of probabilities can be computed using **cross\-entropy**:
\\\[\\begin{equation}
CE(A,B) \= \- \\sum\_{i}{B\_i log(A\_i)}
\\tag{8\.10}
\\end{equation}\\]
Thus, to get the dissimilarity between \\(y'\\) and \\(y\\) first we apply softmax to \\(y'\\) (to transform it into proper probabilities) and then, we compute the cross entropy between the resulting vector of probabilities and \\(y\\):
\\\[\\begin{equation}
CE(softmax(y'),y).
\\end{equation}\\]
In R this can be implemented with the following:
```
# Cross-entropy
CE <- function(A,B){
- sum(B * log(A))
}
y <- c(1, 0, 0)
print(CE(softmax(scores), y))
#> [1] 0.1961
```
Be aware that when computing the cross\-entropy with Equation [(8\.10\)](deeplearning.html#eq:crossentropy), **order matters**. The first argument should be the predicted scores \\(y'\\) and the second argument should be the true one\-hot encoded vector \\(y\\). We don’t want to apply the log function to a vector that contains zeros, and \\(y\\) always does. The predicted scores \\(y'\\), on the other hand, are rarely exactly \\(0\\); that’s why we apply the log function to them. In the very rare case when the predicted scores contain zeros, we can add a very small number to them. In practice, this is taken care of by the backend (e.g., TensorFlow).
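Reusing the `CE()`, `softmax()`, `scores`, and `y` objects defined above, a quick check shows what happens when the arguments are reversed and the log hits the zeros in \\(y\\):

```
# Correct order: predicted probabilities first, one-hot vector second.
CE(softmax(scores), y)
#> [1] 0.1961
# Reversed order: log() is applied to the zeros in y.
CE(y, softmax(scores))
#> [1] Inf
```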
Now we know how to compute the cross\-entropy for each training instance. The total loss function is then the **average cross\-entropy across the training points**. The next section shows how to build a neural network for classification using Keras.
### 8\.3\.1 Classification of Electromyography Signals
`keras_electromyography.R`
In this example, we will train a neural network with Keras to classify hand gestures based on muscle electrical activity. The *ELECTROMYOGRAPHY* dataset will be used here. The electrical activity was recorded with an electromyography (EMG) sensor worn as an armband. The data were collected and made available by Yashuk ([2019](#ref-kirill)). The armband device has \\(8\\) sensors which are placed on the skin surface and measure electrical activity from the right forearm at a sampling rate of \\(200\\) Hz. A video of the device can be found here: <https://youtu.be/OuwDHfY2Awg>
The data contains \\(4\\) different gestures: 0\-rock, 1\-scissors, 2\-paper, 3\-OK, and has \\(65\\) columns. The last column is the class label from \\(0\\) to \\(3\\). The first \\(64\\) columns are electrical measurements: \\(8\\) consecutive readings for each of the \\(8\\) sensors. The objective is to use the first \\(64\\) variables to predict the class.
The script `keras_electromyography.R` has the full code. We start by splitting the `dataset` into train (\\(60\\%\\)), validation (\\(10\\%\\)) and test (\\(30\\%\\)) sets. We will use the validation set to monitor the performance during each epoch. We also need to normalize the three sets but only learn the normalization parameters from the train set. The `normalize()` function included in the script will do the job.
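The exact implementation is in the script, but conceptually a min\-max version could look like the following sketch (the function and argument names here are hypothetical, not the ones in the script):

```
# Learn min-max normalization parameters from the train set only.
learn.params <- function(train_features){
  list(mins = apply(train_features, 2, min),
       maxs = apply(train_features, 2, max))
}
# Apply previously learned parameters to any set (train, val or test).
normalize <- function(features, params){
  scaled <- sweep(features, 2, params$mins, "-")
  sweep(scaled, 2, params$maxs - params$mins, "/")
}
```

Learning the parameters only from the train set avoids leaking information from the validation and test sets.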
One last thing we need to do is to format the data as matrices and one\-hot encode the class. The following code defines a function that takes as input a data frame and the expected number of classes. It assumes that the first columns are the features and the last column contains the class. First, it converts the features into a matrix and stores them in `x`. Then, it converts the class into an array and one\-hot encodes it using the `to_categorical()` function from Keras. The classes are stored in `y` and the function returns a list with the features and one\-hot encoded classes. Then, we can call the function with the train, validation, and test sets.
```
# Define a function to format features and one-hot encode the class.
format.to.array <- function(data, numclasses = 4){
x <- as.matrix(data[, 1:(ncol(data)-1)])
y <- as.array(data[, ncol(data)])
y <- to_categorical(y, num_classes = numclasses)
l <- list(x=x, y=y)
return(l)
}
# Format data
trainset <- format.to.array(trainset, numclasses = 4)
valset <- format.to.array(valset, numclasses = 4)
testset <- format.to.array(testset, numclasses = 4)
```
Let’s print the first one\-hot encoded classes from the train set:
```
head(trainset$y)
#> [,1] [,2] [,3] [,4]
#> [1,] 0 0 1 0
#> [2,] 0 0 1 0
#> [3,] 0 0 1 0
#> [4,] 0 0 0 1
#> [5,] 1 0 0 0
#> [6,] 0 0 0 1
```
The first three instances belong to the class *‘paper’* because the \\(1s\\) are in the third position. The corresponding integers are 0\-rock, 1\-scissors, 2\-paper, 3\-OK. So *‘paper’* comes in the third position. The fourth instance belongs to the class *‘OK’*, the fifth to *‘rock’*, and so on.
Now it’s time to define the neural network architecture! We will do so inside a function:
```
# Define the network's architecture.
get.nn <- function(ninputs = 64, nclasses = 4, lr = 0.01){
model <- keras_model_sequential()
model %>%
layer_dense(units = 32, activation = 'relu',
input_shape = ninputs) %>%
layer_dense(units = 16, activation = 'relu') %>%
layer_dense(units = nclasses, activation = 'softmax')
model %>% compile(
loss = 'categorical_crossentropy',
optimizer = optimizer_sgd(lr = lr),
metrics = c('accuracy')
)
return(model)
}
```
The first argument takes the number of inputs (features), the second argument specifies the number of classes and the last argument is the learning rate \\(\\alpha\\). The first line instantiates an empty keras sequential model. Then we add three layers. The first two are hidden layers and the last one will be the output layer. The input layer is implicitly defined when setting the `input_shape` parameter in the first layer. The first hidden layer has \\(32\\) units with a ReLU activation function. Since this is the first hidden layer, we also need to specify what is the expected input by setting the `input_shape`. In this case, the number of input features is \\(64\\). The next hidden layer has \\(16\\) ReLU units. For the output layer, the number of units needs to be equal to the number of classes (\\(4\\), in this case). Since this is a classification problem we also set the activation function to `softmax`.
Then, the model is compiled and the loss function is set to `categorical_crossentropy` because this is a classification problem. Stochastic gradient descent is used with a learning rate passed as a parameter. During training, we want to monitor the *accuracy*. Finally, the function returns the compiled model.
Now we can call our function to create the model. This one will have \\(64\\) inputs and \\(4\\) outputs and the learning rate is set to \\(0\.01\\). It is always useful to print a summary of the model with the `summary()` function.
```
model <- get.nn(64, 4, lr = 0.01)
summary(model)
```
FIGURE 8\.19: Summary of the network.
From the summary, we can see that the network has \\(3\\) layers. The second column shows the output shape which in this case corresponds to the number of units in each layer. The last column shows the number of parameters of each layer. For example, the first layer has \\(2080\\) parameters! Those come from the weights and biases. There are \\(64\\) (inputs) \* \\(32\\) (units) \= \\(2048\\) weights plus the \\(32\\) biases (one for each unit). The biases are included by default on each layer unless otherwise specified.
The second layer receives \\(32\\) inputs on each of its \\(16\\) units. Thus \\(32\\) \* \\(16\\) \+ \\(16\\) (biases) \= \\(528\\). The last layer has \\(16\\) inputs from the previous layer on each of its \\(4\\) units plus \\(4\\) biases, giving a total of \\(68\\) parameters. In total, the network has \\(2676\\) parameters. Here, we see how fast the number of parameters grows when adding more layers and units. These counts are easy to check directly in R, as shown below.
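```
# Verify the number of parameters of each layer.
64 * 32 + 32    # First hidden layer: weights + biases.
#> [1] 2080
32 * 16 + 16    # Second hidden layer.
#> [1] 528
16 * 4 + 4      # Output layer.
#> [1] 68
2080 + 528 + 68 # Total.
#> [1] 2676
```

Now, we use the `fit()` function to train the model.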
```
history <- model %>% fit(
trainset$x, trainset$y,
epochs = 300,
batch_size = 8,
validation_data = list(valset$x, valset$y),
verbose = 1,
view_metrics = TRUE
)
```
The model is trained for \\(300\\) epochs with a batch size of \\(8\\). We used the `validation_data` parameter to specify the validation set to compute the performance on unseen data. The training will take some minutes to complete. Bigger models can take hours or even several days. Thus, it is a good idea to save a model once it is trained. You can do so with the `save_model_hdf5()` or `save_model_tf()` methods. The former saves the model in `hdf5` format while the latter saves it in TensorFlow’s `SavedModel` format. The `SavedModel` is stored as a directory containing the necessary serialized files to restore the model’s state.
```
# Save model as hdf5.
save_model_hdf5(model, "electromyography.hdf5")
# Alternatively, save model as SavedModel.
save_model_tf(model, "electromyography_tf")
```
We can load a previously saved model with:
```
# Load model.
model <- load_model_hdf5("electromyography.hdf5")
# Or alternatively if the model is in SavedModel format.
model <- load_model_tf("electromyography_tf")
```
The source code files include the trained models used in this book in case you want to reproduce the results. Both the `hdf5` and `SavedModel` versions are included.
Due to some version incompatibilities with the underlying h5py library, you may get the following error when trying to load the `hdf5` files: `AttributeError: 'str' object has no attribute 'decode'`. If you encounter this error, load the models in `SavedModel` format using the `load_model_tf()` method instead.
Figure [8\.20](deeplearning.html#fig:nnEMGloss) shows the train and validation loss and accuracy as produced by `plot(history)`. We see that both the training and validation loss decrease over time, while the accuracy increases.
FIGURE 8\.20: Loss and accuracy of the electromyography model.
Now, we evaluate the performance of the trained model with the test set using the `evaluate()` function.
```
# Evaluate model.
model %>% evaluate(testset$x, testset$y)
#> loss accuracy
#> 0.4045424 0.8474576
```
The accuracy was pretty decent (\\(\\approx 84\\%\\)). To get the actual class predictions you can use the `predict_classes()` function.
```
# Predict classes.
classes <- model %>% predict_classes(testset$x)
head(classes)
#> [1] 2 2 1 3 0 1
```
Note that this function returns the classes with numbers starting with \\(0\\) just as in the original dataset.
Sometimes it is useful to access the actual predicted scores for each class. This can be done with the `predict_on_batch()` function.
```
# Make predictions on the test set.
predictions <- model %>% predict_on_batch(testset$x)
head(predictions)
#> [,1] [,2] [,3] [,4]
#> [1,] 1.957638e-05 8.726048e-02 7.708290e-01 1.418910e-01
#> [2,] 3.937355e-05 2.571992e-04 9.965665e-01 3.136863e-03
#> [3,] 4.261451e-03 7.343097e-01 7.226156e-02 1.891673e-01
#> [4,] 8.669784e-06 2.088269e-04 1.339851e-01 8.657974e-01
#> [5,] 9.999956e-01 7.354113e-26 1.299388e-08 4.451362e-06
#> [6,] 2.513005e-05 9.914154e-01 7.252949e-03 1.306421e-03
```
To obtain the actual classes from the scores, we can compute the index of the column with the maximum value. Then we subtract \\(1\\) so the classes start at \\(0\\).
```
classes <- max.col(predictions) - 1
head(classes)
#> [1] 2 2 1 3 0 1
```
Since the true classes are also one\-hot encoded, we need to do the same to get the ground truth.
```
groundTruth <- max.col(testset$y) - 1
# Compute accuracy.
sum(classes == groundTruth) / length(classes)
#> [1] 0.8474576
```
The integers are mapped to class strings. Then, a confusion matrix is generated.
```
# Convert classes to strings.
# Class mapping by index: rock 0, scissors 1, paper 2, ok 3.
mapping <- c("rock", "scissors", "paper", "ok")
# Need to add 1 because indices in R start at 1.
str.predictions <- mapping[classes+1]
str.groundTruth <- mapping[groundTruth+1]
library(caret)
cm <- confusionMatrix(as.factor(str.predictions),
as.factor(str.groundTruth))
cm$table
#> Reference
#> Prediction ok paper rock scissors
#> ok 681 118 24 27
#> paper 54 681 47 12
#> rock 29 18 771 1
#> scissors 134 68 8 867
```
Now, try to modify the network by making it deeper (adding more layers) and fine\-tune the hyperparameters like the learning rate, batch size, etc., to increase the performance.
8\.4 Overfitting
----------------
One important thing to look at when training a network is **overfitting**. That is, when the model memorizes instead of learning (see chapter [1](intro.html#intro)). Overfitting means that the model becomes very specialized at mapping inputs to outputs from the *train set* but fails to do so with new *test samples*. One reason is that a model can become so complex, with so many parameters, that it adapts perfectly to its training data but misses more general patterns, preventing it from performing well on unseen instances. To diagnose this, one can plot loss/accuracy curves during training epochs.
FIGURE 8\.21: Loss and accuracy curves.
In Figure [8\.21](deeplearning.html#fig:lossAccuracy) we can see that after some epochs the *validation loss* starts to increase even though the *train loss* is still decreasing. This is because the model is getting better at reducing the error on the train set but its performance starts to decrease when presented with new instances. A similar effect can be observed with the accuracy: the model keeps improving its performance on the train set but at some point, the accuracy on the validation set starts to decrease. Usually, one stops the training before overfitting starts to occur. In the following sections, I will introduce two common techniques to combat overfitting in neural networks.
### 8\.4\.1 Early Stopping
`keras_electromyography_earlystopping.R`
Neural networks are trained for several epochs using gradient descent. But the question is: *for how many epochs?* As can be seen in Figure [8\.21](deeplearning.html#fig:lossAccuracy), too many epochs can lead to overfitting and too few can cause underfitting. *Early stopping* is a simple but effective method to reduce the risk of overfitting. The method consists of setting a large number of epochs and stopping the updates to the network’s parameters when a condition is met. For example, one condition can be to stop when there is no performance improvement on the validation set after \\(n\\) epochs, or when there is a decrease of some percent in accuracy.
Keras provides some mechanisms to implement early stopping and this is accomplished via **callbacks**. A callback is a function that is run at different stages during training such as at the beginning or end of an epoch or at the beginning or end of a batch operation. Callbacks are passed as a list to the `fit()` function. You can define custom callbacks or use some of the built\-in ones including `callback_early_stopping()`. This callback will cause the training to stop when a metric stops improving. The metric can be *accuracy*, *loss*, etc. The following callback will stop the training if after \\(10\\) epochs (`patience`) there is no improvement of at least \\(1\\%\\) (`min_delta`) in accuracy on the validation set.
```
callback_early_stopping(monitor = "val_acc",
min_delta = 0.01,
patience = 10,
verbose = 1,
mode = "max")
```
The `min_delta` parameter specifies the minimum change in the monitored metric to qualify as an improvement. The `mode` specifies the direction of improvement: if it is set to `"min"`, training stops when the monitored metric has stopped decreasing; if it is set to `"max"`, training stops when the monitored metric has stopped increasing.
It may be the case that the best validation performance was achieved not in the last epoch but at some previous point. By setting the `restore_best_weights` parameter to `TRUE`, the model weights from the epoch with the best value of the monitored metric will be restored.
The script `keras_electromyography_earlystopping.R` shows how to use the early stopping callback in Keras with the electromyography dataset. The following code is an extract that shows how to define the callback and pass it to the `fit()` function.
```
# Define early stopping callback.
my_callback <- callback_early_stopping(monitor = "val_acc",
min_delta = 0.01,
patience = 50,
verbose = 1,
mode = "max",
restore_best_weights = TRUE)
history <- model %>% fit(
trainset$x, trainset$y,
epochs = 500,
batch_size = 8,
validation_data = list(valset$x, valset$y),
verbose = 1,
view_metrics = TRUE,
callbacks = list(my_callback)
)
```
This code will cause the training to stop if, after \\(50\\) epochs, there is no improvement in accuracy of at least \\(1\\%\\), and it will restore the model’s weights to the ones from the epoch with the highest validation accuracy. Figure [8\.22](deeplearning.html#fig:earlyStopping) shows how the training stopped at epoch \\(241\\).
FIGURE 8\.22: Early stopping example.
If we evaluate the final model on the test set, we see that the accuracy is \\(86\.4\\%\\), a noticeable increase compared to the \\(84\.7\\%\\) that we got when training for \\(300\\) epochs without early stopping.
```
# Evaluate model.
model %>% evaluate(testset$x, testset$y)
#> $loss
#> [1] 0.3777530
#> $acc
#> [1] 0.8641243
```
### 8\.4\.2 Dropout
Dropout is another technique to reduce overfitting proposed by Srivastava et al. ([2014](#ref-srivastava14)). It consists of ‘dropping’ some of the units from a hidden layer for each sample during training. In theory, it can also be applied to input and output layers but that is not very common. The incoming and outgoing connections of a dropped unit are discarded. Figure [8\.23](deeplearning.html#fig:imgDropout) shows an example of applying dropout to a network. In Figure [8\.23](deeplearning.html#fig:imgDropout) b, the middle unit was removed from the network whereas in Figure [8\.23](deeplearning.html#fig:imgDropout) c, the top and bottom units were removed.
FIGURE 8\.23: Dropout example.
Each unit has an associated probability \\(p\\) (independent of other units) of being dropped. This probability is another hyperparameter, but typically it is set to \\(0\.5\\). Thus, during each iteration and for each sample, about half of the units are discarded. The effect of this is simpler networks (see Figure [8\.23](deeplearning.html#fig:imgDropout)) that are thus less prone to overfitting. Intuitively, you can also think of dropout as training an **ensemble of neural networks**, each having a slightly different structure.
From the perspective of one unit that receives inputs from the previous hidden layer with dropout, approximately half of its incoming connections will be gone (if \\(p\=0\.5\\)). See Figure [8\.24](deeplearning.html#fig:dropoutUnit).
FIGURE 8\.24: Incoming connections to one unit when the previous layer has dropout.
Dropout prevents units from relying on any single incoming connection. This makes the whole network able to compensate for the missing connections by learning alternative paths. In practice and for many applications, this results in a more robust model. A side effect of applying dropout is that the expected value of the activation function of a unit will be diminished because some of the previous activations will be \\(0\\). Recall that the output of a neuron is computed as:
\\\[\\begin{equation}
f(\\boldsymbol{x}) \= g(\\boldsymbol{w} \\cdot \\boldsymbol{x} \+ b)
\\end{equation}\\]
where \\(\\boldsymbol{x}\\) contains the input values from the previous layer, \\(\\boldsymbol{w}\\) the corresponding weights and \\(g()\\) is the activation function. With dropout, approximately half of the values of \\(\\boldsymbol{x}\\) will be \\(0\\) (if \\(p\=0\.5\\)). To compensate for that, the input values need to be scaled, in this case, by a factor of \\(2\\).
\\\[\\begin{equation}
f(\\boldsymbol{x}) \= g(\\boldsymbol{w} \\cdot 2 \\boldsymbol{x} \+ b)
\\end{equation}\\]
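A quick simulation makes the compensation concrete. The following sketch (with made\-up weights and activations) shows that scaling the kept activations by \\(1/(1\-p)\=2\\) preserves the expected weighted sum.
```
# Sketch: with p = 0.5, each activation survives with probability 0.5,
# halving the expected weighted sum; scaling the kept values by 2 compensates.
set.seed(1234)
x <- c(0.2, -0.5, 0.9, 0.3)   # activations from the previous layer (made up)
w <- c(0.1,  0.4, -0.2, 0.7)  # incoming weights (made up)
sums <- replicate(10000, {
  mask <- rbinom(length(x), 1, 0.5)  # 1 = kept, 0 = dropped
  sum(w * (2 * mask * x))            # dropout with scaling
})
mean(sums)   # approximately equal to...
sum(w * x)   # ...the weighted sum without dropout
```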
In modern implementations, this scaling is done during training so that at inference time there is no need to apply dropout; predictions are made as usual. In Keras, `layer_dropout()` can be used to add dropout to any layer. Its `rate` parameter is a float between \\(0\\) and \\(1\\) that specifies the fraction of units to drop. The following code snippet builds a neural network with \\(2\\) hidden layers and applies dropout with a rate of \\(0\.5\\) to both of them.
```
model <- keras_model_sequential()
model %>%
layer_dense(units = 256, activation = 'relu', input_shape = 1000) %>%
layer_dropout(0.5) %>%
layer_dense(units = 128, activation = 'relu') %>%
layer_dropout(0.5) %>%
layer_dense(units = 2, activation = 'softmax')
```
It is very common to apply dropout to networks in computer vision because the inputs are images or videos containing a lot of input values (pixels), while the number of samples is often very limited, which causes overfitting. Convolutional Neural Networks (CNNs), which are suitable for computer vision problems, will be introduced in section [8\.6](deeplearning.html#cnns). In the corresponding smile detection example (section [8\.8](deeplearning.html#cnnSmile)), we will use dropout. When building CNNs, dropout is almost always added to the different layers.
8\.5 Fine\-tuning a Neural Network
----------------------------------
When deciding on a neural network’s architecture, no formula will tell you how many hidden layers or how many units per layer you should have. There is also no formula for determining the batch size, the learning rate, the type of activation function, how many epochs to train for, and so on. All of those are called the **hyperparameters** of the network. Hyperparameter tuning is a complex optimization problem and there is a lot of research going on that tackles the issue from different angles. My suggestion is to start with a simple architecture that has been used before to solve a similar problem and then fine\-tune it for your specific task. If you are not aware of such a network, there are some guidelines (described below) to get you started. Always keep in mind that those are only recommendations, so you do not need to abide by them, and you should feel free to try configurations that deviate from them depending on your problem at hand.
Training neural networks is a time\-consuming process, especially in deep networks. Training a network can take from several minutes to weeks. In many cases, performing cross\-validation is not feasible. A common practice is to divide the data into train/validation/test sets. The training data is used to train a network with a given architecture and a set of hyperparameters. The validation set is used to evaluate the generalization performance of the network. Then, you can try different architectures and hyperparameters and evaluate the performance again and again with the validation set. Typically, the network’s performance is monitored during training epochs by plotting the loss and accuracy of the train and validation sets. Once you are happy with your model, you test its performance on the test set **only once** and that is the result that is reported.
Here are some guidelines to get you started. Take into consideration, however, that hyperparameters can depend on each other: if you modify one, it may impact others.
**Number of hidden layers.**
Most of the time, one or two hidden layers are enough to solve problems that are not too complex. A common recommendation is to start with one hidden layer and, if that one is not enough to capture the complexity of the problem, add another layer, and so on.
**Number of units.**
If a network has too few units it can underfit; that is, the model will be too simple to capture the underlying data patterns. If the network has too many units, this can result in overfitting, and it will also take more time to learn the parameters. Some guidelines mention that the number of units should be somewhere between the number of input features and the number of units in the output layer[25](#fn25). Guang\-Bin Huang ([2003](#ref-huang2003)) has even proposed a formula for the two\-hidden\-layer case to calculate the number of units that are enough to learn \\(N\\) samples: \\(2\\sqrt{(m\+2\)N}\\), where \\(m\\) is the number of output units.
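As a rough illustration of the formula (the sample size and number of outputs below are made up):
```
# Huang's (2003) estimate of sufficient hidden units for the
# two-hidden-layer case: 2 * sqrt((m + 2) * N).
suff_units <- function(N, m) 2 * sqrt((m + 2) * N)
suff_units(N = 1000, m = 2)  # 1000 training samples, 2 output units
#> [1] 126.4911
```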
My suggestion is to first gain some practice and intuition with simple problems. A good way to do so is with the TensorFlow playground (<https://playground.tensorflow.org/>) created by Daniel Smilkov and Shan Carter. This is a web\-based implementation of a neural network that you can fine\-tune to solve a predefined set of classification and regression problems. For example, Figure [8\.25](deeplearning.html#fig:playground) shows how I tried to solve the XOR problem with a neural network with \\(1\\) hidden layer and \\(1\\) unit with a sigmoid activation function. After more than \\(1,000\\) epochs the loss is still quite high (\\(0\.38\\)). Try to add more neurons and/or hidden layers and see if you can solve the XOR problem with fewer epochs.
FIGURE 8\.25: Screenshot of the TensorFlow playground. (Daniel Smilkov and Shan Carter, <https://github.com/tensorflow/playground> (Apache License 2\.0\)).
**Batch size.**
Batch sizes typically range between \\(4\\) and \\(512\\). Big batch sizes provide a better estimate of the gradient but are more computationally expensive. On the other hand, small batch sizes are faster to compute but introduce more noise in the gradient estimation, requiring more epochs to converge. When using a GPU or other specialized hardware, the computations can be performed in parallel, allowing bigger batch sizes to be computed in a reasonable time. Some people argue that the noise introduced with small batch sizes is good for escaping from local minima. Keskar et al. ([2016](#ref-keskar2016)) showed that in practice, big batch sizes can result in degraded models. A good starting point is \\(32\\), which is the default in Keras.
**Learning rate.**
This is one of the most important hyperparameters. The learning rate specifies how fast gradient descent ‘moves’ when trying to find an optimal minimum. However, this doesn’t mean that the algorithm will *learn* faster if the learning rate is set to a high value. If it is too high, the loss can start oscillating. If it is too low, the learning will take a lot of time. One way to fine\-tune it is to start with the default one. In Keras, the default learning rate for stochastic gradient descent is \\(0\.01\\). Then, based on the loss plot across epochs, you can decrease/increase it. If learning is taking too long, try increasing it. If the loss seems to be oscillating or stuck, try reducing it. Typical values are \\(0\.1\\), \\(0\.01\\), \\(0\.001\\), \\(0\.0001\\), \\(0\.00001\\). In addition to stochastic gradient descent, Keras provides implementations of other optimizers[26](#fn26), like Adam[27](#fn27), which have adaptive learning rates, but still, one needs to specify an initial one.
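The initial learning rate is set when compiling the model. A minimal sketch, assuming `model` is an already defined network:
```
# Sketch: setting the initial learning rate of the optimizer.
# optimizer_sgd() and optimizer_adam() are built-in Keras optimizers.
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_sgd(lr = 0.01), # try 0.1, 0.001, ... if stuck
  metrics = c('accuracy')
)
```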
Before training a network, it is a good practice to shuffle the rows of the train set if the data points are independent. Neural networks tend to ‘forget’ patterns learned from previous points during training as the weights are updated. For example, if the train set happens to be ordered by class labels, the network may ‘forget’ how to identify the first classes and will put more emphasis on the last ones.
It is also a good practice to normalize the input features before training a network. This will make the training process more efficient.
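Both practices take only a couple of lines. A minimal sketch, assuming a data frame `trainset` whose first column is the class label (the names are assumptions):
```
# Sketch: shuffle the rows and min-max normalize the feature columns.
set.seed(1234)
trainset <- trainset[sample(nrow(trainset)), ]          # shuffle rows
normalize <- function(x) (x - min(x)) / (max(x) - min(x))
trainset[, -1] <- lapply(trainset[, -1], normalize)     # skip label column
```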
8\.6 Convolutional Neural Networks
----------------------------------
Convolutional Neural Networks, or CNNs for short, have become extremely popular due to their capacity to solve computer vision problems. Most of the time they are used for image classification tasks, but they can also be used for regression and for time series data. If we wanted to perform image classification with a traditional neural network, we would first need to build a feature vector by either:
1. extracting features from the image or,
2. flattening the image pixels into a 1D array.
The first solution requires a lot of image processing expertise and domain knowledge. Extracting features from images is not a trivial task and requires a lot of preprocessing to reduce noise and artifacts, segment the objects of interest, remove the background, etc. Additionally, considerable effort is spent on feature engineering. The drawback of the second solution is that spatial information is lost, that is, the relationship between neighboring pixels. CNNs solve both problems by automatically extracting features while preserving spatial information. As opposed to traditional networks, CNNs can take as input \\(n\\)\-dimensional images and process them efficiently. The main building blocks of a CNN are:
1. **Convolution layers**
2. **Pooling operations**
3. **Traditional fully connected layers**
Figure [8\.26](deeplearning.html#fig:cnnArchitecture) shows a simple CNN and its basic components. First, the input image goes through a convolution layer with \\(4\\) kernels (details about the convolution operation are described in the next subsection). This layer is in charge of extracting features by applying the kernels on top of the image. The result of this operation is a convolved image, also known as **feature maps**. The number of feature maps is equal to the number of kernels, in this case, \\(4\\). Then, a **pooling operation** is applied on top of the feature maps. This operation reduces the size of the feature maps by downsampling them (details on this in a following subsection). The output of the pooling operation is a set of feature maps of reduced size. Here, the outputs are \\(4\\) reduced feature maps since the pooling operation is applied to each feature map independently of the others. Then, the feature maps are flattened into a one\-dimensional array. Conceptually, this array represents all the features extracted from the previous steps. These features are then used as inputs to a neural network with its respective input, hidden, and output layers. An ‘\*’ and underlined text indicate that parameter learning occurs in that layer. For example, in the convolution layer, the parameters of the kernels need to be learned. On the other hand, the pooling operation does not require parameter learning since it is a fixed operation. Finally, the parameters of the neural network are learned too, including the hidden layers and the output layer.
FIGURE 8\.26: Simple CNN architecture. An ‘\*’ indicates that parameter learning occurs.
One can build more complex CNNs by stacking more convolution layers and pooling operations. By doing so, the level of abstraction increases. For example, the first convolution extracts simple features like horizontal, vertical, and diagonal lines. The next convolution could extract more complex features like squares, triangles, and so on. The parameter learning of all layers (including the convolution layers) occurs during the same forward and backpropagation step, just as with a normal neural network. Both the features and the classification task are learned at the same time! During learning, batches of images are forward propagated and the parameters are adjusted accordingly to minimize the error (for example, the average cross\-entropy for classification). The same methods used for training normal neural networks, for example, stochastic gradient descent, are used for CNNs.
Each kernel in a convolution layer can have an associated bias which is also a parameter to be learned. By default, Keras uses a bias for each kernel. Furthermore, an activation function can be applied to the outputs of the convolution layer. This is applied element\-wise. ReLU is the most common one.
At inference time, the convolution layers and pooling operations act as feature extractors by generating feature maps that are ultimately flattened and passed to a normal neural network. It is also common to use the first layers as feature extractors and then replace the neural network with another model (Random Forest, SVM, etc.). In the following sections, details about the convolution and pooling operations are presented.
### 8\.6\.1 Convolutions
Convolutions are used to automatically extract feature maps from images. A convolution operation consists of a **kernel** also known as a **filter** which is a matrix with real values. Kernels are usually much smaller than the original image. For example, for a grayscale image of height and width of \\(100\\)x\\(100\\) a typical kernel size would be \\(3\\)x\\(3\\). The size of the kernel is a hyperparameter. The convolution operation consists of applying the kernel over the image starting at the upper left corner and moving forward row by row until reaching the bottom right corner. The **stride** controls how many elements the kernel is moved at a time and this is also a hyperparameter. A typical value for the stride is \\(1\\).
The convolution operation computes the sum of the element\-wise product between the kernel and the image region it is covering. The output of this operation is used to generate the convolved image (feature map). Figure [8\.27](deeplearning.html#fig:cnnConv) shows the first two iterations and the final iteration of the convolution operation on an image. In this case, the kernel is a \\(3\\)x\\(3\\) matrix with \\(1\\)s in its first row and \\(0\\)s elsewhere. The original image has a size of \\(5\\)x\\(5\\)x\\(1\\) (height, width, depth) and it seems to be a number \\(7\\).
FIGURE 8\.27: Convolution operation with a kernel of size 3x3 and stride\=1\. Iterations 1, 2, and 9\.
In the first iteration, the kernel is aligned with the upper left corner of the original image. An element\-wise multiplication is performed and the results are summed. The operation is shown at the top of the figure. In the first iteration, the result was \\(3\\) and it is set at the corresponding position of the final convolved image (feature map). In the next iteration, the kernel is moved one position to the right and again, the final result is \\(3\\) which is set in the next position of the convolved image. The process continues until the kernel reaches the bottom right corner. At the last iteration (9\), the result is \\(1\\).
Now, the convolved image (feature map) represents the features extracted by this particular kernel. Also, note that the feature map is a \\(3\\)x\\(3\\) matrix which is smaller than the original image. It is also possible to force the feature map to have the same size as the original image by padding it with zeros.
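The operation is simple enough to reproduce from scratch. The following sketch mirrors Figure [8\.27](deeplearning.html#fig:cnnConv) with a \\(3\\)x\\(3\\) kernel of \\(1\\)s in its first row; the image values are made up, so the resulting numbers will differ from those in the figure.
```
# Sketch: convolution (no padding, stride 1) of a 5x5 image with a 3x3 kernel.
img <- matrix(c(0, 1, 1, 1, 0,
                0, 0, 0, 1, 0,
                0, 0, 1, 0, 0,
                0, 0, 1, 0, 0,
                0, 0, 1, 0, 0), nrow = 5, byrow = TRUE) # a rough '7'
kernel <- matrix(c(1, 1, 1,
                   0, 0, 0,
                   0, 0, 0), nrow = 3, byrow = TRUE)    # horizontal-line detector
fmap <- matrix(0, 3, 3)  # the resulting feature map
for (i in 1:3) {
  for (j in 1:3) {
    region <- img[i:(i + 2), j:(j + 2)]  # image patch under the kernel
    fmap[i, j] <- sum(region * kernel)   # element-wise product, then sum
  }
}
fmap
```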
Before learning starts, the kernel values are initialized at random. In this example, the kernel has \\(1\\)s in the first row and \\(3\\)x\\(3\=9\\) parameters in total. The same kernel is applied to the entire image, which is what makes CNNs so efficient. This is known as ‘parameter sharing’. Our kernel has \\(1\\)s at the top and \\(0\\)s elsewhere, so it seems that this kernel learned to detect horizontal lines. If we look at the final convolved image, we see that the horizontal lines were emphasized by this kernel. This would be a good candidate kernel to differentiate between \\(7\\)s and \\(0\\)s, for example, since \\(0\\)s do not have long horizontal lines. But it may have difficulties discriminating between \\(7\\)s and \\(5\\)s since both have horizontal lines at the top.
In this example, only \\(1\\) kernel was used, but in practice you may want more kernels, each in charge of identifying the best features for the given problem. For example, another kernel could learn to identify diagonal lines, which would be useful to differentiate between \\(7\\)s and \\(5\\)s. The number of kernels per convolution layer is a hyperparameter. In the previous example, we could have defined \\(4\\) kernels instead of one. In that case, the output of that layer would have been \\(4\\) feature maps of size \\(3\\)x\\(3\\) each (Figure [8\.28](deeplearning.html#fig:cnn4kernels)).
FIGURE 8\.28: A convolution with 4 kernels. The output is 4 feature maps.
What would be the output of a convolution layer with \\(4\\) kernels of size \\(3\\)x\\(3\\) if it is applied to an RGB color image of size \\(5\\)x\\(5\\)x\\(3\\)? In that case, the output will be the same (\\(4\\) feature maps of size \\(3\\)x\\(3\\)) as if the image were in grayscale (\\(5\\)x\\(5\\)x\\(1\\)). Remember that the number of output feature maps is equal to the number of kernels regardless of the depth of the image. However, in this case, each kernel will have a depth of \\(3\\). Each depth slice is applied independently to the corresponding R, G, and B image channels. Thus, each kernel has \\(3\\)x\\(3\\)x\\(3\=27\\) parameters that need to be learned. After applying each kernel to each image channel (in this example, \\(3\\) channels), **the results of the channels are added**, and this is why we end up with one feature map per kernel. The following course website has a nice interactive animation of how convolutions are applied to an image with \\(3\\) channels: [https://cs231n.github.io/convolutional\-networks/](https://cs231n.github.io/convolutional-networks/). In the next section (‘CNNs with Keras’), a couple of examples that demonstrate how to calculate the number of parameters and the outputs’ shape will be presented as well.
### 8\.6\.2 Pooling Operations
Pooling operations are typically applied after convolution layers. Their purpose is to reduce the size of the data and to emphasize important regions. These operations perform a fixed computation on the image and do not have learnable parameters. Similar to kernels, we need to define a window size. Then, this window is moved throughout the image and a computation is performed on the pixels covered by the window. The difference from kernels is that this window is just a guide and does not have parameters to be learned. The most common pooling operation is **max pooling**, which consists of selecting the highest value within the window.
Figure [8\.29](deeplearning.html#fig:cnnMaxPooling) shows an example of a max pooling operation on a \\(4\\)x\\(4\\) image. The window size is \\(2\\)x\\(2\\) and the stride is \\(2\\). The latter means that the window moves \\(2\\) places at a time.
FIGURE 8\.29: Max pooling with a window of size 2x2 and stride \= 2\.
The result of this operation is an image of size \\(2\\)x\\(2\\), half the size of the original one. Aside from max pooling, average pooling can be applied instead. In that case, the mean of all values covered by the window is computed.
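Again, this is easy to reproduce from scratch. A sketch of max pooling with a \\(2\\)x\\(2\\) window and stride \\(2\\) (the input values are made up):
```
# Sketch: 2x2 max pooling with stride 2 on a 4x4 input.
x <- matrix(c(1, 3, 2, 1,
              4, 6, 5, 0,
              7, 2, 9, 8,
              1, 1, 4, 3), nrow = 4, byrow = TRUE)
pooled <- matrix(0, 2, 2)
for (i in 1:2) {
  for (j in 1:2) {
    rows <- (2 * i - 1):(2 * i)         # the window moves 2 places at a time
    cols <- (2 * j - 1):(2 * j)
    pooled[i, j] <- max(x[rows, cols])  # use mean() for average pooling
  }
}
pooled
#>      [,1] [,2]
#> [1,]    6    5
#> [2,]    7    9
```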
8\.7 CNNs with Keras
--------------------
`keras_cnns.R`
Keras provides several functions to define convolution layers and pooling operations. In TensorFlow, image dimensions are specified with the following order: height, width, and depth. In Keras, the `layer_conv_2d()` function is used to add a convolution layer to a sequential model. This function has several arguments but the \\(6\\) most common ones are: `filters`,`kernel_size`,`strides`,`padding`,`activation`, and `input_shape`.
```
# Convolution layer.
layer_conv_2d(filters = 4, # Number of kernels.
kernel_size = c(3,3), # Kernel size.
strides = c(1,1), # Stride.
padding = "same", # Type of padding.
activation = 'relu', # Activation function.
input_shape = c(5,5,1)) # Input image dimensions.
# Only specified in first layer.
```
The `filters` parameter specifies the number of kernels. The `kernel_size` specifies the kernel size (height, width). The `strides` parameter is an integer or a list of \\(2\\) integers specifying the strides of the convolution along the width and height (the default is \\(1\\)). The `padding` parameter can take one of two strings: `"same"` or `"valid"`. If `padding="same"`, the input image is padded with zeros, based on the kernel size and strides, such that the convolved image has the same size as the original one. If `padding="valid"`, no padding is applied. The default is `"valid"`. The `activation` parameter takes a string with the name of the activation function to use. The `input_shape` parameter is required when this layer is the first one and specifies the dimensions of the input image.
To add a max pooling operation you can use the `layer_max_pooling_2d()` function. Its most important parameter is `pool_size`.
```
layer_max_pooling_2d(pool_size = c(2, 2))
```
The `pool_size` specifies the window size (height, width). By default, the strides will be equal to `pool_size`, but if desired, this can be changed with the `strides` parameter. This function also accepts a `padding` parameter similar to the one in `layer_conv_2d()`.
In Keras, if the stride is not specified, it defaults to the window size (`pool_size` parameter).
To illustrate these convolution and pooling operations, I will use two simple examples. The complete code for the two examples can be found in the script `keras_cnns.R`.
### 8\.7\.1 Example 1
Let’s create our first CNN in Keras. For now, this CNN will not be trained but only its architecture will be defined. The objective is to understand the building blocks of the network. In the next section, we will build and train a CNN that detects smiles from image faces.
Our network will consist of **\\(1\\) convolution layer**, **\\(1\\) max pooling layer**, **\\(1\\) fully connected hidden layer**, and **\\(1\\) output layer** as if this were a classification problem. The code to build such a network is shown below and the output of the `summary()` function in Figure [8\.30](deeplearning.html#fig:cnnEx1).
```
library(keras)
model <- keras_model_sequential()
model %>%
layer_conv_2d(filters = 4,
kernel_size = c(3,3),
padding = "valid",
activation = 'relu',
input_shape = c(10,10,1)) %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_flatten() %>%
layer_dense(units = 32, activation = 'relu') %>%
layer_dense(units = 2, activation = 'softmax')
summary(model)
```
FIGURE 8\.30: Output of summary().
The first convolution layer has \\(4\\) kernels of size \\(3\\)x\\(3\\) and a ReLU as the activation function. The padding is set to `"valid"` so no padding will be performed. The input image is of size \\(10\\)x\\(10\\)x\\(1\\) (height, width, depth). Then, we apply max pooling with a window size of \\(2\\)x\\(2\\). Later, the output is flattened and fed into a fully connected layer with \\(32\\) units. Finally, the output layer has \\(2\\) units with a softmax activation function for classification.
From the summary, the output of the first Conv2D layer is (None, 8, 8, 4\). The ‘None’ means that the number of input images is not fixed and depends on the batch size. The next two numbers correspond to the height and width, which are both \\(8\\). This is because the image was not padded, and after applying the convolution operation on the original \\(10\\)x\\(10\\) height and width image, its dimensions are reduced to \\(8\\). The last number (\\(4\\)) is the number of feature maps, which is equal to the number of kernels (`filters=4`). The number of parameters is \\(40\\) (last column). This is because there are \\(4\\) kernels with \\(3\\)x\\(3\=9\\) parameters each, and there is one bias per kernel included by default: \\(4 \\times 3 \\times 3 \+ 4 \= 40\\).
The output of MaxPooling2D is (None, 4, 4, 4\). The height and width are \\(4\\) because the pool size was \\(2\\) and the stride was \\(2\\). This had the effect of reducing to half the height and width of the output of the previous layer. Max pooling preserves the number of feature maps, thus, the last number is \\(4\\) (the number of feature maps from the previous layer). Max pooling does not have any learnable parameters since it applies a fixed operation every time.
Before passing the downsampled feature maps to the next fully connected layer, they need to be **flattened** into a \\(1\\)\-dimensional array. This is done with the `layer_flatten()` function. Its output has a shape of (None, 64\), which corresponds to the \\(4 \\times 4 \\times 4 \=64\\) features of the previous layer. The next fully connected layer has \\(32\\) units, each with a connection to every one of the \\(64\\) input features. Each unit has a bias. Thus, the number of parameters is \\(64 \\times 32 \+ 32 \= 2080\\).
Finally, the output layer has \\(32 \\times 2 \+ 2\=66\\) parameters, and the entire network has \\(2,186\\) parameters! Now you can try to modify the kernel size, the strides, the padding, and the input shape and see how the output dimensions and the number of parameters vary.
### 8\.7\.2 Example 2
Now let’s try another example, but this time the input image will have a depth of \\(3\\) simulating an RGB image.
```
model2 <- keras_model_sequential()
model2 %>%
layer_conv_2d(filters = 16,
kernel_size = c(3,3),
padding = "same",
activation = 'relu',
input_shape = c(28,28,3)) %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_flatten() %>%
layer_dense(units = 64, activation = 'relu') %>%
layer_dense(units = 5, activation = 'softmax')
summary(model2)
```
FIGURE 8\.31: Output of summary().
Figure [8\.31](deeplearning.html#fig:cnnEx2) shows that the output height and width of the first Conv2D layer is \\(28\\) which is the same as the input image size. This is because this time we set `padding = "same"` and the image dimensions were preserved. The \\(16\\) corresponds to the number of feature maps which was set with `filters = 16`.
The total parameter count for this layer is \\(448\\). Each kernel has \\(3 \\times 3 \= 9\\) weights per channel. There are \\(16\\) kernels, and each kernel has a depth of \\(3\\) because the input image is RGB: \\(9 \\times 16\[kernels] \\times 3\[depth] \+ 16\[biases] \= 448\\). Notice that even though each kernel has a depth of \\(3\\), the output number of feature maps of this layer is \\(16\\) and not \\(16 \\times 3 \= 48\\). This is because, as mentioned before, each kernel produces a single feature map regardless of the depth, because the values are summed depth\-wise. The rest of the layers are similar to the previous example.
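As a quick sanity check, both parameter counts reported by `summary()` can be reproduced by hand:
```
# Parameters of a convolution layer:
# kernel height x kernel width x input depth x number of kernels + biases.
3 * 3 * 3 * 16 + 16   # Example 2 (RGB input)
#> [1] 448
3 * 3 * 1 * 4 + 4     # Example 1 (grayscale input)
#> [1] 40
```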
8\.8 Smiles Detection with a CNN
--------------------------------
`keras_smile_detection.R`
In this section, we will build a CNN that detects smiling and non\-smiling faces from pictures from the *SMILES* dataset. This information could be used, for example, to analyze smiling patterns during job interviews, exams, etc. For this task, we will use a cropped ([Sanderson and Lovell 2009](#ref-sanderson2009multi)) version of the Labeled Faces in the Wild (LFW) database ([Gary B. Huang et al. 2008](#ref-huang2008labeled)). A subset of the database was labeled by O. A. Arigbabu et al. ([2016](#ref-arigbabu2016smile)), O. Arigbabu ([2017](#ref-olasimbo)). The labels are provided as two text files, each, containing the list of files that correspond to smiling and non\-smiling faces. The dataset can be downloaded from: <http://conradsanderson.id.au/lfwcrop/> and the labels list from: <https://data.mendeley.com/datasets/yz4v8tb3tp/5>. See Appendix [B](appendixDatasets.html#appendixDatasets) for instructions on how to setup the dataset.
The smiling set has \\(600\\) pictures and the non\-smiling set has \\(603\\) pictures. Figure [8\.32](deeplearning.html#fig:cnnSmileNotSmile) shows an example image from each of the sets.
FIGURE 8\.32: Example of a smiling and a non\-smiling face. (Adapted from the LFWcrop Face Dataset: C. Sanderson, B.C. Lovell. “Multi\-Region Probabilistic Histograms for Robust and Scalable Identity Inference.” *Lecture Notes in Computer Science (LNCS)*, Vol. 5558, pp. 199\-208, 2009\. doi: [https://doi.org/10\.1007/978\-3\-642\-01793\-3\_21](https://doi.org/10.1007/978-3-642-01793-3_21)).
The script `keras_smile_detection.R` has the full code of the analysis. First, we load the list of smiling pictures.
```
datapath <- file.path(datasets_path,"smiles")
smile.list <- read.table(file.path(datapath, "SMILE_list.txt"))
head(smile.list)
#> V1
#> 1 James_Jones_0001.jpg
#> 2 James_Kelly_0009.jpg
#> 3 James_McPherson_0001.jpg
#> 4 James_Watt_0001.jpg
#> 5 Jamie_Carey_0001.jpg
#> 6 Jamie_King_0001.jpg
# Substitute jpg with ppm.
smile.list <- gsub("jpg", "ppm", smile.list$V1)
```
The file SMILE\_list.txt lists the names of the pictures in *jpg* format, but they are actually stored as *ppm* files. Thus, the *jpg* extension is replaced with *ppm* using the `gsub()` function. Since the images are in *ppm* format, we can use the `pixmap` library ([Bivand, Leisch, and Maechler 2011](#ref-pixmap)) to read and plot them. The `print()` function can be used to display the image properties. Here, we see that these are RGB images of \\(64\\)x\\(64\\) pixels.
```
library(pixmap)
# Read a smiling face.
img <- read.pnm(file.path(datapath, "faces", smile.list[10]), cellres = 1)
# Plot the image.
plot(img)
# Print its properties.
print(img)
#> Pixmap image
#> Type : pixmapRGB
#> Size : 64x64
#> Resolution : 1x1
#> Bounding box : 0 0 64 64
```
Then, we load the images into two arrays `smiling.images` and `nonsmiling.images` (code omitted here). If we print the array dimensions we see that there are \\(600\\) smiling images of size \\(64 \\times 64 \\times 3\\).
```
# Print dimensions.
dim(smiling.images)
#> [1] 600 64 64 3
```
If we print the minimum and maximum values we see that they are \\(0\\) and \\(1\\) so there is no need for normalization.
```
max(smiling.images)
#> [1] 1
min(smiling.images)
#> [1] 0
```
The next step is to randomly split the dataset into train and test sets. We will use \\(85\\%\\) of the data for the train set and \\(15\\%\\) for the test set. The `validation_split` parameter of the `fit()` function will be used to set aside a small percentage (\\(10\\%\\)) of the train set as the validation set during training.
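A minimal sketch of such a split; the array name `dataX` and the \\(0/1\\) label vector `labels` are assumptions for illustration, and the actual code is in `keras_smile_detection.R`.
```
# Sketch: random 85%/15% train/test split with one-hot encoded labels.
set.seed(1234)
n <- dim(dataX)[1]                        # total number of images
idx <- sample(n, size = floor(0.85 * n))  # random 85% of the indices
trainX <- dataX[idx, , , , drop = FALSE]
testX  <- dataX[-idx, , , , drop = FALSE]
trainY <- to_categorical(labels[idx], num_classes = 2)
testY  <- to_categorical(labels[-idx], num_classes = 2)
```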
After creating the train and test sets, the train set images and labels are stored in `trainX` and `trainY`, respectively, and the test set data is stored in `testX` and `testY`. The labels in `trainY` and `testY` were one\-hot encoded. Now that the data is in place, let’s build the CNN.
```
model <- keras_model_sequential()
model %>%
layer_conv_2d(filters = 8,
kernel_size = c(3,3),
activation = 'relu',
input_shape = c(64,64,3)) %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_dropout(0.25) %>%
layer_conv_2d(filters = 16,
kernel_size = c(3,3),
activation = 'relu') %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_dropout(0.25) %>%
layer_flatten() %>%
layer_dense(units = 32, activation = 'relu') %>%
layer_dropout(0.5) %>%
layer_dense(units = 2, activation = 'softmax')
```
Our CNN consists of two convolution layers, each followed by a max pooling operation and dropout. The feature maps are then flattened and passed to a fully connected layer with \\(32\\) units, followed by dropout. Since this is a binary classification problem (*‘smile’* vs. *‘non\-smile’*), the output layer has \\(2\\) units with a softmax activation function. Now the model can be compiled and the `fit()` function used to begin the training!
```
# Compile model.
model %>% compile(
loss = 'categorical_crossentropy',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c("accuracy")
)
# Fit model.
history <- model %>% fit(
trainX, trainY,
epochs = 50,
batch_size = 8,
validation_split = 0.10,
verbose = 1,
view_metrics = TRUE
)
```
We are using a stochastic gradient descent optimizer with a learning rate of \\(0\.01\\) and cross\-entropy as the loss function. We can use \\(10\\%\\) of the train set as the validation set by setting `validation_split = 0.10`. Once the training is done, we can plot the *loss* and *accuracy* of each epoch.
```
plot(history)
```
FIGURE 8\.33: Train/test loss and accuracy.
After epoch \\(25\\) (see Figure [8\.33](deeplearning.html#fig:cnnSmilesLoss)), it looks like the training loss is decreasing faster than the validation loss. After epoch \\(40\\), it seems that the model starts to overfit (the validation loss increases a bit). If we look at the validation accuracy, it seems to flatten out after epoch \\(30\\). Now we evaluate the model on the test set:
```
# Evaluate model on test set.
model %>% evaluate(testX, testY)
#> $loss
#> [1] 0.1862139
#> $acc
#> [1] 0.9222222
```
An accuracy of \\(92\\%\\) is pretty decent if we take into account that we didn’t have to do any image preprocessing or feature extraction! We can print the predictions of the first \\(16\\) test images (see Figure [8\.34](deeplearning.html#fig:cnnSmileResults)).
FIGURE 8\.34: Predictions of the first \\(16\\) test set images. Correct predictions are in green and incorrect ones in red. (Adapted from the LFWcrop Face Dataset: C. Sanderson, B.C. Lovell. “Multi\-Region Probabilistic Histograms for Robust and Scalable Identity Inference.” *Lecture Notes in Computer Science (LNCS)*, Vol. 5558, pp. 199\-208, 2009\. doi: [https://doi.org/10\.1007/978\-3\-642\-01793\-3\_21](https://doi.org/10.1007/978-3-642-01793-3_21)).
From those \\(16\\), all but one were correctly classified. The correct ones are shown in green and the incorrect one in red. Some faces appear to be smiling (last row, third image) even though the mouth is closed. It seems that this CNN classifies images as *‘smiling’* only when the mouth is open, which may be how the train labels were defined.
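The plotting code for Figure [8\.34](deeplearning.html#fig:cnnSmileResults) is in the script, but the class predictions themselves can be obtained along these lines (a sketch):
```
# Sketch: per-class probabilities and predicted class for the test images.
probs <- model %>% predict(testX)  # n x 2 matrix of class probabilities
preds <- max.col(probs) - 1        # 0-based index of the most likely class
head(preds, 16)
```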
8\.9 Summary
------------
**Deep learning (DL)** consists of a set of different architectures and algorithms. As of now, it mainly focuses on artificial neural networks (ANNs). This chapter introduced two main types of DL models (ANNs and CNNs) and their application to behavior analysis.
* Artificial neural networks (ANNs) are mathematical models inspired by the brain. But that does not mean they work the same as the brain.
* The **perceptron** is one of the simplest ANNs.
* ANNs consist of an input layer, hidden layer(s) and an output layer.
* Deep networks have many hidden layers.
* **Gradient descent** can be used to learn the parameters of a network.
* Overfitting is a recurring problem in ANNs. Some methods like **dropout** and **early stopping** can be used to reduce the effect of overfitting.
* A Convolutional Neural Network (CNN) is a type of ANN that can process \\(N\\)\-dimensional arrays very efficiently. They are used mainly for computer vision tasks.
* CNNs consist of **convolution** and **pooling** layers.
Chapter 9 Multi\-user Validation
================================
Every person is different. We all have different physical and mental characteristics. Every person reacts differently to the same stimulus and conducts physical and motor activities in particular ways. As we have seen, predictive models rely on the training data; and for user\-oriented applications, this data encodes their behaviors. When building predictive models, we want them to be general and to perform accurately on new unseen instances. Sometimes this generalization capability comes at a price, especially in **multi\-user settings**. A multi\-user setting is one in which the results depend heavily on the **target user**, that is, the user on whom the predictions are made. Take, for example, a hand gesture recognition system. At inference time, a specific person (the target user) performs a gesture and the system should recognize it. The input data comes directly from the user. On the other hand, a **non\-multi\-user** system does not depend on a particular person. A classifier that labels fruits in images or a regression model that predicts house prices does not depend directly on a particular person.
Some time ago I had to build an activity recognition system based on inertial data from a wrist band. So I collected the data, trained the models, and evaluated them. The performance results were good. However, it turned out that when the system was tested on a new sample group, it failed. The reason? The training data was collected from people within a particular age group (young), but the target market of the product was much older people. Older people tend to walk more slowly, so the system predicted *‘no movement’* when, in fact, the person was walking at a very slow pace. This is an extreme example, but even within the same age group there can be differences between users (*inter\-user variance*). Even the same user can evolve over time and change her/his behaviors (*intra\-user variance*).
So, how do we evaluate multi\-user systems to reduce the unexpected effects once the system is deployed? Most of the time, there will be surprises when testing a system on new users. Nevertheless, in this chapter I will present three types of models that will help you reduce that uncertainty to some extent, so you will have a better idea of how the system will behave under more realistic conditions. The models are: **mixed models**, **user\-independent models**, and **user\-dependent models**. I will present how to train each type of model using a database with actions recorded with a motion capture system. After that, I will also show you how to build **adaptive models** with the objective of increasing the prediction performance for a particular user.
9\.1 Mixed Models
-----------------
Mixed models are trained and validated in the ordinary way, without considering the mapping between data points and users. Suppose we have a dataset as shown in Figure [9\.1](multiuser.html#fig:tblMixModel). The first column is the user id, the second column is the label we want to predict, and the last two columns are two arbitrary features.
FIGURE 9\.1: Example dataset with a binary label and 2 features.
With a mixed model, we would just remove the *userid* column and perform \\(k\\)\-fold cross\-validation or hold\-out validation as usual. In fact, this is what we have been doing so far. By doing so, some random data points will end up in the train set and others in the test set regardless of which data point was generated by which user. The user rows are just *mixed*, thus the *mixed model* name. This model assumes that the data was generated by a single user. One disadvantage of validating a system using a mixed model is that the performance results could be overestimated. When randomly splitting into train and test sets, some data points for a given user could end up in each of the splits. At inference time, when presenting a test sample belonging to a particular user, it is likely that the training set of the model already included some data from that particular user. Thus, the model already knows a little bit about that user, so we can expect an accurate prediction. However, this assumption does not always hold. If the model is to be used on a **new user** that the model has never seen before, then it may not produce very accurate predictions.
**When should a mixed model be used to validate a system?**
1. When you know you will have available train data belonging to the intended target users.
2. In many cases, a dataset lacks information about the mapping between rows and users; that is, a *userid* column is not present. In those cases, the best performance estimation would be through the use of a mixed model.
To demonstrate the differences between the three types of models (mixed, user\-independent, and user\-dependent) I will use the *SKELETON ACTIONS* dataset. First, a brief description of the dataset is presented including details about how the features were extracted. Then, the dataset is used to train a mixed model and in the following subsections, it is used to train user\-independent and user\-dependent models.
### 9\.1\.1 Skeleton Action Recognition with Mixed Models
`preprocess_skeleton_actions.R` `classify_skeleton_actions.R`
To demonstrate the three different types of models I chose the **UTD\-MHAD dataset** ([Chen, Jafari, and Kehtarnavaz 2015](#ref-chen2015utd)) and from now on, I will refer to it as the *SKELETON ACTIONS* dataset. This database is suitable because it was collected by \\(8\\) persons (\\(4\\) females/\\(4\\) males) and each file has a subject id, thus, we know which actions were collected by which users. There are \\(27\\) actions including: *‘right\-hand wave’*, *‘two hand front clap’*, *‘basketball shoot’*, *‘front boxing’*, etc.
The data was recorded using a Kinect camera and an inertial sensor unit. Each subject repeated each of the \\(27\\) actions \\(4\\) times. More information about the collection process and pictures is available in the original dataset website [https://personal.utdallas.edu/\~kehtar/UTD\-MHAD.html](https://personal.utdallas.edu/~kehtar/UTD-MHAD.html).
For our examples, I only consider the *skeleton data* generated by the Kinect camera. These data consist of \\(20\\) human body joints. Each file contains one action for one user and one repetition. The file names are of the form: `aA_sS_tT_skeleton.mat`. The `A` is the action id, the `S` is the subject id, and the `T` is the trial (repetition) number. For each time frame, the \\(3\\)D positions of the \\(20\\) joints are recorded.
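As an illustrative sketch (not part of the original scripts), the three ids could be parsed from a file name with a regular expression:

```
# Parse the action, subject, and trial ids from a file name.
fname <- "a7_s1_t1_skeleton.mat"
parts <- regmatches(fname, regexec("^a(\\d+)_s(\\d+)_t(\\d+)_", fname))[[1]]
action <- as.integer(parts[2])  # 7
subject <- as.integer(parts[3]) # 1
trial <- as.integer(parts[4])   # 1
```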
The script `preprocess_skeleton_actions.R` shows how to read the files and plot the actions. The files are stored in Matlab format. The library `R.matlab` ([Bengtsson 2018](#ref-rmatlab)) can be used to read the files.
```
# Path to one of the files.
filepath <- "/skeleton_actions/a7_s1_t1_skeleton.mat"
# Read skeleton file.
df <- readMat(filepath)$d.skel
# Print dimensions.
dim(df)
#> [1] 20 3 66
```
From the file name, we see that this corresponds to action \\(7\\) (basketball shoot), from subject \\(1\\) and trial \\(1\\). The `readMat()` function reads the file contents and stores them as a \\(3\\)D array in `df`. If we print the dimensions, we see that the first one corresponds to the number of joints, the second one to the positions (*x*, *y*, *z*), and the last dimension is the number of frames, in this case \\(66\\) frames.
We extract the first time\-frame as follows:
```
# Select the first frame.
frame <- data.frame(df[, , 1])
# Print dimensions.
dim(frame)
#> [1] 20 3
```
Each frame can then be plotted. The plotting code is included in the script. Figure [9\.2](multiuser.html#fig:sklBasket) shows what the skeleton looks like for six of the time frames. The script also has code to animate the actions.
FIGURE 9\.2: Skeleton of basketball shoot action. Six frames sampled from the entire sequence.
We will represent each action (file) as a feature vector. The same script also shows the code to extract the feature vectors from each action. To extract the features, a reference point in the skeleton is selected, in this case the spine (joint \\(3\\)). Then, for each time frame, the distance between each joint (excluding the reference point) and the reference point is calculated. Finally, for each distance, the *mean*, *min*, and *max* are computed across all time frames. Since there are \\(19\\) joints (excluding the spine), we end up with \\(19\*3\=57\\) features. Figure [9\.3](multiuser.html#fig:sklFeatures) shows what the final dataset looks like. It only shows the first four features out of the \\(57\\), plus the user id and the labels.
FIGURE 9\.3: First rows of the skeleton dataset after feature extraction showing the first 4 features. Source: Original data from C. Chen, R. Jafari, and N. Kehtarnavaz, “UTD\-MHAD: A Multimodal Dataset for Human Action Recognition Utilizing a Depth Camera and a Wearable Inertial Sensor”, *Proceedings of IEEE International Conference on Image Processing*, Canada, September 2015\.
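The extraction procedure just described can be summarized with the following simplified sketch (my own version, assuming `df` is the \\(20 \\times 3 \\times T\\) array returned by `readMat()`, with \\(T\\) time frames; the actual code is in `preprocess_skeleton_actions.R`):

```
# Distances of the 19 joints to the spine, summarized across frames.
extract_features <- function(df) {
  nframes <- dim(df)[3]
  ref <- 3 # Spine joint used as the reference point.
  # 19 x nframes matrix of distances to the spine.
  dists <- sapply(1:nframes, function(t) {
    frame <- df[, , t]
    sqrt(rowSums((frame[-ref, ] -
                    matrix(frame[ref, ], 19, 3, byrow = TRUE))^2))
  })
  # mean, min, and max of each distance: 19 * 3 = 57 features.
  c(apply(dists, 1, mean), apply(dists, 1, min), apply(dists, 1, max))
}
```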
The following examples assume that the file *dataset.csv* with the extracted features already exists in the `skeleton_actions/` directory. To generate this file, run the feature extraction code in the script `preprocess_skeleton_actions.R`.
Once the dataset is in a suitable format, we proceed to **train our mixed model**. The script containing the full code for training the different types of models is `classify_skeleton_actions.R`. This script makes use of the *dataset.csv* file.
First, the auxiliary functions are loaded because we will use the `normalize()` function to normalize the data. We will use a Random Forest for the classification and the `caret` package to compute the performance metrics.
```
source(file.path("..","auxiliary_functions","globals.R"))
source(file.path("..","auxiliary_functions","functions.R"))
library(randomForest)
library(caret)
# Path to the csv file containing the extracted features.
# preprocess_skeleton_actions.R contains
# the code used to extract the features.
filepath <- file.path(datasets_path,
"skeleton_actions",
"dataset.csv")
# Load dataset.
dataset <- read.csv(filepath, stringsAsFactors = T)
# Extract unique labels.
unique.actions <- as.character(unique(dataset$label))
# Print the unique labels.
print(unique.actions)
#> [1] "a1" "a10" "a11" "a12" "a13" "a14" "a15" "a16" "a17"
#> [10] "a18" "a19" "a2" "a20" "a21" "a22" "a23" "a24" "a25"
#> [19] "a26" "a27" "a3" "a4" "a5" "a6" "a7" "a8" "a9"
```
The `unique.actions` variable stores the names of all actions. We will need it later to define the levels of the factor object. Next, we generate \\(10\\) folds and define some variables to store the performance metrics, including the *accuracy*, *recall*, and *precision*. In each iteration during cross\-validation, we will compute and store those performance metrics.
```
k <- 10 # Number of folds.
set.seed(1234)
folds <- sample(k, nrow(dataset), replace = TRUE)
accuracies <- NULL; recalls <- NULL; precisions <- NULL
```
In the next code snippet, the actual cross\-validation is performed. This is just the usual cross\-validation procedure. The `normalize()` function defined in the auxiliary functions is used to normalize the data by only learning the parameters from the train set and applying them to the test set. Then, the Random Forest is fitted with the train set. One thing to note here is that the `userid` field is removed: `trainset[,-1]` since we are not using users’ information in the mixed model. Then, predictions on the test set are obtained and the accuracy, recall, and precision are computed during each iteration.
```
# Perform k-fold cross-validation.
for(i in 1:k){
trainset <- dataset[which(folds != i),]
testset <- dataset[which(folds == i),]
#Normalize.
res <- normalize(trainset, testset)
trainset <- res$train
testset <- res$test
rf <- randomForest(label ~., trainset[,-1])
preds.rf <- as.character(predict(rf,
newdata = testset[,-1]))
groundTruth <- as.character(testset$label)
cm.rf <- confusionMatrix(factor(preds.rf,
levels = unique.actions),
factor(groundTruth,
levels = unique.actions))
accuracies <- c(accuracies, cm.rf$overall["Accuracy"])
metrics <- colMeans(cm.rf$byClass[,c("Recall",
"Specificity",
"Precision",
"F1")],
na.rm = TRUE)
recalls <- c(recalls, metrics["Recall"])
precisions <- c(precisions, metrics["Precision"])
}
```
Finally, the average performance across folds for each of the metrics is printed.
```
# Print performance metrics.
mean(accuracies)
#> [1] 0.9277258
mean(recalls)
#> [1] 0.9372515
mean(precisions)
#> [1] 0.9208455
```
The results look promising with an average *accuracy* of \\(92\.7\\%\\), a *recall* of \\(93\.7\\%\\), and a *precision* of \\(92\.0\\%\\). One important thing to remember is that the mixed model assumes that the training data contains instances belonging to users in the test set. Thus, the model already knows a little bit about the users in the test set.
Now, imagine that you want to estimate the performance of the model in a situation where a completely new user is shown to the model, that is, the model does not know anything about this user. We can model those situations using a **user\-independent model** which is the topic of the next section.
9\.2 User\-independent Models
-----------------------------
The **user\-independent** model allows us to estimate the performance of a system on new users. That is, the model does not contain any information about the target user. This resembles a scenario when the user wants to use a service out\-of\-the\-box without having to go through a calibration process or having to collect training data. To build a user\-independent model we just need to make sure that the training data does not contain any information about the users on the test set. We can achieve this by splitting the dataset into two disjoint groups of users based on their ids. For example, assign \\(70\\%\\) of the users to the train set and the remaining to the test set.
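A minimal sketch of such a user\-based split, assuming the *userid* column of our dataset, could look like this:

```
# Assign 70% of the users (not rows) to the train set.
set.seed(1234)
ids <- unique(dataset$userid)
train.ids <- sample(ids, size = round(0.7 * length(ids)))
trainset <- dataset[dataset$userid %in% train.ids, ]
testset <- dataset[!(dataset$userid %in% train.ids), ]
```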
If the dataset is small, we can optimize its usage by performing **leave\-one\-user\-out cross validation**. That is, if the dataset has \\(n\\) users, then, \\(n\\) iterations are performed. In each iteration, one user is selected as the test set and the remaining are used as the train set. Figure [9\.4](multiuser.html#fig:loov) illustrates an example of *leave\-one\-user\-out cross validation* for the first \\(2\\) iterations.
FIGURE 9\.4: First 2 iterations of leave\-one\-user\-out cross validation.
By doing this, we guarantee that the model knows nothing about the target user. To implement this leave\-one\-user\-out validation method in our skeleton recognition case, let’s first define some initialization variables. These include the `unique.users` variable which stores the ids of all users in the database. As before, we will compute the *accuracy*, *recall*, and *precision*, so we define variables to store those metrics for each user.
```
# Get a list of unique users.
unique.users <- as.character(unique(dataset$userid))
# Print the unique user ids.
unique.users
#> [1] "s1" "s2" "s3" "s4" "s5" "s6" "s7" "s8"
accuracies <- NULL; recalls <- NULL; precisions <- NULL
```
Then, we iterate through each user, build the corresponding train and test sets, and train the classifiers. Here, we make sure that the test set only includes data points belonging to a single user.
```
set.seed(1234)
for(user in unique.users){
testset <- dataset[which(dataset$userid == user),]
trainset <- dataset[which(dataset$userid != user),]
# Normalize. Not really needed here since Random Forest
# is not affected by different scales.
res <- normalize(trainset, testset)
trainset <- res$train
testset <- res$test
rf <- randomForest(label ~., trainset[,-1])
preds.rf <- as.character(predict(rf, newdata = testset[,-1]))
groundTruth <- as.character(testset$label)
cm.rf <- confusionMatrix(factor(preds.rf,
levels = unique.actions),
factor(groundTruth,
levels = unique.actions))
accuracies <- c(accuracies, cm.rf$overall["Accuracy"])
metrics <- colMeans(cm.rf$byClass[,c("Recall",
"Specificity",
"Precision",
"F1")],
na.rm = TRUE)
recalls <- c(recalls, metrics["Recall"])
precisions <- c(precisions, metrics["Precision"])
}
```
Now we print the average performance metrics across users.
```
mean(accuracies)
#> [1] 0.5807805
mean(recalls)
#> [1] 0.5798611
mean(precisions)
#> [1] 0.6539715
```
Those numbers are surprising! In the previous section, our **mixed model** had an accuracy of \\(92\.7\\%\\) and now the **user\-independent model** has an accuracy of only \\(58\.0\\%\\)! This is because the latter didn’t know anything about the target user. Since each person is different, the **user\-independent model** was not able to capture the patterns of new users and this had a big impact on the performance.
**When should a user\-independent model be used to validate a system?**
1. When you expect the system to be used out\-of\-the\-box by new users and the system does not have any data from those new users.
The main advantage of the user\-independent model is that it does not require training data from the *target users* so they can start using it right away at the expense of lower accuracy.
The opposite case is when a model is trained specifically for the *target user*. This model is called the **user\-dependent model** and will be presented in the next section.
9\.3 User\-dependent Models
---------------------------
A **user\-dependent model** is trained with data belonging only to the *target user*. In general, this type of model performs better compared to the *mixed model* and *user\-independent model*. This is because the model captures the particularities of a specific user. The way to evaluate user\-dependent models is to iterate through each user. For each user, build and test a model only with her/his data. The per\-user evaluation can be done using \\(k\\)\-fold cross\-validation, for example. For the skeleton database, we only have \\(4\\) instances per type of action. The number of unique classes (\\(27\\)) is high compared to the number of instances per action. If we do, for example, \\(10\\)\-fold cross\-validation, it is very likely that the train sets will not contain examples for several of the possible actions. To avoid this, we will do *leave\-one\-out cross validation* within each user. This means that we need to iterate through each instance. In each iteration, the selected instance is used as the test set and the remaining ones are used for the train set.
```
unique.users <- as.character(unique(dataset$userid))
accuracies <- NULL; recalls <- NULL; precisions <- NULL
set.seed(1234)
# Iterate through each user.
for(user in unique.users){
print(paste0("Evaluating user ", user))
user.data <- dataset[which(dataset$userid == user), -1]
# Leave-one-out cross validation within each user.
predictions.rf <- NULL; groundTruth <- NULL
for(i in 1:nrow(user.data)){
# Normalize. Not really needed here since Random Forest
# is not affected by different scales.
testset <- user.data[i,]
trainset <- user.data[-i,]
res <- normalize(trainset, testset)
testset <- res$test
trainset <- res$train
rf <- randomForest(label ~., trainset)
preds.rf <- as.character(predict(rf, newdata = testset))
predictions.rf <- c(predictions.rf, preds.rf)
groundTruth <- c(groundTruth, as.character(testset$label))
}
cm.rf <- confusionMatrix(factor(predictions.rf,
levels = unique.actions),
factor(groundTruth,
levels = unique.actions))
accuracies <- c(accuracies, cm.rf$overall["Accuracy"])
metrics <- colMeans(cm.rf$byClass[,c("Recall",
"Specificity",
"Precision",
"F1")],
na.rm = TRUE)
recalls <- c(recalls, metrics["Recall"])
precisions <- c(precisions, metrics["Precision"])
} # end of users iteration.
```
We iterated through each user, performed leave\-one\-out validation for each independently of the others, and stored the results. We now compute the average performance across all users.
```
# Print average performance across users.
mean(accuracies)
#> [1] 0.943114
mean(recalls)
#> [1] 0.9425154
mean(precisions)
#> [1] 0.9500772
```
This time, the average accuracy was \\(94\.3\\%\\) which is higher than the accuracy achieved with the mixed model and the user\-independent model. The average recall and precision were also higher compared to the other types of models. The reason is that each model was tailored to a particular user.
**When should a user\-dependent model be used to validate a system?**
1. When the model will be trained only using data from the target user.
In general, user\-dependent models have the best accuracy. The disadvantage is that they require training data from the target user and for some applications, collecting training data can be very difficult and expensive.
Can we have a system that has the best of both worlds between user\-dependent and user\-independent models? That is, a model that is as accurate as a user\-dependent model but requires small quantities of training data from the target user. The answer is *yes*, and this is covered in the next section (*User\-adaptive Models*).
9\.4 User\-adaptive Models
--------------------------
We have already talked about some of the limitations of **user\-dependent** and **user\-independent** models. On one hand, user\-dependent models require training data from the target user. In many situations, collecting training data is difficult. On the other hand, user\-independent models do not need data from the target user but are less accurate. To overcome those limitations, models that evolve over time as more information is available can be built. One can start with a user\-independent model and as more data becomes available from the target user, the model is updated accordingly. In this case, there is no need for a user to wait before using the system and as new feedback is available, the model gets better and better by learning the specific patterns of the user.
In this section, I will explain how a technique called **transfer learning** can be used to build an **adaptive model** that updates itself as new training data is available. First, in the following subsection the idea of transfer learning is introduced and next, the method is used to build an adaptive model for activity recognition.
### 9\.4\.1 Transfer Learning
In machine learning, **transfer learning** refers to the idea of using the knowledge gained when solving one problem to solve a different one. The new problem can be similar to, but also quite different from, the original one. For example, a model trained to detect smiles from images could also be used to predict gender (of course, with some fine\-tuning). In humans, learning is a lifelong process in which many tasks are interrelated. When faced with a new problem, we tend to find solutions that have worked in the past for similar problems. However, in machine learning, most of the time models are trained from scratch for every new problem. For many tasks, training a model from scratch is very time consuming and requires a lot of effort, especially during the data collection and labeling phase.
The idea of transfer learning dates back to 1991 ([Pratt et al. 1991](#ref-pratt1991)) but with the advent of *deep learning* and in particular, with Convolutional Neural Networks (see chapter [8](deeplearning.html#deeplearning)), it has gained popularity because it has proven to be a valuable tool when solving challenging problems. In 2014, a CNN architecture called VGG16 was proposed by Simonyan and Zisserman ([2014](#ref-simonyan2014)) and excelled in the ILSVRC image recognition competition. This CNN was trained with more than \\(1\\) million images to recognize \\(1000\\) categories. It consists of several convolution layers, max pooling operations, and fully connected layers. In total, the network has \\(\\approx 138\\) million parameters and it took some weeks to train.
What if you wanted to add a new category to the \\(1000\\) labels? Or maybe, you only want to focus on a subset of the categories? With transfer learning you can take advantage of a network that has already been trained and adapt it to your particular problem. In the case of *deep learning*, the approach consists of ‘freezing’ the first layers of a network and only retraining (updating) the last layers for the particular problem. During training, the frozen layers’ parameters will not change and the unfrozen ones are updated as usual during the gradient descent procedure. As discussed in chapter [8](deeplearning.html#deeplearning), the first layers can act as feature extractors and be reused. With this approach, you can easily retrain a VGG16 network in an average computer and within a reasonable time. In fact, Keras already provides interfaces to common pre\-trained models that you can reuse.
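As a hedged sketch of this idea, the pre\-trained VGG16 shipped with Keras could be reused as follows (the \\(10\\)\-unit output layer and input size are illustrative):

```
library(keras)
# Load the convolutional base of VGG16 pre-trained on ImageNet.
base <- application_vgg16(weights = "imagenet",
                          include_top = FALSE,
                          input_shape = c(224, 224, 3))
# Freeze the pre-trained filters so they act as a fixed feature extractor.
freeze_weights(base)
# Add new trainable layers for the new task.
model <- keras_model_sequential() %>%
  base %>%
  layer_flatten() %>%
  layer_dense(units = 10, activation = 'softmax')
```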
In the following section we will use this idea to build a **user\-adaptive model** for activity recognition using transfer learning.
### 9\.4\.2 A User\-adaptive Model for Activity Recognition
`keras/adaptive_cnn.R`
For this example, we will use the *SMARTPHONE ACTIVITIES* dataset **encoded as images**. In chapter [7](representations.html#representations) (section: Images) I showed how timeseries data can be represented as an image. That section presented an example of how accelerometer data can be represented as an RGB color image where each channel corresponds to one of the acceleration axes (*x*, *y*, *z*). We will use the file `images.txt` that already contains the activities in image format. The procedure of converting the raw data into this format was explained in chapter [7](representations.html#representations) and the corresponding code is in the script `timeseries_to_images.R`. Since the input data are images, we will use a Convolutional Neural Network (see chapter [8](deeplearning.html#deeplearning)).
The main objective will be to build an adaptive model with a small amount of training data from the target user. We will first build a **user\-independent model**. That is, we will select one of the users as the *target user*. We train the user\-independent model with data from the remaining users (excluding the target user). Then, we will apply transfer learning to adapt the model to the target user.
The target user’s data will be split into a test set and an **adaptive set**. The test set will be used to evaluate the performance of the model and the adaptive set will be used to fine\-tune the model. The adaptive set is used to simulate that we have obtained new data from the target user.
The complete code is in the script `keras/adaptive_cnn.R`. First, we start by reading the images file. Each row corresponds to one activity. The last two columns are the `userid` and the `class`. The first \\(300\\) columns correspond to the image pixels. Each image has a size of \\(10 \\times 10 \\times 3\\) (height, width, depth).
```
# Path to smartphone activities in image format.
filepath <- file.path(datasets_path,
"smartphone_activities",
"images.txt")
# Read data.
df <- read.csv(filepath, stringsAsFactors = F)
# Shuffle rows.
set.seed(1234)
df <- df[sample(nrow(df)),]
```
The rows happen to be ordered by user and activity, so we shuffle them to ensure that the model is not biased toward the last users and activities.
Since we will train a CNN using Keras, we need the classes to be in integer format. The following code is used to append a new column `intlabel` to the database. This new column contains the classes as integers. We also create a variable `mapping` to keep track of the mapping between integers and the actual labels. By printing the `mapping` variable we see that for example, the *‘Walking’* label has a corresponding integer value of \\(0\\), *‘Downstairs’* \\(1\\), and so on.
```
## Convert labels to integers starting at 0. ##
# Get the unique labels.
labels <- unique(df$label)
mapping <- 0:(length(labels) - 1)
names(mapping) <- labels
print(mapping)
#> Walking Downstairs Jogging Standing Upstairs Sitting
#> 0 1 2 3 4 5
# Append labels as integers at the end of data frame.
df$intlabel <- mapping[df$label]
```
Now we store the unique users’ ids in the `users` variable. After printing the variable’s values, notice that there are \\(19\\) distinct users in this database. The original database has more users but we only kept those that performed all the activities. Then, we select one of the users to act as the *target user*. I will just select one of them at random (turned out to be user \\(24\\)). Feel free to select another user if you want.
```
# Get the unique user ids.
users <- unique(df$userid)
# Print all user ids.
print(users)
#> [1] 29 20 18 8 32 27 3 36 34 5 7 12 6 21 24 31 13 33 19
# Choose one user at random to be the target user.
targetUser <- sample(users, 1)
```
Next, we split the data into two sets. The first set, `trainset`, contains the data from all users **excluding the target user**. We create two variables: `train.y` and `train.x`. The first one has the labels as integers and the second one has the actual image pixels (features). The second set, `target.data`, contains data only from the target user.
```
# Split into train and target user sets.
# The train set includes data from all users excluding targetUser.
trainset <- df[df$userid != targetUser,]
# Save train labels in a separate variable.
train.y <- trainset$intlabel
# Save train pixels in a separate variable.
train.x <- as.matrix(trainset[,-c(301,302,303)])
# This contains all data from the target user.
target.data <- df[df$userid == targetUser,]
```
Then, we split the target user’s data into \\(50\\%\\) test data and \\(50\\%\\) adaptive data (the code is omitted here; a possible implementation is sketched after the list below) so that we end up with the following \\(4\\) variables:
1. `target.adaptive.y` Integer labels for the adaptive data of the target user.
2. `target.adaptive.x` Pixels of the adaptive data of the target user.
3. `target.test.y` Integer labels for the test data of the target user.
4. `target.test.x` Pixels of the test data of the target user.
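A possible implementation of the omitted split (my own sketch; the actual code is in `keras/adaptive_cnn.R`) is the following:

```
# Randomly send half of the target user's rows to the adaptive set
# and the other half to the test set.
idx <- sample(nrow(target.data), size = floor(0.5 * nrow(target.data)))
adaptive <- target.data[idx, ]
test <- target.data[-idx, ]
target.adaptive.y <- adaptive$intlabel
target.adaptive.x <- as.matrix(adaptive[, -c(301, 302, 303)])
target.test.y <- test$intlabel
target.test.x <- as.matrix(test[, -c(301, 302, 303)])
```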
We also need to normalize the data and reshape it into the actual image format since, in their current form, the pixels are stored as \\(1\\)\-dimensional arrays. We learn the normalization parameters only from the train set and then use the `normalize.reshape()` function (defined in the same script file) to perform the actual normalization and formatting.
```
# Learn min and max values from train set for normalization.
maxv <- max(train.x)
minv <- min(train.x)
# Normalize and reshape. May take some minutes.
train.x <- normalize.reshape(train.x, minv, maxv)
target.adaptive.x <- normalize.reshape(target.adaptive.x, minv, maxv)
target.test.x <- normalize.reshape(target.test.x, minv, maxv)
```
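For reference, a possible definition of `normalize.reshape()` consistent with its use above might be the following (the actual function is defined in the same script; the exact pixel ordering depends on how `images.txt` was generated):

```
normalize.reshape <- function(x, minv, maxv) {
  # Min-max scaling with the parameters learned from the train set.
  x <- (x - minv) / (maxv - minv)
  # One 10 x 10 x 3 image per row.
  array(x, dim = c(nrow(x), 10, 10, 3))
}
```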
Let’s inspect the structure of the final datasets.
```
dim(train.x)
#> [1] 6399 10 10 3
dim(target.adaptive.x)
#> [1] 124 10 10 3
dim(target.test.x)
#> [1] 124 10 10 3
```
Here, we see that the train set has \\(6399\\) instances (images). The adaptive and test sets both have \\(124\\) instances.
Now that we are done with the preprocessing, it is time to build the CNN model! This one will be the initial user\-independent model and is trained with all the train data `train.x`, `train.y`.
```
model <- keras_model_sequential()
model %>%
layer_conv_2d(name = "conv1",
filters = 8,
kernel_size = c(2,2),
activation = 'relu',
input_shape = c(10,10,3)) %>%
layer_conv_2d(name = "conv2",
filters = 16,
kernel_size = c(2,2),
activation = 'relu') %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_flatten() %>%
layer_dense(name = "hidden1", units = 32,
activation = 'relu') %>%
layer_dropout(0.25) %>%
layer_dense(units = 6, activation = 'softmax')
```
This CNN has two convolutional layers followed by a max pooling operation, a fully connected layer, and an output layer. One important thing to note is that **we have specified a name for each layer** with the `name` parameter. For example, the first convolution’s name is `conv1`, the second one is `conv2`, and the fully connected layer was named `hidden1`. Those names must be unique because they will be used to select specific layers to freeze and unfreeze.
If we print the model’s summary (Figure [9\.5](multiuser.html#fig:adaptSummary1)) we see that in total it has \\(9,054\\) **trainable parameters** and \\(0\\) **non\-trainable parameters**. This means that all the parameters of the network will be updated during the gradient descent procedure, as usual.
```
# Print summary.
summary(model)
```
FIGURE 9\.5: Summary of initial user\-independent model.
The next code will compile the model and initiate the training phase.
```
# Compile model.
model %>% compile(
loss = 'sparse_categorical_crossentropy',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c("accuracy")
)
# Fit the user-independent model.
history <- model %>% fit(
train.x, train.y,
epochs = 50,
batch_size = 8,
validation_split = 0.15,
verbose = 1,
view_metrics = TRUE
)
plot(history)
```
FIGURE 9\.6: Loss and accuracy plot of the initial user\-independent model.
Note that this time the loss was defined as `loss = 'sparse_categorical_crossentropy'` instead of the usual `loss = 'categorical_crossentropy'`. Here, the `sparse_` prefix was added. You may have noted that in this example we did not one\-hot encode the labels; they were only transformed into integers. By adding the `sparse_` prefix we are telling Keras that our labels are not one\-hot\-encoded but encoded as integers starting at \\(0\\). It will then perform the one\-hot encoding for us. This is a little trick that saved us some time.
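The equivalent alternative would be to one\-hot encode the labels ourselves, for example with `to_categorical()`, and keep the usual loss:

```
# One-hot encode the integer labels (6 activity classes).
train.y.onehot <- to_categorical(train.y, num_classes = 6)
# Then compile with loss = 'categorical_crossentropy' instead.
```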
Figure [9\.6](multiuser.html#fig:adaptLoss1) shows a plot of the loss and accuracy during training. Then, we save the model so we can load it later. Let’s also estimate the model’s performance on the target user test set.
```
# Save model.
save_model_hdf5(model, "user-independent.hdf5")
# Compute performance (accuracy) on the target user test set.
model %>% evaluate(target.test.x, target.test.y)
#> loss accuracy
#> 1.4837638 0.6048387
```
The overall *accuracy* of this user\-independent model when tested on the target user was \\(60\.4\\%\\) (quite low). Now, we can apply transfer learning and see if the model does better. We will ‘freeze’ the first convolution layer and only update the second convolution layer and the remaining fully connected layers using the target user’s adaptive data. The following code loads the previously trained user\-independent model. Then all the CNN’s weights are frozen using the `freeze_weights()` function. The `from` parameter specifies the first layer (inclusive) from which the parameters are to be frozen. Here, it is set to \\(1\\) so all parameters in the network are ‘frozen’. Then, we use the `unfreeze_weights()` function to specify from which layer (inclusive) the parameters should be unfrozen. In this case, we want to retrain from the second convolutional layer, so we set it to `conv2` which is how we named this layer earlier.
```
adaptive.model <- load_model_hdf5("user-independent.hdf5")
# Freeze all layers.
freeze_weights(adaptive.model, from = 1)
# Unfreeze layers from conv2.
unfreeze_weights(adaptive.model, from = "conv2")
```
After those changes, we need to compile the model so the modifications take effect.
```
# Compile model. We need to compile after freezing/unfreezing weights.
adaptive.model %>% compile(
loss = 'sparse_categorical_crossentropy',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c("accuracy")
)
summary(adaptive.model)
```
FIGURE 9\.7: Summary of user\-independent model after freezing first convolutional layer.
After printing the summary (Figure [9\.7](multiuser.html#fig:adaptSummary2)), note that the number of **trainable and non\-trainable parameters** has changed. Now, the non\-trainable parameters are \\(104\\) (before they were \\(0\\)). These \\(104\\) parameters correspond to the first convolutional layer but this time they will not be updated during the gradient descent training phase.
The following code will retrain the model using the adaptive data but keeping the first convolutional layer fixed.
```
# Update model with adaptive data.
history <- adaptive.model %>% fit(
target.adaptive.x, target.adaptive.y,
epochs = 50,
batch_size = 8,
validation_split = 0,
verbose = 1,
view_metrics = TRUE
)
```
Note that this time the `validation_split` was set to \\(0\\). This is because the target user’s data set is very small, so there is not enough data to use as a validation set. One possible approach to overcome this is to leave a percentage of users out when building the train set for the user\-independent model. Then, use those left\-out users to find the most appropriate layers to keep frozen. Once you are happy with the results, evaluate the model on the target user.
```
# Compute performance (accuracy) on the target user test set.
adaptive.model %>% evaluate(target.test.x, target.test.y)
#> loss accuracy
#> 0.5173104 0.8548387
```
If we evaluate the adaptive model’s performance on the target user’s test set, the accuracy is \\(85\.4\\%\\), which is a considerable increase (\\(\\approx 25\\) percentage points)!
At this point, you may be wondering whether this accuracy increase was due to the fact that the model was trained for an additional \\(50\\) epochs. To validate this, we can re\-train the initial user\-independent model for \\(50\\) more epochs.
```
retrained_model <- load_model_hdf5("user-independent.hdf5")
# Fit the user-independent model for 50 more epochs.
history <- retrained_model %>% fit(
train.x, train.y,
epochs = 50,
batch_size = 8,
validation_split = 0.15,
verbose = 1,
view_metrics = TRUE
)
# Compute performance (accuracy) on the target user test set.
retrained_model %>% evaluate(target.test.x, target.test.y)
#> loss accuracy
#> 1.3033305 0.7096774
```
After re\-training the user\-independent model for \\(50\\) more epochs, its *accuracy* increased to \\(70\.9\\%\\). On the other hand, the adaptive model was trained with much less data and produced a much better result (\\(85\.4\\%\\)): only \\(124\\) instances as compared to the user\-independent model’s \\(\\approx 5440\\) instances, that is, the \\(6399\\) instances minus the \\(15\\%\\) used as the validation set. These results highlight one of the main advantages of transfer learning, which is a reduction in the amount of training data needed.
9\.5 Summary
------------
Many real\-life scenarios involve multi\-user settings. That is, the system heavily depends on the specific behavior of a given target user. This chapter covered different types of models that can be used to evaluate the performance of a system in such a scenario.
* A **multi\-user setting** is one in which the results depend heavily on the target user.
* **Inter\-** and **intra\-user variance** are the differences between users and within the same user, respectively.
* **Mixed models** are trained without considering user (user id) information.
* **User\-independent models** are trained without including data from the *target user*.
* **User\-dependent models** are trained only with data from the *target user*.
* **User\-adaptive models** can be adapted to a particular *target user* as more data is available.
* **Transfer learning** is a method that can be used to adapt a model to a particular user without requiring big quantities of data.
9\.1 Mixed Models
-----------------
Mixed models are trained and validated as ordinary, without considering information about mappings between data points and users. Suppose we have a dataset as shown in Figure [9\.1](multiuser.html#fig:tblMixModel). The first column is the user id, the second column the label we want to predict and the last two columns are two arbitrary features.
FIGURE 9\.1: Example dataset with a binary label and 2 features.
With a mixed model, we would just remove the *userid* column and perform \\(k\\)\-fold cross\-validation or hold\-out validation as usual. In fact, this is what we have been doing so far. By doing so, some random data points will end up in the train set and others in the test set regardless of which data point was generated by which user. The user rows are just *mixed*, thus the *mixed model* name. This model assumes that the data was generated by a single user. One disadvantage of validating a system using a mixed model is that the performance results could be overestimated. When randomly splitting into train and test sets, some data points for a given user could end up in each of the splits. At inference time, when presenting a test sample belonging to a particular user, it is likely that the training set of the model already included some data from that particular user. Thus, the model already knows a little bit about that user so we can expect an accurate prediction. However, this assumption not always holds true. If the model is to be used on a **new user** that the model has never seen before, then, it may not produce very accurate predictions.
**When should a mixed model be used to validate a system?**
1. When you know you will have available train data belonging to the intended target users.
2. In many cases, a dataset already has missing information about the mapping between rows and users. That is, a *userid* column is not present. In those cases, the best performance estimation would be through the use of a mixed model.
To demonstrate the differences between the three types of models (mixed, user\-independent, and user\-dependent) I will use the *SKELETON ACTIONS* dataset. First, a brief description of the dataset is presented including details about how the features were extracted. Then, the dataset is used to train a mixed model and in the following subsections, it is used to train user\-independent and user\-dependent models.
### 9\.1\.1 Skeleton Action Recognition with Mixed Models
`preprocess_skeleton_actions.R` `classify_skeleton_actions.R`
To demonstrate the three different types of models I chose the **UTD\-MHAD dataset** ([Chen, Jafari, and Kehtarnavaz 2015](#ref-chen2015utd)) and from now on, I will refer to it as the *SKELETON ACTIONS* dataset. This database is suitable because it was collected by \\(8\\) persons (\\(4\\) females/\\(4\\) males) and each file has a subject id, thus, we know which actions were collected by which users. There are \\(27\\) actions including: *‘right\-hand wave’*, *‘two hand front clap’*, *‘basketball shoot’*, *‘front boxing’*, etc.
The data was recorded using a Kinect camera and an inertial sensor unit. Each subject repeated each of the \\(27\\) actions \\(4\\) times. More information about the collection process and pictures is available in the original dataset website [https://personal.utdallas.edu/\~kehtar/UTD\-MHAD.html](https://personal.utdallas.edu/~kehtar/UTD-MHAD.html).
For our examples, I only consider the *skeleton data* generated by the Kinect camera. These data consists of human body joints (\\(20\\) joints). Each file contains one action for one user and one repetition. The file names are of the form: `aA_sS_tT_skeleton.mat`. The `A` is the action id, the `S` is the subject id and the `T` is the trial (repetition) number. For each time frame, the \\(3\\)D positions of the \\(20\\) joints are recorded.
The script `preprocess_skeleton_actions.R` shows how to read the files and plot the actions. The files are stored in Matlab format. The library `R.matlab` ([Bengtsson 2018](#ref-rmatlab)) can be used to read the files.
```
# Path to one of the files.
filepath <- "/skeleton_actions/a7_s1_t1_skeleton.mat"
# Read skeleton file.
df <- readMat(filepath)$d.skel
# Print dimensions.
dim(df)
#> [1] 20 3 66
```
From the file name, we see that this corresponds to action \\(7\\) (basketball shoot), from subject \\(1\\) and trial \\(1\\). The `readMat()` function reads the file contents and stores them as a \\(3\\)D array in `df`. If we print the dimensions we see that the first one corresponds to the number of joints, the second one are the positions (*x*, *y*, *z*), and the last dimension is the number of frames, in this case \\(66\\) frames.
We extract the first time\-frame as follows:
```
# Select the first frame.
frame <- data.frame(df[, , 1])
# Print dimensions.
dim(frame)
#> [1] 20 3
```
Each frame can then be plotted. The plotting code is included in the script. Figure [9\.2](multiuser.html#fig:sklBasket) shows how the skeleton looks like for six of the time frames. The script also has code to animate the actions.
FIGURE 9\.2: Skeleton of basketball shoot action. Six frames sampled from the entire sequence.
We will represent each action (file) as a feature vector. The same script also shows the code to extract the feature vectors from each action. To extract the features, a reference point in the skeleton is selected, in this case the spine (joint \\(3\\)). Then, for each time frame, the distance between all joints (excluding the reference point) and the reference point is calculated. Finally, for each distance, the *mean*, *min*, and *max* are computed across all time frames. Since there are \\(19\\) joints (excluding the spine), we end up with \\(19\*3\=57\\) features. Figure [9\.3](multiuser.html#fig:sklFeatures) shows how the final dataset looks like. It only shows the first four features out of the \\(57\\), the user id and the labels.
FIGURE 9\.3: First rows of the skeleton dataset after feature extraction showing the first 4 features. Source: Original data from C. Chen, R. Jafari, and N. Kehtarnavaz, “UTD\-MHAD: A Multimodal Dataset for Human Action Recognition Utilizing a Depth Camera and a Wearable Inertial Sensor”, *Proceedings of IEEE International Conference on Image Processing*, Canada, September 2015\.
The following examples assume that the file *dataset.csv* with the extracted features already exsits in the `skeleton_actions/` directory. To generate this file, run the feature extraction code in the script `preprocess_skeleton_actions.R`.
Once the dataset is in a suitable format, we proceed to **train our mixed model**. The script containing the full code for training the different types of models is `classify_skeleton_actions.R`. This script makes use of the *dataset.csv* file.
First, the auxiliary functions are loaded because we will use the `normalize()` function to normalize the data. We will use a Random Forest for the classification and the `caret` package to compute the performance metrics.
```
source(file.path("..","auxiliary_functions","globals.R"))
source(file.path("..","auxiliary_functions","functions.R"))
library(randomForest)
library(caret)
# Path to the csv file containing the extracted features.
# preprocess_skeleton_actins.R contains
# the code used to extract the features.
filepath <- file.path(datasets_path,
"skeleton_actions",
"dataset.csv")
# Load dataset.
dataset <- read.csv(filepath, stringsAsFactors = T)
# Extract unique labels.
unique.actions <- as.character(unique(dataset$label))
# Print the unique labels.
print(unique.actions)
#> [1] "a1" "a10" "a11" "a12" "a13" "a14" "a15" "a16" "a17"
#> [10] "a18" "a19" "a2" "a20" "a21" "a22" "a23" "a24" "a25"
#> [19] "a26" "a27" "a3" "a4" "a5" "a6" "a7" "a8" "a9"
```
The `unique.actions` variable stores the name of all actions. We will need it later to define the levels of the factor object. Next, we generate \\(10\\) folds and define some variables to store the performance metrics including the *accuracy*, *recall*, and *precision*. In each iteration during cross\-validation, we will compute and store those performance metrics.
```
k <- 10 # Number of folds.
set.seed(1234)
folds <- sample(k, nrow(dataset), replace = TRUE)
accuracies <- NULL; recalls <- NULL; precisions <- NULL
```
In the next code snippet, the actual cross\-validation is performed. This is just the usual cross\-validation procedure. The `normalize()` function defined in the auxiliary functions is used to normalize the data by only learning the parameters from the train set and applying them to the test set. Then, the Random Forest is fitted with the train set. One thing to note here is that the `userid` field is removed: `trainset[,-1]` since we are not using users’ information in the mixed model. Then, predictions on the test set are obtained and the accuracy, recall, and precision are computed during each iteration.
```
# Perform k-fold cross-validation.
for(i in 1:k){
trainset <- dataset[which(folds != i,),]
testset <- dataset[which(folds == i,),]
#Normalize.
res <- normalize(trainset, testset)
trainset <- res$train
testset <- res$test
rf <- randomForest(label ~., trainset[,-1])
preds.rf <- as.character(predict(rf,
newdata = testset[,-1]))
groundTruth <- as.character(testset$label)
cm.rf <- confusionMatrix(factor(preds.rf,
levels = unique.actions),
factor(groundTruth,
levels = unique.actions))
accuracies <- c(accuracies, cm.rf$overall["Accuracy"])
metrics <- colMeans(cm.rf$byClass[,c("Recall",
"Specificity",
"Precision",
"F1")],
na.rm = TRUE)
recalls <- c(recalls, metrics["Recall"])
precisions <- c(precisions, metrics["Precision"])
}
```
Finally, the average performance across folds for each of the metrics is printed.
```
# Print performance metrics.
mean(accuracies)
#> [1] 0.9277258
mean(recalls)
#> [1] 0.9372515
mean(precisions)
#> [1] 0.9208455
```
The results look promising with an average *accuracy* of \\(92\.7\\%\\), a *recall* of \\(93\.7\\%\\), and a *precision* of \\(92\.0\\%\\). One important thing to remember is that the mixed model assumes that the training data contains instances belonging to users in the test set. Thus, the model already knows a little bit about the users in the test set.
Now, imagine that you want to estimate the performance of the model in a situation where a completely new user is shown to the model, that is, the model does not know anything about this user. We can model those situations using a **user\-independent model** which is the topic of the next section.
### 9\.1\.1 Skeleton Action Recognition with Mixed Models
`preprocess_skeleton_actions.R` `classify_skeleton_actions.R`
To demonstrate the three different types of models I chose the **UTD\-MHAD dataset** ([Chen, Jafari, and Kehtarnavaz 2015](#ref-chen2015utd)) and from now on, I will refer to it as the *SKELETON ACTIONS* dataset. This database is suitable because it was collected by \\(8\\) persons (\\(4\\) females/\\(4\\) males) and each file has a subject id, thus, we know which actions were collected by which users. There are \\(27\\) actions including: *‘right\-hand wave’*, *‘two hand front clap’*, *‘basketball shoot’*, *‘front boxing’*, etc.
The data was recorded using a Kinect camera and an inertial sensor unit. Each subject repeated each of the \\(27\\) actions \\(4\\) times. More information about the collection process and pictures is available in the original dataset website [https://personal.utdallas.edu/\~kehtar/UTD\-MHAD.html](https://personal.utdallas.edu/~kehtar/UTD-MHAD.html).
For our examples, I only consider the *skeleton data* generated by the Kinect camera. These data consists of human body joints (\\(20\\) joints). Each file contains one action for one user and one repetition. The file names are of the form: `aA_sS_tT_skeleton.mat`. The `A` is the action id, the `S` is the subject id and the `T` is the trial (repetition) number. For each time frame, the \\(3\\)D positions of the \\(20\\) joints are recorded.
The script `preprocess_skeleton_actions.R` shows how to read the files and plot the actions. The files are stored in Matlab format. The library `R.matlab` ([Bengtsson 2018](#ref-rmatlab)) can be used to read the files.
```
# Path to one of the files.
filepath <- "/skeleton_actions/a7_s1_t1_skeleton.mat"
# Read skeleton file.
df <- readMat(filepath)$d.skel
# Print dimensions.
dim(df)
#> [1] 20 3 66
```
From the file name, we see that this corresponds to action \\(7\\) (basketball shoot), from subject \\(1\\) and trial \\(1\\). The `readMat()` function reads the file contents and stores them as a \\(3\\)D array in `df`. If we print the dimensions we see that the first one corresponds to the number of joints, the second one are the positions (*x*, *y*, *z*), and the last dimension is the number of frames, in this case \\(66\\) frames.
We extract the first time\-frame as follows:
```
# Select the first frame.
frame <- data.frame(df[, , 1])
# Print dimensions.
dim(frame)
#> [1] 20 3
```
Each frame can then be plotted. The plotting code is included in the script. Figure [9\.2](multiuser.html#fig:sklBasket) shows how the skeleton looks like for six of the time frames. The script also has code to animate the actions.
FIGURE 9\.2: Skeleton of basketball shoot action. Six frames sampled from the entire sequence.
We will represent each action (file) as a feature vector. The same script also shows the code to extract the feature vectors from each action. To extract the features, a reference point in the skeleton is selected, in this case the spine (joint \\(3\\)). Then, for each time frame, the distance between all joints (excluding the reference point) and the reference point is calculated. Finally, for each distance, the *mean*, *min*, and *max* are computed across all time frames. Since there are \\(19\\) joints (excluding the spine), we end up with \\(19\*3\=57\\) features. Figure [9\.3](multiuser.html#fig:sklFeatures) shows how the final dataset looks like. It only shows the first four features out of the \\(57\\), the user id and the labels.
FIGURE 9\.3: First rows of the skeleton dataset after feature extraction showing the first 4 features. Source: Original data from C. Chen, R. Jafari, and N. Kehtarnavaz, “UTD\-MHAD: A Multimodal Dataset for Human Action Recognition Utilizing a Depth Camera and a Wearable Inertial Sensor”, *Proceedings of IEEE International Conference on Image Processing*, Canada, September 2015\.
The following examples assume that the file *dataset.csv* with the extracted features already exsits in the `skeleton_actions/` directory. To generate this file, run the feature extraction code in the script `preprocess_skeleton_actions.R`.
Once the dataset is in a suitable format, we proceed to **train our mixed model**. The script containing the full code for training the different types of models is `classify_skeleton_actions.R`. This script makes use of the *dataset.csv* file.
First, the auxiliary functions are loaded because we will use the `normalize()` function to normalize the data. We will use a Random Forest for the classification and the `caret` package to compute the performance metrics.
```
source(file.path("..","auxiliary_functions","globals.R"))
source(file.path("..","auxiliary_functions","functions.R"))
library(randomForest)
library(caret)
# Path to the csv file containing the extracted features.
# preprocess_skeleton_actins.R contains
# the code used to extract the features.
filepath <- file.path(datasets_path,
"skeleton_actions",
"dataset.csv")
# Load dataset.
dataset <- read.csv(filepath, stringsAsFactors = T)
# Extract unique labels.
unique.actions <- as.character(unique(dataset$label))
# Print the unique labels.
print(unique.actions)
#> [1] "a1" "a10" "a11" "a12" "a13" "a14" "a15" "a16" "a17"
#> [10] "a18" "a19" "a2" "a20" "a21" "a22" "a23" "a24" "a25"
#> [19] "a26" "a27" "a3" "a4" "a5" "a6" "a7" "a8" "a9"
```
The `unique.actions` variable stores the name of all actions. We will need it later to define the levels of the factor object. Next, we generate \\(10\\) folds and define some variables to store the performance metrics including the *accuracy*, *recall*, and *precision*. In each iteration during cross\-validation, we will compute and store those performance metrics.
```
k <- 10 # Number of folds.
set.seed(1234)
folds <- sample(k, nrow(dataset), replace = TRUE)
accuracies <- NULL; recalls <- NULL; precisions <- NULL
```
In the next code snippet, the actual cross\-validation is performed. This is just the usual cross\-validation procedure. The `normalize()` function defined in the auxiliary functions is used to normalize the data by only learning the parameters from the train set and applying them to the test set. Then, the Random Forest is fitted with the train set. One thing to note here is that the `userid` field is removed: `trainset[,-1]` since we are not using users’ information in the mixed model. Then, predictions on the test set are obtained and the accuracy, recall, and precision are computed during each iteration.
```
# Perform k-fold cross-validation.
for(i in 1:k){
trainset <- dataset[which(folds != i,),]
testset <- dataset[which(folds == i,),]
#Normalize.
res <- normalize(trainset, testset)
trainset <- res$train
testset <- res$test
rf <- randomForest(label ~., trainset[,-1])
preds.rf <- as.character(predict(rf,
newdata = testset[,-1]))
groundTruth <- as.character(testset$label)
cm.rf <- confusionMatrix(factor(preds.rf,
levels = unique.actions),
factor(groundTruth,
levels = unique.actions))
accuracies <- c(accuracies, cm.rf$overall["Accuracy"])
metrics <- colMeans(cm.rf$byClass[,c("Recall",
"Specificity",
"Precision",
"F1")],
na.rm = TRUE)
recalls <- c(recalls, metrics["Recall"])
precisions <- c(precisions, metrics["Precision"])
}
```
Finally, the average performance across folds for each of the metrics is printed.
```
# Print performance metrics.
mean(accuracies)
#> [1] 0.9277258
mean(recalls)
#> [1] 0.9372515
mean(precisions)
#> [1] 0.9208455
```
The results look promising with an average *accuracy* of \\(92\.7\\%\\), a *recall* of \\(93\.7\\%\\), and a *precision* of \\(92\.0\\%\\). One important thing to remember is that the mixed model assumes that the training data contains instances belonging to users in the test set. Thus, the model already knows a little bit about the users in the test set.
Now, imagine that you want to estimate the performance of the model in a situation where a completely new user is shown to the model, that is, the model does not know anything about this user. We can model those situations using a **user\-independent model** which is the topic of the next section.
9\.2 User\-independent Models
-----------------------------
The **user\-independent** model allows us to estimate the performance of a system on new users. That is, the model does not contain any information about the target user. This resembles a scenario when the user wants to use a service out\-of\-the\-box without having to go through a calibration process or having to collect training data. To build a user\-independent model we just need to make sure that the training data does not contain any information about the users on the test set. We can achieve this by splitting the dataset into two disjoint groups of users based on their ids. For example, assign \\(70\\%\\) of the users to the train set and the remaining to the test set.
If the dataset is small, we can optimize its usage by performing **leave\-one\-user\-out cross validation**. That is, if the dataset has \\(n\\) users, then, \\(n\\) iterations are performed. In each iteration, one user is selected as the test set and the remaining are used as the train set. Figure [9\.4](multiuser.html#fig:loov) illustrates an example of *leave\-one\-user\-out cross validation* for the first \\(2\\) iterations.
FIGURE 9\.4: First 2 iterations of leave\-one\-user\-out cross validation.
By doing this, we guarantee that the model does not know anything about the target user. To implement this leave\-one\-user\-out validation method in our skeleton recognition case, let’s first define some initialization variables. These include the `unique.users` variable which stores the ids of all users in the database. As before, we will compute the *accuracy*, *recall*, and *precision*, so we define variables to store those metrics for each user.
```
# Get a list of unique users.
unique.users <- as.character(unique(dataset$userid))
# Print the unique user ids.
unique.users
#> [1] "s1" "s2" "s3" "s4" "s5" "s6" "s7" "s8"
accuracies <- NULL; recalls <- NULL; precisions <- NULL
```
Then, we iterate through each user, build the corresponding train and test sets, and train the classifiers. Here, we make sure that the test set only includes data points belonging to a single user.
```
set.seed(1234)
for(user in unique.users){
testset <- dataset[which(dataset$userid == user),]
trainset <- dataset[which(dataset$userid != user),]
# Normalize. Not really needed here since Random Forest
# is not affected by different scales.
res <- normalize(trainset, testset)
trainset <- res$train
testset <- res$test
rf <- randomForest(label ~., trainset[,-1])
preds.rf <- as.character(predict(rf, newdata = testset[,-1]))
groundTruth <- as.character(testset$label)
cm.rf <- confusionMatrix(factor(preds.rf,
levels = unique.actions),
factor(groundTruth,
levels = unique.actions))
accuracies <- c(accuracies, cm.rf$overall["Accuracy"])
metrics <- colMeans(cm.rf$byClass[,c("Recall",
"Specificity",
"Precision",
"F1")],
na.rm = TRUE)
recalls <- c(recalls, metrics["Recall"])
precisions <- c(precisions, metrics["Precision"])
}
```
Now we print the average performance metrics across users.
```
mean(accuracies)
#> [1] 0.5807805
mean(recalls)
#> [1] 0.5798611
mean(precisions)
#> [1] 0.6539715
```
Those numbers are surprising! In the previous section, our **mixed model** had an accuracy of \\(92\.7\\%\\) and now the **user\-independent model** has an accuracy of only \\(58\.0\\%\\)! This is because the latter didn’t know anything about the target user. Since each person is different, the **user\-independent model** was not able to capture the patterns of new users and this had a big impact on the performance.
**When should a user\-independent model be used to validate a system?**
1. When you expect the system to be used out\-of\-the\-box by new users and the system does not have any data from those new users.
The main advantage of the user\-independent model is that it does not require training data from the *target users* so they can start using it right away at the expense of lower accuracy.
The opposite case is when a model is trained specifically for the *target user*. This model is called the **user\-dependent model** and will be presented in the next section.
9\.3 User\-dependent Models
---------------------------
A **user\-dependent model** is trained with data belonging only to the *target user*. In general, this type of model performs better compared to the *mixed model* and *user\-independent model*. This is because the model captures the particularities of a specific user. The way to evaluate user\-dependent models is to iterate through each user. For each user, build and test a model only with her/his data. The per\-user evaluation can be done using \\(k\\)\-fold cross\-validation, for example. For the skeleton database, we only have \\(4\\) instances per type of action. The number of unique classes (\\(27\\)) is high compared to the number of instances per action. If we do, for example, \\(10\\)\-fold cross\-validation, it is very likely that the train sets will not contain examples for several of the possible actions. To avoid this, we will do *leave\-one\-out cross validation* within each user. This means that we need to iterate through each instance. In each iteration, the selected instance is used as the test set and the remaining ones are used for the train set.
```
unique.users <- as.character(unique(dataset$userid))
accuracies <- NULL; recalls <- NULL; precisions <- NULL
set.seed(1234)
# Iterate through each user.
for(user in unique.users){
print(paste0("Evaluating user ", user))
user.data <- dataset[which(dataset$userid == user), -1]
# Leave-one-out cross validation within each user.
predictions.rf <- NULL; groundTruth <- NULL
for(i in 1:nrow(user.data)){
# Normalize. Not really needed here since Random Forest
# is not affected by different scales.
testset <- user.data[i,]
trainset <- user.data[-i,]
res <- normalize(trainset, testset)
testset <- res$test
trainset <- res$train
rf <- randomForest(label ~., trainset)
preds.rf <- as.character(predict(rf, newdata = testset))
predictions.rf <- c(predictions.rf, preds.rf)
groundTruth <- c(groundTruth, as.character(testset$label))
}
cm.rf <- confusionMatrix(factor(predictions.rf,
levels = unique.actions),
factor(groundTruth,
levels = unique.actions))
accuracies <- c(accuracies, cm.rf$overall["Accuracy"])
metrics <- colMeans(cm.rf$byClass[,c("Recall",
"Specificity",
"Precision",
"F1")],
na.rm = TRUE)
recalls <- c(recalls, metrics["Recall"])
precisions <- c(precisions, metrics["Precision"])
} # end of users iteration.
```
We iterated through each user, performed leave\-one\-out validation for each one independently of the others, and stored the results. We now compute the average performance across all users.
```
# Print average performance across users.
mean(accuracies)
#> [1] 0.943114
mean(recalls)
#> [1] 0.9425154
mean(precisions)
#> [1] 0.9500772
```
This time, the average accuracy was \\(94\.3\\%\\), which is higher than the accuracy achieved with the mixed model and the user\-independent model. The average recall and precision were also higher compared to the other types of models. The reason is that each model was targeted to a particular user.
**When should a user\-dependent model be used to validate a system?**
1. When the model will be trained only using data from the target user.
In general, user\-dependent models have the best accuracy. The disadvantage is that they require training data from the target user and for some applications, collecting training data can be very difficult and expensive.
Can we have a system that has the best of both worlds between user\-dependent and user\-independent models? That is, a model that is as accurate as a user\-dependent model but requires small quantities of training data from the target user. The answer is *yes*, and this is covered in the next section (*User\-adaptive Models*).
9\.4 User\-adaptive Models
--------------------------
We have already talked about some of the limitations of **user\-dependent** and **user\-independent** models. On one hand, user\-dependent models require training data from the target user. In many situations, collecting training data is difficult. On the other hand, user\-independent models do not need data from the target user but are less accurate. To overcome those limitations, models that evolve over time as more information is available can be built. One can start with a user\-independent model and as more data becomes available from the target user, the model is updated accordingly. In this case, there is no need for a user to wait before using the system and as new feedback is available, the model gets better and better by learning the specific patterns of the user.
In this section, I will explain how a technique called **transfer learning** can be used to build an **adaptive model** that updates itself as new training data is available. First, in the following subsection the idea of transfer learning is introduced and next, the method is used to build an adaptive model for activity recognition.
### 9\.4\.1 Transfer Learning
In machine learning, **transfer learning** refers to the idea of using the knowledge gained when solving a problem to solve a different one. The new problem can be similar to the original one, but it can also be quite unrelated. For example, a model trained to detect smiles from images could also be used to predict gender (of course with some fine\-tuning). In humans, learning is a lifelong process in which many tasks are interrelated. When faced with a new problem, we tend to find solutions that have worked in the past for similar problems. However, in machine learning, models are most of the time trained from scratch for every new problem. For many tasks, training a model from scratch is very time consuming and requires a lot of effort, especially during the data collection and labeling phase.
The idea of transfer learning dates back to 1991 ([Pratt et al. 1991](#ref-pratt1991)) but with the advent of *deep learning* and in particular, with Convolutional Neural Networks (see chapter [8](deeplearning.html#deeplearning)), it has gained popularity because it has proven to be a valuable tool when solving challenging problems. In 2014 a CNN architecture called VGG16 was proposed by Simonyan and Zisserman ([2014](#ref-simonyan2014)) and won the ILSVR image recognition competition. This CNN was trained with more than \\(1\\) million images to recognize \\(1000\\) categories. It consists of several convolution layers, max pooling operations, and fully connected layers. In total, the network has \\(\\approx 138\\) million parameters and it took some weeks to train.
What if you wanted to add a new category to the \\(1000\\) labels? Or maybe, you only want to focus on a subset of the categories? With transfer learning you can take advantage of a network that has already been trained and adapt it to your particular problem. In the case of *deep learning*, the approach consists of ‘freezing’ the first layers of a network and only retraining (updating) the last layers for the particular problem. During training, the frozen layers’ parameters will not change and the unfrozen ones are updated as usual during the gradient descent procedure. As discussed in chapter [8](deeplearning.html#deeplearning), the first layers can act as feature extractors and be reused. With this approach, you can easily retrain a VGG16 network on an average computer and within a reasonable time. In fact, Keras already provides interfaces to common pre\-trained models that you can reuse.
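As a quick illustration of this workflow, the following sketch loads a pre\-trained VGG16 with the R keras interface, freezes its convolutional base, and adds a new classification head. The input shape and the \\(5\\) output categories are arbitrary choices for this example; this is not part of this chapter’s scripts.

```
library(keras)
# Load VGG16 pre-trained on ImageNet, without its fully connected top.
conv.base <- application_vgg16(weights = "imagenet",
                               include_top = FALSE,
                               input_shape = c(224, 224, 3))
# Freeze the convolutional base so its weights are not updated.
freeze_weights(conv.base)
# Stack a new head on top for a hypothetical 5-class problem.
model <- keras_model_sequential() %>%
  conv.base %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dense(units = 5, activation = 'softmax')
```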
In the following section we will use this idea to build a **user\-adaptive model** for activity recognition using transfer learning.
### 9\.4\.2 A User\-adaptive Model for Activity Recognition
`keras/adaptive_cnn.R`
For this example, we will use the *SMARTPHONE ACTIVITIES* dataset **encoded as images**. In chapter [7](representations.html#representations) (section: Images) I showed how timeseries data can be represented as an image. That section presented an example of how accelerometer data can be represented as an RGB color image where each channel corresponds to one of the acceleration axes (*x*, *y*, *z*). We will use the file `images.txt` that already contains the activities in image format. The procedure of converting the raw data into this format was explained in chapter [7](representations.html#representations) and the corresponding code is in the script `timeseries_to_images.R`. Since the input data are images, we will use a Convolutional Neural Network (see chapter [8](deeplearning.html#deeplearning)).
The main objective will be to build an adaptive model with a small amount of training data from the target user. We will first build a **user\-independent model**. That is, we will select one of the users as the *target user*. We train the user\-independent model with data from the remaining users (excluding the target user). Then, we will apply transfer learning to adapt the model to the target user.
The target user’s data will be split into a test set and an **adaptive set**. The test set will be used to evaluate the performance of the model and the adaptive set will be used to fine\-tune the model. The adaptive set is used to simulate that we have obtained new data from the target user.
The complete code is in the script `keras/adaptive_cnn.R`. First, we start by reading the images file. Each row corresponds to one activity. The last two columns are the `userid` and the `class`. The first \\(300\\) columns correspond to the image pixels. Each image has a size of \\(10 \\times 10 \\times 3\\) (height, width, depth).
```
# Path to smartphone activities in image format.
filepath <- file.path(datasets_path,
"smartphone_activities",
"images.txt")
# Read data.
df <- read.csv(filepath, stringsAsFactors = F)
# Shuffle rows.
set.seed(1234)
df <- df[sample(nrow(df)),]
```
The rows happen to be ordered by user and activity, so we shuffle them to ensure that the model is not biased toward the last users and activities.
Since we will train a CNN using Keras, we need the classes to be in integer format. The following code is used to append a new column `intlabel` to the database. This new column contains the classes as integers. We also create a variable `mapping` to keep track of the mapping between integers and the actual labels. By printing the `mapping` variable we see that for example, the *‘Walking’* label has a corresponding integer value of \\(0\\), *‘Downstairs’* \\(1\\), and so on.
```
## Convert labels to integers starting at 0. ##
# Get the unique labels.
labels <- unique(df$label)
mapping <- 0:(length(labels) - 1)
names(mapping) <- labels
print(mapping)
#> Walking Downstairs Jogging Standing Upstairs Sitting
#> 0 1 2 3 4 5
# Append labels as integers at the end of data frame.
df$intlabel <- mapping[df$label]
```
Now we store the unique users’ ids in the `users` variable. After printing the variable’s values, notice that there are \\(19\\) distinct users in this database. The original database has more users but we only kept those that performed all the activities. Then, we select one of the users to act as the *target user*. I will just select one of them at random (turned out to be user \\(24\\)). Feel free to select another user if you want.
```
# Get the unique user ids.
users <- unique(df$userid)
# Print all user ids.
print(users)
#> [1] 29 20 18 8 32 27 3 36 34 5 7 12 6 21 24 31 13 33 19
# Choose one user at random to be the target user.
targetUser <- sample(users, 1)
```
Next, we split the data into two sets. The first set `trainset` contains the data from all users but **excluding the target user**. We create two variables: `train.y` and `train.x`. The first one has the labels as integers and the second one has the actual image pixels (features). The second set `target.data` contains data only from the target user.
```
# Split into train and target user sets.
# The train set includes data from all users excluding targetUser.
trainset <- df[df$userid != targetUser,]
# Save train labels in a separate variable.
train.y <- trainset$intlabel
# Save train pixels in a separate variable.
train.x <- as.matrix(trainset[,-c(301,302,303)])
# This contains all data from the target user.
target.data <- df[df$userid == targetUser,]
```
Then, we split the target user’s data into \\(50\\%\\) test data and \\(50\\%\\) adaptive data (the exact code is omitted here; a minimal sketch is shown after the following list) so that we end up with the following \\(4\\) variables:
1. `target.adaptive.y` Integer labels for the adaptive data of the target user.
2. `target.adaptive.x` Pixels of the adaptive data of the target user.
3. `target.test.y` Integer labels for the test data of the target user.
4. `target.test.x` Pixels of the test data of the target user.
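A minimal sketch of this \\(50/50\\) split is shown below. It assumes the same column layout as before (pixels in the first \\(300\\) columns, then `userid`, `label`, and `intlabel`); the script’s actual code may differ.

```
# Minimal sketch of the 50/50 adaptive/test split (illustrative).
n <- nrow(target.data)
idxs <- sample(n, size = floor(n / 2))
adaptive <- target.data[idxs,]
test <- target.data[-idxs,]
# Keep labels and pixels in separate variables, as before.
target.adaptive.y <- adaptive$intlabel
target.adaptive.x <- as.matrix(adaptive[,-c(301,302,303)])
target.test.y <- test$intlabel
target.test.x <- as.matrix(test[,-c(301,302,303)])
```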
We also need to normalize the data and reshape it into the actual image format since, in its current form, the pixels are stored as \\(1\\)\-dimensional arrays. We learn the normalization parameters only from the train set and then use the `normalize.reshape()` function (defined in the same script file) to perform the actual normalization and formatting.
```
# Learn min and max values from train set for normalization.
maxv <- max(train.x)
minv <- min(train.x)
# Normalize and reshape. May take some minutes.
train.x <- normalize.reshape(train.x, minv, maxv)
target.adaptive.x <- normalize.reshape(target.adaptive.x, minv, maxv)
target.test.x <- normalize.reshape(target.test.x, minv, maxv)
```
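For reference, here is a minimal sketch of what `normalize.reshape()` might look like, assuming min\-max scaling to \\([0,1]\\) followed by reshaping the flat \\(300\\)\-pixel rows into \\(10 \\times 10 \\times 3\\) arrays; the version in the script may differ.

```
# Minimal sketch of normalize.reshape() (illustrative).
normalize.reshape <- function(x, minv, maxv) {
  # Scale all pixel values to [0, 1] with the train set's parameters.
  x <- (x - minv) / (maxv - minv)
  # Reshape each flat 300-pixel row into a 10 x 10 x 3 image.
  array_reshape(x, dim = c(nrow(x), 10, 10, 3))
}
```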
Let’s inspect the structure of the final datasets.
```
dim(train.x)
#> [1] 6399 10 10 3
dim(target.adaptive.x)
#> [1] 124 10 10 3
dim(target.test.x)
#> [1] 124 10 10 3
```
Here, we see that the train set has \\(6399\\) instances (images). The adaptive and test sets both have \\(124\\) instances.
Now that we are done with the preprocessing, it is time to build the CNN model! This one will be the initial user\-independent model and is trained with all the train data `train.x`, `train.y`.
```
model <- keras_model_sequential()
model %>%
layer_conv_2d(name = "conv1",
filters = 8,
kernel_size = c(2,2),
activation = 'relu',
input_shape = c(10,10,3)) %>%
layer_conv_2d(name = "conv2",
filters = 16,
kernel_size = c(2,2),
activation = 'relu') %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_flatten() %>%
layer_dense(name = "hidden1", units = 32,
activation = 'relu') %>%
layer_dropout(0.25) %>%
layer_dense(units = 6, activation = 'softmax')
```
This CNN has two convolutional layers followed by a max pooling operation, a fully connected layer, and an output layer. One important thing to note is that **we have specified a name for each layer** with the `name` parameter. For example, the first convolution’s name is `conv1`, the second one is `conv2`, and the fully connected layer was named `hidden1`. Those names must be unique because they will be used to select specific layers to freeze and unfreeze.
If we print the model’s summary (Figure [9\.5](multiuser.html#fig:adaptSummary1)) we see that in total it has \\(9,054\\) **trainable parameters** and \\(0\\) **non\-trainable parameters**. This means that all the parameters of the network will be updated during the gradient descent procedure, as usual.
```
# Print summary.
summary(model)
```
FIGURE 9\.5: Summary of initial user\-independent model.
The next code will compile the model and initiate the training phase.
```
# Compile model.
model %>% compile(
loss = 'sparse_categorical_crossentropy',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c("accuracy")
)
# Fit the user-independent model.
history <- model %>% fit(
train.x, train.y,
epochs = 50,
batch_size = 8,
validation_split = 0.15,
verbose = 1,
view_metrics = TRUE
)
plot(history)
```
FIGURE 9\.6: Loss and accuracy plot of the initial user\-independent model.
Note that this time the loss was defined as `loss = 'sparse_categorical_crossentropy'` instead of the usual `loss = 'categorical_crossentropy'`. Here, the `sparse_` suffix was added. You may have noted that in this example we did not one\-hot encode the labels but they were only transformed into integers. By adding the `sparse_` suffix we are telling Keras that our labels are not one\-hot\-encoded but encoded as integers starting at \\(0\\). It will then perform the one\-hot encoding for us. This is a little trick that saved us some time.
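To make the difference concrete, the following snippet shows integer labels next to the one\-hot encoding that Keras builds internally; `to_categorical()` is the Keras function for doing this encoding manually.

```
# Integer-encoded labels (what sparse_categorical_crossentropy expects).
int.labels <- c(0, 1, 2, 1)
# Their one-hot equivalents (what categorical_crossentropy expects).
to_categorical(int.labels, num_classes = 6)
#>      [,1] [,2] [,3] [,4] [,5] [,6]
#> [1,]    1    0    0    0    0    0
#> [2,]    0    1    0    0    0    0
#> [3,]    0    0    1    0    0    0
#> [4,]    0    1    0    0    0    0
```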
Figure [9\.6](multiuser.html#fig:adaptLoss1) shows a plot of the loss and accuracy during training. Then, we save the model so we can load it later. Let’s also estimate the model’s performance on the target user test set.
```
# Save model.
save_model_hdf5(model, "user-independent.hdf5")
# Compute performance (accuracy) on the target user test set.
model %>% evaluate(target.test.x, target.test.y)
#> loss accuracy
#> 1.4837638 0.6048387
```
The overall *accuracy* of this user\-independent model when tested on the target user was \\(60\.4\\%\\) (quite low). Now, we can apply transfer learning and see if the model does better. We will ‘freeze’ the first convolution layer and only update the second convolution layer and the remaining fully connected layers using the target user’s adaptive data. The following code loads the previously trained user\-independent model. Then all the CNN’s weights are frozen using the `freeze_weights()` function. The `from` parameter specifies the first layer (inclusive) from which the parameters are to be frozen. Here, it is set to \\(1\\) so all parameters in the network are ‘frozen’. Then, we use the `unfreeze_weights()` function to specify from which layer (inclusive) the parameters should be unfrozen. In this case, we want to retrain from the second convolutional layer, so we set it to `conv2`, which is how we named this layer earlier.
```
adaptive.model <- load_model_hdf5("user-independent.hdf5")
# Freeze all layers.
freeze_weights(adaptive.model, from = 1)
# Unfreeze layers from conv2.
unfreeze_weights(adaptive.model, from = "conv2")
```
After those changes, we need to compile the model so the modifications take effect.
```
# Compile model. We need to compile after freezing/unfreezing weights.
adaptive.model %>% compile(
loss = 'sparse_categorical_crossentropy',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c("accuracy")
)
summary(adaptive.model)
```
FIGURE 9\.7: Summary of user\-independent model after freezing first convolutional layer.
After printing the summary (Figure [9\.7](multiuser.html#fig:adaptSummary2)), note that the number of **trainable and non\-trainable parameters** has changed. Now, the non\-trainable parameters are \\(104\\) (before they were \\(0\\)). These \\(104\\) parameters correspond to the first convolutional layer but this time they will not be updated during the gradient descent training phase.
The following code will retrain the model using the adaptive data but keeping the first convolutional layer fixed.
```
# Update model with adaptive data.
history <- adaptive.model %>% fit(
target.adaptive.x, target.adaptive.y,
epochs = 50,
batch_size = 8,
validation_split = 0,
verbose = 1,
view_metrics = TRUE
)
```
Note that this time the `validation_split` was set to \\(0\\). This is because the target user data set is very small, so there is not enough data to use as a validation set. One possible way to overcome this is to leave a percentage of users out when building the train set for the user\-independent model. Then, use those left\-out users to decide which layers are the most appropriate to keep frozen. Once you are happy with the results, evaluate the model on the target user.
```
# Compute performance (accuracy) on the target user test set.
adaptive.model %>% evaluate(target.test.x, target.test.y)
#> loss accuracy
#> 0.5173104 0.8548387
```
If we evaluate the adaptive model’s performance on the target user’s test set, the accuracy is \\(85\.4\\%\\), a considerable improvement of \\(\\approx 25\\%\\) over the user\-independent model.
At this point, you may be wondering whether this accuracy increase was due to the fact that the model was trained for an additional \\(50\\) epochs. To validate this, we can re\-train the initial user\-independent model for \\(50\\) more epochs.
```
retrained_model <- load_model_hdf5("user-independent.hdf5")
# Fit the user-independent model for 50 more epochs.
history <- retrained_model %>% fit(
train.x, train.y,
epochs = 50,
batch_size = 8,
validation_split = 0.15,
verbose = 1,
view_metrics = TRUE
)
# Compute performance (accuracy) on the target user test set.
retrained_model %>% evaluate(target.test.x, target.test.y)
#> loss accuracy
#> 1.3033305 0.7096774
```
After re\-training the user\-independent model for \\(50\\) more epochs, its *accuracy* increased to \\(70\.9\\%\\). The adaptive model, on the other hand, produced a much better result (\\(85\.4\\%\\)) with only \\(124\\) instances, compared to the \\(\\approx 5440\\) instances used by the user\-independent model (the \\(6399\\) instances minus the \\(15\\%\\) used as the validation set). These results highlight one of the main advantages of transfer learning: a reduction in the amount of training data needed.
9\.5 Summary
------------
Many real\-life scenarios involve multi\-user settings. That is, the system heavily depends on the specific behavior of a given target user. This chapter covered different types of models that can be used to evaluate the performance of a system in such a scenario.
* A **multi\-user setting** is one in which the system’s results depend heavily on the target user.
* Inter\- and intra\-user variance refer to the differences between users and within the same user, respectively.
* **Mixed models** are trained without considering unique users (user ids) information.
* **User\-independent models** are trained without including data from the *target user*.
* **User\-dependent models** are trained only with data from the *target user*.
* **User\-adaptive models** can be adapted to a particular *target user* as more data is available.
* **Transfer learning** is a method that can be used to adapt a model to a particular user without requiring big quantities of data.
Chapter 10 Detecting Abnormal Behaviors
=======================================
Abnormal data points, also called *outliers*, are instances that occur rarely. Some examples include illegal bank transactions, defective products, and natural disasters. Detecting abnormal behaviors is an important topic in the fields of health care, ecology, economy, psychology, and so on. For example, abnormal behaviors in wildlife creatures can be an indication of abrupt changes in the environment, and rare behavioral patterns in a person may be an indication of health deterioration.
Anomaly detection can be formulated as a binary classification task and solved by training a classifier to distinguish between *normal* and *abnormal* instances. The problem with this approach is that anomalous points are rare and there may not be enough to train a classifier. This can also lead to class imbalance problems. Furthermore, the models should be able to detect abnormal points even if they are very different from the training data. To address those issues, several anomaly detection methods have been developed over the years and this chapter introduces two of them: Isolation Forests and autoencoders.
This chapter starts by explaining how Isolation Forests work and then, an example of how to apply them for abnormal trajectory detection is presented. Next, a method (ROC curve) to evaluate the performance of such models is described. Finally, another method called autoencoder that can be used for anomaly detection is explained and applied to the abnormal trajectory detection problem.
10\.1 Isolation Forests
-----------------------
As its name implies, an *Isolation Forest* identifies anomalous points by explicitly ‘isolating’ them. In this context, *isolation* means separating an instance from the others. This approach differs from many other anomaly detection algorithms, which first build a profile of normal instances and mark an instance as an anomaly if it does not conform to the normal profile. Isolation Forests were proposed by Liu, Ting, and Zhou ([2008](#ref-Liu2008isolation)) and the method is based on building many trees (similar to Random Forests, chapter [3](ensemble.html#ensemble)). This method has several advantages including its efficiency in terms of time and memory usage. Another advantage is that at training time it does not need examples of the abnormal cases, but if available, they can be incorporated as well. Since this method is based on trees, another nice thing about it is that there is no need to scale the features.
This method is based on the observation that anomalies are ‘few and different’ which makes them easier to isolate. It is based on building an ensemble of trees where each tree is called an Isolation Tree. Each Isolation Tree partitions the features until every instance is isolated (it’s at a leaf node). Since anomalies are easier to isolate they will be closer to the root of the tree. An instance is marked as an anomaly if its average path length to the root across all Isolation Trees is short.
A tree is generated recursively by randomly selecting a feature and then selecting a random partition between the maximum and minimum value of that feature. Each partition corresponds to a split in a tree. The procedure terminates when all instances are isolated. The number of partitions that were required to isolate a point corresponds to the path length of that point to the root of the tree.
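The following minimal sketch (not part of the book’s scripts) illustrates this recursive procedure: it measures how many random partitions are needed to isolate a given query point.

```
# Minimal sketch: path length of a query point under random
# partitioning (illustrative; not from the book's scripts).
path.length <- function(data, query, depth = 0) {
  # The point is isolated when its partition has a single instance.
  if (nrow(data) <= 1) return(depth)
  f <- sample(ncol(data), 1)            # Pick a random feature.
  lo <- min(data[, f]); hi <- max(data[, f])
  if (lo == hi) return(depth)           # No further split possible.
  split <- runif(1, lo, hi)             # Random split position.
  # Recurse into the partition side containing the query point.
  if (query[f] < split) {
    path.length(data[data[, f] < split, , drop = FALSE],
                query, depth + 1)
  } else {
    path.length(data[data[, f] >= split, , drop = FALSE],
                query, depth + 1)
  }
}
```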
Figure [10\.1](abnormalbehaviors.html#fig:partitionExamle) shows a set of points with only one feature (x axis). One of the anomalous points is highlighted as a red triangle. One of the normal points is marked as a blue solid circle.
FIGURE 10\.1: Example partitioning of a normal and an anomalous point.
To isolate the anomalous instance, we can randomly and recursively choose partition positions (vertical lines in Figure [10\.1](abnormalbehaviors.html#fig:partitionExamle)) until the instance is encapsulated in its own partition. In this example, it took \\(4\\) partitions (red lines) to isolate the anomalous instance, thus, the path length of this instance to the root of the tree is \\(4\\). The partitions were located at \\(0\.51, 1\.6, 1\.7,\\) and \\(1\.8\\). The code to reproduce this example is in the script `example_isolate_point.R`. If we look at the highlighted normal instance we can see that it took \\(8\\) partitions to isolate it.
Instead of generating a single tree, we can generate an ensemble of \\(n\\) trees and average their path lengths. Figure [10\.2](abnormalbehaviors.html#fig:anomalyIts) shows the average path length for the same previous normal and anomalous instances as the number of trees in the ensemble is increased.
FIGURE 10\.2: Average path lengths for increasing number of trees.
After \\(200\\) trees, the average path length of the normal instance starts to converge to \\(8\.7\\) and the path length of the anomalous one converges to \\(3\.1\\). This shows that anomalies have shorter path lengths on average.
In practice, an Isolation Tree is recursively grown until a predefined maximum height is reached (more on this later), or when all instances are isolated, or all instances in a partition have the same values. Once all Isolation Trees in the ensemble (Isolation Forest) are generated, the instances can be sorted according to their average path length to the root. Then, instances with the shorter path lengths can be marked as anomalies.
Instead of directly using the average path lengths for deciding whether or not an instance is an anomaly, the authors of the method proposed an anomaly score that is between \\(0\\) and \\(1\\). The reason for this is that a normalized score is easier to interpret. The closer the anomaly score is to \\(1\\), the more likely the instance is an anomaly. Instances with anomaly scores \\(\<\< 0\.5\\) can be marked as normal. The anomaly score for an instance \\(x\\) is computed with the formula:
\\\[\\begin{equation}
s(x) \= 2^{\-\\frac{E(h(x))}{c(n)}}
\\tag{10\.1}
\\end{equation}\\]
where \\(h(x)\\) is the path length of \\(x\\) to the root of a given tree and \\(E(h(x))\\) is the average of the path lengths of \\(x\\) across all trees in the ensemble. \\(n\\) is the number of instances in the train set. \\(c(n)\\) is the average path length of an unsuccessful search in a binary search tree:
\\\[\\begin{equation}
c(n) \= 2H(n\-1\) \- (2(n\-1\)/n)
\\tag{10\.2}
\\end{equation}\\]
where \\(H(x)\\) denotes the harmonic number and is estimated by \\(ln(x) \+ 0\.5772156649\\) (Euler\-Mascheroni constant).
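Translating equations (10\.1) and (10\.2) into code is straightforward. A minimal sketch follows; the path lengths and the \\(n\=256\\) sub\-sample size are used only for illustration.

```
# Equation (10.2): average path length of an unsuccessful search
# in a binary search tree. H(x) is approximated with the
# Euler-Mascheroni constant.
c.factor <- function(n) {
  H <- function(x) log(x) + 0.5772156649
  2 * H(n - 1) - (2 * (n - 1) / n)
}
# Equation (10.1): anomaly score. avg.path is E(h(x)) and n is the
# number of instances used to build each tree.
anomaly.score <- function(avg.path, n) {
  2^(-avg.path / c.factor(n))
}
anomaly.score(3.1, 256)  # Short path -> high score (approx. 0.81).
anomaly.score(8.7, 256)  # Long path -> lower score (approx. 0.56).
```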
A practical ‘trick’ that Isolation Forests use is *sub\-sampling without replacement*. That is, instead of using the entire training set, an independent random sample of size \\(p\\) is used to build each tree. The sub\-sampling reduces the *swamping* and *masking* effects. *Swamping* occurs when normal instances are too close to anomalies and thus, marked as anomalies. *Masking* refers to the presence of too many anomalies close together. This increases the number of partitions needed to isolate each anomaly point.
Figure [10\.3](abnormalbehaviors.html#fig:samplingInstances) (left) shows a set of \\(4000\\) normal and \\(100\\) anomalous instances clustered in the same region. The right plot shows how the data looks after sampling \\(256\\) instances from the total. Here, we can see that the anomalous points are more clearly separated from the normal ones.
FIGURE 10\.3: Dataset before and after sampling.
Previously, I mentioned that trees are grown until a predefined maximum height is reached. The authors of the method suggest setting this maximum height to \\(l\=ceiling(log\_2(p))\\), which approximates the average tree height. Remember that \\(p\\) is the sampling size. Since anomalous instances are closer to the root, we can expect normal instances to be in the lower sections of the tree; thus, there is no need to grow the entire tree and we can limit its height.
The only two parameters of the algorithm are the number of trees and the sampling size \\(p\\). The authors recommend a default sampling size of \\(256\\) and \\(100\\) trees.
At training time, the ensemble of trees is generated using the train data. It is not necessary that the train data contain examples of anomalous instances. This is advantageous because in many cases the anomalous instances are scarce so we can reserve them for testing. At test time, instances in the test set are passed through all trees and an anomaly score is computed for each. Instances with an anomaly score greater than some threshold are marked as anomalies. The optimal threshold can be estimated using an Area Under the Curve analysis which will be covered in the following sections.
The `solitude` R package ([Srikanth 2020](#ref-solitude)) provides convenient functions to train Isolation Forests and make predictions. In the following section we will use it to detect abnormal fish behaviors.
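Before moving on, here is a minimal sketch of what the `solitude` workflow looks like. The data frames `train.data` and `test.data` are placeholders, and the exact interface may vary between package versions, so check the documentation of your installed version.

```
# Minimal sketch of training an Isolation Forest with solitude
# (train.data and test.data are hypothetical feature data frames).
library(solitude)
# The authors' recommended defaults: 256 sub-samples and 100 trees.
iso <- isolationForest$new(sample_size = 256, num_trees = 100)
iso$fit(train.data)
# Predictions include an anomaly score per instance.
preds <- iso$predict(test.data)
head(preds$anomaly_score)
```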
10\.2 Detecting Abnormal Fish Behaviors
---------------------------------------
`visualize_fish.R` `extract_features.R` `isolation_forest_fish.R`
In marine biology, the analysis of fish behavior is essential since it can be used to detect environmental changes produced by pollution, climate change, etc. Fish behaviors can be characterized by their trajectories, that is, how they move within the environment. A **trajectory** is the path that an object follows through space and time.
Capturing fish trajectories is a challenging task, especially in unconstrained underwater conditions. Thankfully, the Fish4Knowledge[28](#fn28) project has developed fish analysis tools and methods to ease the task. They have processed enormous amounts of video streaming data and have extracted fish information including trajectories. They have made the fish trajectories dataset publicly available[29](#fn29) ([Beyan and Fisher 2013](#ref-Beyan2013)).
The *FISH TRAJECTORIES* dataset contains \\(3102\\) trajectories belonging to the *Dascyllus reticulatus* fish (see Figure [10\.4](abnormalbehaviors.html#fig:dascyllus)) observed in the Taiwanese coral reef. Each trajectory is labeled as *‘normal’* or *‘abnormal’*. The trajectories were extracted from underwater video and stored as coordinates over time.
FIGURE 10\.4: Example of Dascyllus reticulatus fish. (Author: Rickard Zerpe. Source: wikimedia.org (CC BY 2\.0\) \[[https://creativecommons.org/licenses/by/2\.0/legalcode](https://creativecommons.org/licenses/by/2.0/legalcode)]).
Our main task will be to detect the **abnormal** trajectories using an Isolation Forest but before that, we are going to explore, visualize, and pre\-process the dataset.
### 10\.2\.1 Exploring and Visualizing Trajectories
The data is stored in a `.mat` file, so we are going to use the package `R.matlab` ([Bengtsson 2018](#ref-rmatlab)) to import the data into an array. The following code can be found in the script `visualize_fish.R`.
```
library(R.matlab)
# Read data.
df <- readMat("../fishDetections_total3102.mat")$fish.detections
# Print data frame dimensions.
dim(df)
#> [1] 7 1 3102
```
We use the `dim()` function to print the dimensions of the array. From the output, we can see that there are \\(3102\\) individual trajectories and each trajectory has \\(7\\) attributes. Let’s explore the contents of a single trajectory. The following code snippet extracts the first trajectory and prints its structure.
```
# Read one of the trajectories.
trj <- df[,,1]
# Inspect its structure.
str(trj)
#> List of 7
#> $ frame.number : num [1:37, 1] 826 827 828 829 833 834 835 836 ...
#> $ bounding.box.x : num [1:37, 1] 167 165 162 159 125 124 126 126 ...
#> $ bounding.box.y : num [1:37, 1] 67 65 65 66 58 61 65 71 71 62 ...
#> $ bounding.box.w : num [1:37, 1] 40 37 39 34 39 39 38 38 37 31 ...
#> $ bounding.box.h : num [1:37, 1] 38 40 40 38 35 34 34 33 34 35 ...
#> $ class : num [1, 1] 1
#> $ classDescription: chr [1, 1] "normal"
```
A trajectory is composed of \\(7\\) pieces of information:
1. frame.number: Frame number in original video.
2. bounding.box.x: Bounding box leftmost edge.
3. bounding.box.y: Bounding box topmost edge.
4. bounding.box.w: Bounding box width.
5. bounding.box.h: Bounding box height.
6. class: 1\=normal, 2\=rare.
7. classDescription: ‘normal’ or ‘abnormal’.
The bounding box represents the square region where the fish was detected in the video footage. Figure [10\.5](abnormalbehaviors.html#fig:fishBox) shows an example of a fish and its bounding box (not from the original dataset; for illustration purposes only). Also note that the dataset does not contain the images but only the bounding boxes’ coordinates.
FIGURE 10\.5: Fish bounding box (in red). (Author: Nick Hobgood. Source: wikimedia.org (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
Each trajectory has a different number of video frames. We can get the frame count by inspecting the length of one of the coordinates.
```
# Count how many frames this trajectory has.
length(trj$bounding.box.x)
#> [1] 37
```
The first trajectory has \\(37\\) frames but on average, trajectories have \\(10\\) frames. For our analyses, we only include trajectories with a minimum of \\(10\\) frames since it may be difficult to extract patterns from shorter paths (a short sketch of this filtering step is shown below). Furthermore, we are not going to use the bounding boxes themselves but the center point of the box.
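The following sketch, assuming the `df` array loaded above, counts how many trajectories meet the minimum\-length requirement; it is illustrative and not part of the book’s scripts.

```
# Minimal sketch: keep only trajectories with at least 10 frames.
# Assumes df is the 7 x 1 x 3102 array loaded with readMat().
keep <- sapply(1:dim(df)[3], function(i) {
  length(df[,,i]$bounding.box.x) >= 10
})
sum(keep) # Number of trajectories retained.
```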
At this point, it would be a good idea to plot the data. To do so, I will use the `anipaths` package ([Scharf 2020](#ref-anipaths)) which has a function to animate trajectories! I will not cover the details here on how to use the package but the complete code is in the same script `visualize_fish.R`. The output result is in the form of an ‘index.html’ file that contains the interactive animation. For simplicity, I only selected \\(50\\) normal and \\(10\\) abnormal trajectories to be plotted. Figure [10\.6](abnormalbehaviors.html#fig:animTrajectories) shows the resulting plot. The plot also includes some controls to play, pause, change the speed of the animation, etc.
FIGURE 10\.6: Example of animated trajectories generated with the anipaths package.
The *‘normal’* and *‘abnormal’* labels were determined by visual inspection by experts. The abnormal cases include events such as predator avoidance and aggressive movements (due to another fish or because of being frightened).
### 10\.2\.2 Preprocessing and Feature Extraction
Now that we have explored and visualized the data, we can begin with the preprocessing and feature extraction. As previously mentioned, the database contains bounding boxes and we want to use the center of the boxes to define the trajectories. The following code snippet (from `extract_features.R`) shows how the center of a box can be computed.
```
# Compute center of bounding box.
x.coord <- trj$bounding.box.x + (trj$bounding.box.w / 2)
y.coord <- trj$bounding.box.y + (trj$bounding.box.h / 2)
# Make times start at 0.
times <- trj$frame.number - trj$frame.number[1]
tmp <- data.frame(x.coord, y.coord, time=times)
```
The *x* and *y* coordinates of the center points from a given trajectory `trj` for all time frames will be stored in `x.coord` and `y.coord`. The next line ‘shifts’ the frame numbers so they all start at \\(0\\) (to simplify preprocessing). Finally, we store the coordinates and frame times in a temporary data frame for further preprocessing.
At this point we will use the `trajr` package ([McLean and Volponi 2018](#ref-trajr)) which includes functions to plot and perform operations on trajectories. The `TrajFromCoords()` function can be used to create a trajectory object from a data frame. Note that the data frame needs to have a predefined order. That is why we first stored the x coordinates, then the y coordinates, and finally the time in the `tmp` data frame.
```
tmp.trj <- TrajFromCoords(tmp, fps = 1)
```
The temporary data frame is passed as the first argument, and the frames per second is set to \\(1\\). Now we plot the `tmp.trj` object.
```
plot(tmp.trj, lwd = 1, xlab="x", ylab="y")
points(tmp.trj, draw.start.pt = T, pch = 1, col = "blue", cex = 1.2)
legend("topright", c("Starting point"), pch = c(16), col=c("black"))
```
FIGURE 10\.7: Plot of first trajectory.
From Figure [10\.7](abnormalbehaviors.html#fig:trajPlot) we can see that there are big time gaps between some points. This is because some time frames are missing. If we print the first rows of the trajectory and look at the time column, we see that, for example, time steps \\(4, 5,\\) and \\(6\\) are missing.
```
head(tmp.trj)
#> x y time displacementTime polar displacement
#> 1 187.0 86.0 0 0 187.0+86.0i 0.0+0.0i
#> 2 183.5 85.0 1 1 183.5+85.0i -3.5-1.0i
#> 3 181.5 85.0 2 2 181.5+85.0i -2.0+0.0i
#> 4 176.0 85.0 3 3 176.0+85.0i -5.5+0.0i
#> 5 144.5 75.5 7 7 144.5+75.5i -31.5-9.5i
```
Before continuing, it would be a good idea to try to fill those gaps. The function `TrajResampleTime()` does exactly that by applying linear interpolation along the trajectory.
```
resampled <- TrajResampleTime(tmp.trj, 1)
```
If we plot the resampled trajectory (Figure [10\.8](abnormalbehaviors.html#fig:trajResampledPlot)) we will see how the missing points were filled.
FIGURE 10\.8: The original trajectory (circles) and after filling the gaps with linear interpolation (crosses).
Now we are almost ready to start detecting anomalies. Remember that Isolation Trees work with features by making partitions. Thus, we need to convert the trajectories into a feature vector representation. To do that, we will extract some features from the trajectories based on *speed* and *acceleration*. The `TrajDerivatives()` function computes the speed and linear acceleration between pairs of trajectory points.
```
derivs <- TrajDerivatives(resampled)
# Print first speeds.
head(derivs$speed)
#> [1] 3.640055 2.000000 5.500000 8.225342 8.225342 8.225342
# Print first linear accelerations.
head(derivs$acceleration)
#> [1] -1.640055 3.500000 2.725342 0.000000 0.000000 0.000000
```
The number of resulting speeds and accelerations is \\(n\-1\\) and \\(n\-2\\), respectively, where \\(n\\) is the number of time steps in the trajectory. When training an Isolation Forest, all feature vectors need to be of the same length; however, the trajectories in the database have different numbers of time steps. In order to have fixed\-length feature vectors, we will compute the *mean*, *standard deviation*, *min*, and *max* of both the speeds and the accelerations. Thus, we will end up with \\(8\\) features per trajectory. Finally, we assemble the features into a data frame along with the trajectory id and the label (*‘normal’* or *‘abnormal’*).
```
f.meanSpeed <- mean(derivs$speed)
f.sdSpeed <- sd(derivs$speed)
f.minSpeed <- min(derivs$speed)
f.maxSpeed <- max(derivs$speed)
f.meanAcc <- mean(derivs$acceleration)
f.sdAcc <- sd(derivs$acceleration)
f.minAcc <- min(derivs$acceleration)
f.maxAcc <- max(derivs$acceleration)
# Assemble the feature vector; i is the index of the current
# trajectory (set in the enclosing loop of extract_features.R).
features <- data.frame(id=paste0("id",i), label=trj$classDescription[1],
f.meanSpeed, f.sdSpeed, f.minSpeed, f.maxSpeed,
f.meanAcc, f.sdAcc, f.minAcc, f.maxAcc)
```
We do the feature extraction for each trajectory and save the results as a .csv file, *fishFeatures.csv*, which is already included with the dataset. Let’s read and print the first rows of the dataset.
```
# Read dataset.
dataset <- read.csv("fishFeatures.csv", stringsAsFactors = T)
# Print first rows of the dataset.
head(dataset)
#> id label f.meanSpeed f.sdSpeed f.minSpeed f.maxSpeed f.meanAcc
#> 1 id1 normal 2.623236 2.228456 0.5000000 8.225342 -0.05366002
#> 2 id2 normal 5.984859 3.820270 1.4142136 15.101738 -0.03870468
#> 3 id3 normal 16.608716 14.502042 0.7071068 46.424670 -1.00019597
#> 4 id5 normal 4.808608 4.137387 0.5000000 17.204651 -0.28181520
#> 5 id6 normal 17.785747 9.926729 3.3541020 44.240818 -0.53753380
#> 6 id7 normal 9.848422 6.026229 0.0000000 33.324165 -0.10555561
#> f.sdAcc f.minAcc f.maxAcc
#> 1 1.839475 -5.532760 3.500000
#> 2 2.660073 -7.273932 7.058594
#> 3 12.890386 -24.320298 30.714624
#> 4 5.228209 -12.204651 15.623512
#> 5 11.272472 -22.178067 21.768613
#> 6 6.692688 -31.262613 11.683561
```
Each row represents one trajectory. We can use the `table()` function to get the counts for *‘normal’* and *‘abnormal’* cases.
```
table(dataset$label)
#> abnormal normal
#> 54 1093
```
After discarding trajectories with fewer than \\(10\\) points, we ended up with \\(1093\\) *‘normal’* instances and \\(54\\) *‘abnormal’* instances.
### 10\.2\.3 Training the Model
To get a preliminary idea of how difficult it is to separate the two classes, we can use an MDS plot (see chapter [4](edavis.html#edavis)) to project the \\(8\\) features onto a \\(2\\)\-dimensional plane.
FIGURE 10\.9: MDS of the fish trajectories.
In Figure [10\.9](abnormalbehaviors.html#fig:mdsFishes) we see that several *abnormal* points lie on the right\-hand side, but many others occupy the same space as the *normal* points, so it’s time to train an Isolation Forest and see to what extent it can detect the abnormal cases!
One of the nice things about Isolation Forests is that they do not need examples of the abnormal cases during training. If we want, we can also include the abnormal cases, but since we don’t have many, we will reserve them for the test set. The script `isolation_forest_fish.R` contains the code to train the model. We will split the data into a train set (\\(80\\%\\)) consisting only of normal instances and a test set with both normal and abnormal instances. The train set is stored in the data frame `train.normal` and the test set in `test.all`. Since the method is based on trees, we don’t need to normalize the data.
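The split itself is performed in `isolation_forest_fish.R`; the following is a minimal sketch of one way to do it (the seed value is an assumption for reproducibility, not necessarily the one used in the script).
```
# Separate normal and abnormal instances.
normal <- dataset[dataset$label == "normal",]
abnormal <- dataset[dataset$label == "abnormal",]
# Sample 80% of the normal instances for the train set.
set.seed(1234)
idx <- sample(nrow(normal), size = floor(nrow(normal) * 0.8))
train.normal <- normal[idx,]
# The test set has the remaining normal instances
# followed by all the abnormal ones.
test.normal <- normal[-idx,]
test.abnormal <- abnormal
test.all <- rbind(test.normal, test.abnormal)
```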
First, we need to define the parameters of the Isolation Forest. We can do so by passing the values at creation time.
```
m.iforest <- isolationForest$new(sample_size = 256,
num_trees = 100,
nproc = 1)
```
As suggested in the original paper ([Liu, Ting, and Zhou 2008](#ref-Liu2008isolation)), the sampling size is set to \\(256\\) and the number of trees to \\(100\\). The `nproc` parameter specifies the number of CPU cores to use. I set it to \\(1\\) to ensure we get reproducible results.
Now we can train the model with the train set. The first two columns are removed since they correspond to the trajectory ids and the class label.
```
# Fit the model.
m.iforest$fit(train.normal[,-c(1:2)])
```
Once the model is trained, we can start making predictions. Let’s start by making predictions on the **train set** (later we’ll do it on the test set). We know that the train set only consists of normal instances.
```
# Predict anomaly scores on train set.
train.scores <- m.iforest$predict(train.normal[,-c(1:2)])
```
The returned value of the `predict()` function is a data frame containing the average tree depth and the anomaly score for each instance.
```
# Print first rows of predictions.
head(train.scores)
#> id average_depth anomaly_score
#> 1: 1 7.97 0.5831917
#> 2: 2 8.00 0.5820092
#> 3: 3 7.98 0.5827973
#> 4: 4 7.80 0.5899383
#> 5: 5 7.77 0.5911370
#> 6: 6 7.90 0.5859603
```
We know that the train set only has normal instances; thus, we can find the highest anomaly score and use it to set a threshold for detecting the abnormal cases. The following code prints the highest anomaly scores.
```
# Sort and display instances with the highest anomaly scores.
head(train.scores[order(anomaly_score, decreasing = TRUE)])
#> id average_depth anomaly_score
#> 1: 75 4.05 0.7603188
#> 2: 618 4.45 0.7400179
#> 3: 147 4.67 0.7290844
#> 4: 661 4.75 0.7251487
#> 5: 756 4.80 0.7226998
#> 6: 54 5.54 0.6874070
```
The highest anomaly score for a normal instance is \\(0\.7603\\), so we will assume that abnormal points have higher anomaly scores. Armed with this information, we set the threshold to \\(0\.7603\\); instances with a higher anomaly score will be considered abnormal.
```
threshold <- 0.7603
```
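Equivalently, the threshold can be derived programmatically instead of hard\-coding the (rounded) value:
```
# Take the maximum anomaly score observed in the train set.
threshold <- max(train.scores$anomaly_score)
```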
Now, we predict the anomaly scores on the **test set** and if the score is \\(\> threshold\\) then we classify that point as abnormal. The `predicted.labels` array will contain \\(0s\\) and \\(1s\\). A \\(1\\) means that the instance is abnormal.
```
# Predict anomaly scores on test set.
test.scores <- m.iforest$predict(test.all[,-c(1:2)])
# Predict labels based on threshold.
predicted.labels <- as.integer((test.scores$anomaly_score > threshold))
```
Now that we have the predicted labels we can compute some performance metrics.
```
# All abnormal cases are at the end so we can
# compute the ground truth as follows.
gt.all <- c(rep(0,nrow(test.normal)), rep(1, nrow(test.abnormal)))
levels <- c("0","1")
# Compute performance metrics.
cm <- confusionMatrix(factor(predicted.labels, levels = levels),
factor(gt.all, levels = levels),
positive = "1")
# Print confusion matrix.
cm$table
#> Reference
#> Prediction 0 1
#> 0 218 37
#> 1 0 17
# Print sensitivity
cm$byClass["Sensitivity"]
#> Sensitivity
#> 0.3148148
```
From the confusion matrix we see that \\(17\\) out of \\(54\\) abnormal instances were detected. On the other hand, all the normal instances (\\(218\\)) were correctly identified as such. The sensitivity (also known as recall) of the abnormal class was \\(17/54\=0\.314\\) which seems very low. We are failing to detect several of the abnormal cases.
One thing we can do is to decrease the threshold at the expense of increasing the false positives, that is, classifying normal instances as abnormal. If we set `threshold <- 0.6` we get the following confusion matrix.
```
#> Reference
#> Prediction 0 1
#> 0 206 8
#> 1 12 46
```
This time we were able to identify \\(46\\) of the abnormal cases! This gives a sensitivity of \\(46/54\=0\.85\\), which is much better than before. However, nothing comes for free. If we look at the normal class, this time we had \\(12\\) misclassified points (false positives).
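The matrix above can be reproduced by re\-running the prediction step with the new threshold:
```
# Re-classify the test set with the lower threshold.
predicted.labels <- as.integer(test.scores$anomaly_score > 0.6)
cm <- confusionMatrix(factor(predicted.labels, levels = levels),
                      factor(gt.all, levels = levels),
                      positive = "1")
cm$table
```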
A good way of finding the best threshold is to set apart a validation set from which the optimal threshold can be estimated. However, this is not always feasible due to the limited amount of abnormal points.
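To make the idea concrete, the following sketch (purely illustrative; `valid.scores` and `valid.labels` are assumed to come from such a validation set) scans candidate thresholds and keeps the one with the highest F1 score.
```
# Scan candidate thresholds and keep the one that
# maximizes the F1 score on the validation set.
candidate.ths <- seq(0.5, 0.9, by = 0.01)
f1.scores <- sapply(candidate.ths, function(th) {
  pred <- as.integer(valid.scores > th)
  tp <- sum(pred == 1 & valid.labels == 1)
  precision <- tp / max(sum(pred == 1), 1)
  recall <- tp / sum(valid.labels == 1)
  if (precision + recall == 0) 0
  else 2 * precision * recall / (precision + recall)
})
best.threshold <- candidate.ths[which.max(f1.scores)]
```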
In this example we manually tried different thresholds and evaluated their impact on the final results. In the following section I will show you a method that allows you to estimate the performance of a model when considering many possible thresholds at once!
### 10\.2\.4 ROC Curve and AUC
The **receiver operating characteristic curve**, also known as the **ROC curve**, is a plot that depicts how the sensitivity and the false positive rate (FPR) behave as the detection threshold varies. The sensitivity/recall can be calculated by dividing the true positives by the total number of positives, \\(TP/P\\) (see chapter [2](classification.html#classification)). The \\(FPR\=FP/N\\), where \\(FP\\) is the number of false positives and \\(N\\) is the total number of negative examples (the normal trajectories). The FPR is also known as the probability of false alarm. Ideally, we want a model that has a high sensitivity and a low FPR.
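To make these definitions concrete, here is a small helper (illustrative, not from the book’s scripts) that computes both quantities for a given threshold:
```
# Compute sensitivity and FPR at a given threshold.
# labels contains the ground truth: 1=abnormal, 0=normal.
sens.fpr <- function(scores, labels, threshold) {
  pred <- as.integer(scores > threshold)
  tp <- sum(pred == 1 & labels == 1) # True positives.
  fp <- sum(pred == 1 & labels == 0) # False positives.
  c(sensitivity = tp / sum(labels == 1),
    fpr = fp / sum(labels == 0))
}
# Example: sens.fpr(test.scores$anomaly_score, gt.all, 0.7603)
```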
In R we can use the `PRROC` package ([Grau, Grosse, and Keilwagen 2015](#ref-prroc)) to plot ROC curves. The ROC curve of the Isolation Forest results for the abnormal fish trajectory detection can be plotted using the following code:
```
library(PRROC)
roc_obj <- roc.curve(scores.class0 = test.scores$anomaly_score,
weights.class0 = gt.all,
curve = TRUE,
rand.compute = TRUE)
# Set rand.plot = TRUE to also plot the random model's curve.
plot(roc_obj, rand.plot = TRUE)
```
The argument `scores.class0` specifies the scores returned by the Isolation Forest and `weights.class0` contains the true labels: \\(1\\) for the positive class (abnormal) and \\(0\\) for the negative class (normal). We set `curve=TRUE` so the method returns a table with thresholds and their respective sensitivity and FPR. Setting `rand.compute=TRUE` instructs the function to also compute the curve of a random model, that is, one that predicts scores at random. Figure [10\.10](abnormalbehaviors.html#fig:rocCurve) shows the ROC plot.
FIGURE 10\.10: ROC curve and AUC. The dashed line represents a random model.
Here we can see how the sensitivity and FPR increase as the threshold decreases. In the best case, we want a sensitivity of \\(1\\) and a FPR of \\(0\\). This ideal point is located at the top left corner; our model does not reach that level of performance, but it gets reasonably close. The dashed line on the diagonal is the curve of a random model. We can also access the thresholds table:
```
# Print first values of the curve table.
roc_obj$curve
#> [,1] [,2] [,3]
#> [1,] 0 0.00000000 0.8015213
#> [2,] 0 0.01851852 0.7977342
#> [3,] 0 0.03703704 0.7939650
#> [4,] 0 0.05555556 0.7875449
#> [5,] 0 0.09259259 0.7864799
#> .....
```
The first column is the FPR, the second column is the sensitivity, and the last column is the threshold. Choosing the best threshold is not straightforward and will depend on the compromise we want to have between sensitivity and FPR.
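The table can also be used programmatically. For example (an illustrative criterion, not one from the book), we could pick the operating point with the highest sensitivity among those whose FPR does not exceed \\(5\\%\\):
```
# Keep operating points with FPR <= 5% and take the one
# with the highest sensitivity.
ops <- roc_obj$curve
low.fpr <- ops[ops[, 1] <= 0.05, , drop = FALSE]
# Columns: FPR, sensitivity, threshold.
low.fpr[which.max(low.fpr[, 2]), ]
```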
Note that the plot also prints an \\(AUC\=0\.963\\). This is known as the **Area Under the Curve (AUC)** and as the name implies, it is the area under the ROC curve. A perfect model will have an AUC of \\(1\.0\\). Our model achieved an AUC of \\(0\.963\\) which is pretty good. A random model will have an AUC around \\(0\.5\\). A value below \\(0\.5\\) means that the model is performing worse than random. The AUC is a performance metric that measures the quality of a model regardless of the selected threshold and is typically presented in addition to accuracy, recall, precision, etc.
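The numeric value is also stored in the object returned by `roc.curve()`, so it can be accessed programmatically:
```
# Retrieve the area under the ROC curve.
roc_obj$auc
```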
If someone tells you something negative about yourself (e.g., that you don’t play football well), assume that they have an AUC below \\(0\.5\\) (worse than random). At least, that’s what I do to cope with those situations. (If you invert the predictions of a binary classifier that does worse than random you will get a classifier that is better than random).
10\.3 Autoencoders
------------------
In its simplest form, an autoencoder is a neural network whose output layer has the same shape as the input layer. If you are not familiar with artificial neural networks, you can take a look at chapter [8](deeplearning.html#deeplearning). An autoencoder will try to learn how to generate an output that is as similar as possible to the provided input. Figure [10\.11](abnormalbehaviors.html#fig:simpleAutoencoder) shows an example of a simple autoencoder with \\(4\\) units in the input and output layers. The hidden layer has \\(2\\) units.
FIGURE 10\.11: Example of a simple autoencoder.
Recall that when training a classification or regression model, we need to provide training examples of the form \\((x,y)\\) where \\(x\\) represents the input features and \\(y\\) is the desired output (a label or a number). When training an autoencoder, the input and the output is the same, that is, \\((x,x)\\).
Now you may be wondering what is the point of training a model that generates the same output as its input. If you take a closer look at Figure [10\.11](abnormalbehaviors.html#fig:simpleAutoencoder) you can see that the hidden layer has fewer units (only \\(2\\)) than the input and output layers. When the data is passed from the input layer to the hidden layer it is ‘reduced’ (compressed). Then, the compressed data is reconstructed as it is passed to the subsequent layers until it reaches the output. Thus, the neural network will learn to compress and reconstruct the data at the same time. Once the network is trained, we can get rid of the layers after the middle hidden layer and use the ‘left\-hand\-side’ of the network to compress our data. This left\-hand\-side is called the **encoder**. Then, we can use the right\-hand\-side to decompress the data. This part is called the **decoder**. In this example, the encoder and decoder consist of only \\(1\\) layer but they can have more (as we will see in the next section). In practice, you will not use autoencoders to compress files in your computer because there are more efficient methods to do that. Furthermore, the compression is *lossy*, that is, there is no guarantee that the reconstructed file will be exactly the same as the original. However, autoencoders have many applications including:
* Dimensionality reduction for visualization.
* Data denoising.
* Data generation (variational autoencoders).
* Anomaly detection (this is what we are interested in!).
Recall that when training a neural network we need to define a loss function. The loss function captures how well the network is learning. It measures how different the predictions are from the true expected outputs. In the context of autoencoders, this difference is known as the **reconstruction error** and can be measured using the mean squared error (similar to regression).
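For a single instance \\(x\\) with \\(d\\) features and reconstruction \\(\\hat{x}\\), the mean squared reconstruction error can be written as:
\\[
\\epsilon \= \\frac{1}{d}\\sum \_{j\=1}^{d}(x\_{j} \- \\hat{x}\_{j})^2
\\]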
In this section I introduced the simplest type of autoencoder, but there are many variants such as denoising autoencoders, variational autoencoders (VAEs), and so on. The following Wikipedia page provides a good overview of the different types of autoencoders: <https://en.wikipedia.org/wiki/Autoencoder>
### 10\.3\.1 Autoencoders for Anomaly Detection
`keras_autoencoder_fish.R`
Autoencoders can be used as anomaly detectors. This idea will be demonstrated with an example to detect abnormal fish trajectories. The way this is done is by training an autoencoder to compress and reconstruct the **normal** instances. Once the autoencoder has learned to encode normal instances, we can expect the reconstruction error to be small. When presented with out\-of\-the\-normal instances, the autoencoder will have a hard time trying to reconstruct them and consequently, the reconstruction error will be high. Similar to Isolation Forests where the tree path length provides a measure of the rarity of an instance, the reconstruction error in autoencoders can be used as an anomaly score.
To tell whether an instance is abnormal or not, we pass it through the autoencoder and compute its reconstruction error \\(\\epsilon\\). If \\(\\epsilon \> threshold\\) the input data can be regarded as abnormal.
Similar to what we did with the Isolation Forest, we will use the *fishFeatures.csv* file that contains the fish trajectories encoded as feature vectors. Each trajectory is composed of \\(8\\) numeric features based on acceleration and speed. We will use \\(80\\%\\) of the normal instances to train the autoencoder. All abnormal instances will be used for the test set.
After splitting the data (the code is in `keras_autoencoder_fish.R`), we will normalize (standardize) it. The `normalize.standard()` function will normalize the data such that it has a mean of \\(0\\) and a standard deviation of \\(1\\) using the following formula:
\\\[\\begin{equation}
z\_i \= \\frac{x\_i \- \\mu}{\\sigma}
\\tag{10\.3}
\\end{equation}\\]
where \\(\\mu\\) is the mean and \\(\\sigma\\) is the standard deviation of \\(x\\). This is slightly different from the \\(0\\)\-\\(1\\) normalization we have used before. The reason is that when scaling to \\(0\\)\-\\(1\\), the min and max values of the train set need to be learned. If there are data points in the test set with values outside that min and max, they will be truncated. But since we expect anomalies to have extreme values, it is likely that they will fall outside the train set ranges and be truncated. After being truncated, abnormal instances would look more similar to the normal ones; thus, it would be more difficult to spot them. By standardizing the data, we make sure that the extreme values of the abnormal points are preserved. In this case, the parameters to be learned from the train set are \\(\\mu\\) and \\(\\sigma\\).
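The `normalize.standard()` function ships with the book’s auxiliary code. A minimal sketch of the underlying idea (the actual signature and implementation may differ) is:
```
# Learn mu and sigma per column on the train set...
fit.params <- function(train) {
  list(mu = colMeans(train), sigma = apply(train, 2, sd))
}
# ...and apply them to any other set (train or test).
standardize <- function(data, params) {
  scaled <- sweep(data, 2, params$mu, "-")
  sweep(scaled, 2, params$sigma, "/")
}
```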
Once the data is normalized we can define the autoencoder in keras as follows:
```
autoencoder <- keras_model_sequential()
autoencoder %>%
layer_dense(units = 32, activation = 'relu',
input_shape = ncol(train.normal)-2) %>%
layer_dense(units = 16, activation = 'relu') %>%
layer_dense(units = 8, activation = 'relu') %>%
layer_dense(units = 16, activation = 'relu') %>%
layer_dense(units = 32, activation = 'relu') %>%
layer_dense(units = ncol(train.normal)-2, activation = 'linear')
```
This is a regular neural network whose input layer has the same number of units as there are features (\\(8\\)). The network has \\(5\\) hidden layers of size \\(32, 16, 8, 16\\), and \\(32\\), respectively. The output layer has \\(8\\) units (the same as the input layer). All activation functions are ReLUs except the last one, which is linear because the network should be able to produce any real number as output. Now we can compile and fit the model.
```
autoencoder %>% compile(
loss = 'mse',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c('mse')
)
history <- autoencoder %>% fit(
as.matrix(train.normal[,-c(1:2)]),
as.matrix(train.normal[,-c(1:2)]),
epochs = 100,
batch_size = 32,
validation_split = 0.10,
verbose = 2,
view_metrics = TRUE
)
```
We set *mean squared error* (MSE) as the loss function. We use the normal instances in the train set (`train.normal`) as the input and expected output. The validation split is set to \\(10\\%\\) so we can plot the reconstruction error (loss) on unseen instances. Finally, the model is trained for \\(100\\) epochs. From Figure [10\.12](abnormalbehaviors.html#fig:lossAutoencoder) we can see that as the training progresses, the loss and the MSE decrease.
FIGURE 10\.12: Loss and MSE.
We can now compute the MSE on the normal and abnormal **test sets**. The `test.normal` data frame only contains normal test instances and `test.abnormal` only contains abnormal test instances.
```
# Compute MSE on normal test set.
autoencoder %>% evaluate(as.matrix(test.normal[,-c(1:2)]),
as.matrix(test.normal[,-c(1:2)]))
#> loss mean_squared_error
#> 0.06147528 0.06147528
# Compute MSE on abnormal test set.
autoencoder %>% evaluate(as.matrix(test.abnormal[,-c(1:2)]),
as.matrix(test.abnormal[,-c(1:2)]))
#> loss mean_squared_error
#> 2.660597 2.660597
```
Clearly, the MSE of the normal test set is much lower than the abnormal test set. This means that the autoencoder had a difficult time trying to reconstruct the abnormal points because it never saw similar ones before.
To find a good threshold we can start by analyzing the reconstruction errors on the **train set**. First, we need to get the predictions.
```
# Predict values on the normal train set.
preds.train.normal <- autoencoder %>%
predict_on_batch(as.matrix(train.normal[,-c(1:2)]))
```
The variable `preds.train.normal` contains the predicted values for each feature and each instance. We can use those predictions to compute the reconstruction error by comparing them with the ground truth values. As reconstruction error we will use the squared errors. The function `squared.errors()` computes the reconstruction error for each instance.
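`squared.errors()` is a helper defined in the book’s auxiliary code. A plausible minimal version (assuming it returns the per\-instance sum of squared differences; it could equally return the mean) would be:
```
# Per-instance reconstruction error: sum of squared
# differences between predictions and ground truth.
squared.errors <- function(preds, groundtruth) {
  rowSums((preds - groundtruth)^2)
}
```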
```
# Compute individual squared errors in train set.
errors.train.normal <- squared.errors(preds.train.normal,
as.matrix(train.normal[,-c(1:2)]))
mean(errors.train.normal)
#> [1] 0.8113273
quantile(errors.train.normal)
#> 0% 25% 50% 75% 100%
#> 0.0158690 0.2926631 0.4978471 0.8874694 15.0958992
```
The mean reconstruction error of the normal instances in the train set is \\(0\.811\\). If we look at the quantiles, we can see that \\(75\\%\\) of the instances have an error of \\(\<\= 0\.887\\). With this information, we can set `threshold <- 1.0`. If the reconstruction error is \\(\> threshold\\), we will consider that point an anomaly.
```
# Make predictions on the abnormal test set.
preds.test.abnormal <- autoencoder %>%
predict_on_batch(as.matrix(test.abnormal[,-c(1:2)]))
# Compute reconstruction errors.
errors.test.abnormal <- squared.errors(preds.test.abnormal,
as.matrix(test.abnormal[,-c(1:2)]))
# Predict labels based on threshold 1:abnormal, 0:normal.
pred.labels.abnormal <- as.integer((errors.test.abnormal > threshold))
# Count how many abnormal instances were detected.
sum(pred.labels.abnormal)
#> [1] 46
```
By using that threshold, the autoencoder was able to detect \\(46\\) out of the \\(54\\) abnormal points. From the following confusion matrix, we can also see that there were \\(16\\) false positives.
```
#> Reference
#> Prediction 0 1
#> 0 202 8
#> 1 16 46
```
FIGURE 10\.13: ROC curve and AUC. The dashed line represents a random model.
From the ROC curve in Figure [10\.13](abnormalbehaviors.html#fig:rocAutoencoder) we can see that the AUC was \\(0\.93\\), which is lower than the \\(0\.96\\) achieved by the Isolation Forest, but with some fine\-tuning and training for more epochs, the autoencoder should be able to achieve similar results.
10\.4 Summary
-------------
This chapter presented two anomaly detection models, namely Isolation Forests and autoencoders. Examples of how those models can be used for anomaly trajectory detection were also presented. This chapter also introduced ROC curves and AUC which can be used to assess the performance of a model.
* **Isolation Forests** work by generating random partitions of the features until all instances are isolated.
* Abnormal points are more likely to be isolated during the first partitions.
* The average tree path length of abnormal points is smaller than that of normal points.
* An **anomaly score** that ranges between \\(0\\) and \\(1\\) is calculated based on the path length; the closer the score is to \\(1\\), the more likely the point is an anomaly.
* A **ROC curve** is used to visualize the sensitivity and false positive rate of a model for different thresholds.
* The area under the curve **AUC** can be used to summarize the performance of a model.
* A simple **autoencoder** is an artificial neural network whose output layer has the same shape as the input layer.
* Autoencoders are used to encode the data into a lower\-dimensional representation from which it can then be reconstructed.
* The **reconstruction error** (loss) is a measure of how distant a prediction is from the ground truth and can be used as an anomaly score.
Our main task will be to detect the **abnormal** trajectories using an Isolation Forest but before that, we are going to explore, visualize, and pre\-process the dataset.
### 10\.2\.1 Exploring and Visualizing Trajectories
The data is stored in a `.mat` file, so we are going to use the package `R.matlab` ([Bengtsson 2018](#ref-rmatlab)) to import the data into an array. The following code can be found in the script `visualize_fish.R`.
```
library(R.matlab)
# Read data.
df <- readMat("../fishDetections_total3102.mat"))$fish.detections
# Print data frame dimensions.
dim(df)
#> [1] 7 1 3102
```
We use the `dim()` function to print the dimensions of the array. From the output, we can see that there are \\(3102\\) individual trajectories and each trajectory has \\(7\\) attributes. Let’s explore what are the contents of a single trajectory. The following code snippet extracts the first trajectory and prints its structure.
```
# Read one of the trajectories.
trj <- df[,,1]
# Inspect its structure.
str(trj)
#> List of 7
#> $ frame.number : num [1:37, 1] 826 827 828 829 833 834 835 836 ...
#> $ bounding.box.x : num [1:37, 1] 167 165 162 159 125 124 126 126 ...
#> $ bounding.box.y : num [1:37, 1] 67 65 65 66 58 61 65 71 71 62 ...
#> $ bounding.box.w : num [1:37, 1] 40 37 39 34 39 39 38 38 37 31 ...
#> $ bounding.box.h : num [1:37, 1] 38 40 40 38 35 34 34 33 34 35 ...
#> $ class : num [1, 1] 1
#> $ classDescription: chr [1, 1] "normal"
```
A trajectory is composed of \\(7\\) pieces of information:
1. frame.number: Frame number in original video.
2. bounding.box.x: Bounding box leftmost edge.
3. bounding.box.y: Bounding box topmost edge.
4. bounding.box.w: Bounding box width.
5. bounding.box.h: Bounding box height.
6. class: 1\=normal, 2\=rare.
7. classDescription: ‘normal’ or abnormal’.
The bounding box represents the square region where the fish was detected in the video footage. Figure [10\.5](abnormalbehaviors.html#fig:fishBox) shows an example of a fish and its bounding box (not from the original dataset but for illustration purpose only). Also note that the dataset does not contain the images but only the bounding boxes’ coordinates.
FIGURE 10\.5: Fish bounding box (in red). (Author: Nick Hobgood. Source: wikimedia.org (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
Each trajectory has a different number of video frames. We can get the frame count by inspecting the length of one of the coordinates.
```
# Count how many frames this trajectory has.
length(trj$bounding.box.x)
#> [1] 37
```
The first trajectory has \\(37\\) frames but on average, they have \\(10\\) frames. For our analyses, we only include trajectories with a minimum of \\(10\\) frames since it may be difficult to extract patterns from shorter paths. Furthermore, we are not going to use the bounding boxes themselves but the center point of the box.
At this point, it would be a good idea to plot how the data looks like. To do so, I will use the `anipaths` package ([Scharf 2020](#ref-anipaths)) which has a function to animate trajectories! I will not cover the details here on how to use the package but the complete code is in the same script `visualize_fish.R`. The output result is in the form of an ‘index.html’ file that contains the interactive animation. For simplicity, I only selected \\(50\\) and \\(10\\) normal and abnormal trajectories (respectively) to be plotted. Figure [10\.6](abnormalbehaviors.html#fig:animTrajectories) shows the resulting plot. The plot also includes some controls to play, pause, change the speed of the animation, etc.
FIGURE 10\.6: Example of animated trajectories generated with the anipaths package.
The *‘normal’* and *‘abnormal’* labels were determined by visual inspection by experts. The abnormal cases include events such as predator avoidance and aggressive movements (due to another fish or because of being frightened).
### 10\.2\.2 Preprocessing and Feature Extraction
Now that we have explored and visualized the data, we can begin with the preprocessing and feature extraction. As previously mentioned, the database contains bounding boxes and we want to use the center of the boxes to define the trajectories. The following code snippet (from `extract_features.R`) shows how the center of a box can be computed.
```
# Compute center of bounding box.
x.coord <- trj$bounding.box.x + (trj$bounding.box.w / 2)
y.coord <- trj$bounding.box.y + (trj$bounding.box.h / 2)
# Make times start at 0.
times <- trj$frame.number - trj$frame.number[1]
tmp <- data.frame(x.coord, y.coord, time=times)
```
The *x* and *y* coordinates of the center points from a given trajectory `trj` for all time frames will be stored in `x.coord` and `y.coord`. The next line ‘shifts’ the frame numbers so they all start in \\(0\\) (to simplify preprocessing). Finally we store the coordinates and frame times in a temporal data frame for further preprocessing.
At this point we will use the `trajr` package ([McLean and Volponi 2018](#ref-trajr)) which includes functions to plot and perform operations on trajectories. The `TrajFromCoords()` function can be used to create a trajectory object from a data frame. Note that the data frame needs to have a predefined order. That is why we first stored the x coordinates, then the y coordinates, and finally the time in the `tmp` data frame.
```
tmp.trj <- TrajFromCoords(tmp, fps = 1)
```
The temporal data frame is passed as the first argument and the frames per second is set to \\(1\\). Now we plot the `tmp.trj` object.
```
plot(tmp.trj, lwd = 1, xlab="x", ylab="y")
points(tmp.trj, draw.start.pt = T, pch = 1, col = "blue", cex = 1.2)
legend("topright", c("Starting point"), pch = c(16), col=c("black"))
```
FIGURE 10\.7: Plot of first trajectory.
From Figure [10\.7](abnormalbehaviors.html#fig:trajPlot) we can see that there are big time gaps between some points. This is because some time frames are missing. If we print the first rows of the trajectory and look at the time, we see that for example, time steps \\(4,5,\\) and \\(6\\) are missing.
```
head(tmp.trj)
#> x y time displacementTime polar displacement
#> 1 187.0 86.0 0 0 187.0+86.0i 0.0+0.0i
#> 2 183.5 85.0 1 1 183.5+85.0i -3.5-1.0i
#> 3 181.5 85.0 2 2 181.5+85.0i -2.0+0.0i
#> 4 176.0 85.0 3 3 176.0+85.0i -5.5+0.0i
#> 5 144.5 75.5 7 7 144.5+75.5i -31.5-9.5i
```
Before continuing, it would be a good idea to try to fill those gaps. The function `TrajResampleTime()` does exactly that by applying linear interpolation along the trajectory.
```
resampled <- TrajResampleTime(tmp.trj, 1)
```
If we plot the resampled trajectory (Figure [10\.8](abnormalbehaviors.html#fig:trajResampledPlot)) we will see how the missing points were filled.
FIGURE 10\.8: The original trajectory (circles) and after filling the gaps with linear interpolation (crosses).
Now we are almost ready to start detecting anomalies. Remember that Isolation Trees work with features by making partitions. Thus, we need to convert the trajectories into a feature vector representation. To do that, we will extract some features from the trajectories based on *speed* and *acceleration*. The `TrajDerivatives()` function computes the speed and linear acceleration between pairs of trajectory points.
```
derivs <- TrajDerivatives(resampled)
# Print first speeds.
head(derivs$speed)
#> [1] 3.640055 2.000000 5.500000 8.225342 8.225342 8.225342
# Print first linear accelerations.
head(derivs$acceleration)
#> [1] -1.640055 3.500000 2.725342 0.000000 0.000000 0.000000
```
The number of resulting speeds and accelerations are \\(n\-1\\) and \\(n\-2\\), respectively where \\(n\\) is the number of time steps in the trajectory. When training an Isolation Forest, all feature vectors need to be of the same length however, the trajectories in the database have different number of time steps. In order to have fixed\-length feature vectors we will compute the *mean*, *standard deviation*, *min*, and *max* from both, the speeds and accelerations. Thus, we will end up having \\(8\\) features per trajectory. Finally we assemble the features into a data frame along with the trajectory id and the label (*‘normal’* or *‘abnormal’*).
```
f.meanSpeed <- mean(derivs$speed)
f.sdSpeed <- sd(derivs$speed)
f.minSpeed <- min(derivs$speed)
f.maxSpeed <- max(derivs$speed)
f.meanAcc <- mean(derivs$acceleration)
f.sdAcc <- sd(derivs$acceleration)
f.minAcc <- min(derivs$acceleration)
f.maxAcc <- max(derivs$acceleration)
features <- data.frame(id=paste0("id",i), label=trj$classDescription[1],
f.meanSpeed, f.sdSpeed, f.minSpeed, f.maxSpeed,
f.meanAcc, f.sdAcc, f.minAcc, f.maxAcc)
```
We do the feature extraction for each trajectory and save the results as a .csv file *fishFeatures.csv* which is already included in the dataset. Let’s read and print the first rows of the dataset.
```
# Read dataset.
dataset <- read.csv("fishFeatures.csv", stringsAsFactors = T)
# Print first rows of the dataset.
head(dataset)
#> id label f.meanSpeed f.sdSpeed f.minSpeed f.maxSpeed f.meanAcc
#> 1 id1 normal 2.623236 2.228456 0.5000000 8.225342 -0.05366002
#> 2 id2 normal 5.984859 3.820270 1.4142136 15.101738 -0.03870468
#> 3 id3 normal 16.608716 14.502042 0.7071068 46.424670 -1.00019597
#> 4 id5 normal 4.808608 4.137387 0.5000000 17.204651 -0.28181520
#> 5 id6 normal 17.785747 9.926729 3.3541020 44.240818 -0.53753380
#> 6 id7 normal 9.848422 6.026229 0.0000000 33.324165 -0.10555561
#> f.sdAcc f.minAcc f.maxAcc
#> 1 1.839475 -5.532760 3.500000
#> 2 2.660073 -7.273932 7.058594
#> 3 12.890386 -24.320298 30.714624
#> 4 5.228209 -12.204651 15.623512
#> 5 11.272472 -22.178067 21.768613
#> 6 6.692688 -31.262613 11.683561
```
Each row represents one trajectory. We can use the `table()` function to get the counts for *‘normal’* and *‘abnormal’* cases.
```
table(dataset$label)
#> abnormal normal
#> 54 1093
```
After discarding trajectories with less than \\(10\\) points we ended up with \\(1093\\) *‘normal’* instances and \\(54\\) *‘abnormal’* instances.
### 10\.2\.3 Training the Model
To get a preliminary idea of how difficult it is to separate the two classes we can use a MDS plot (see chapter [4](edavis.html#edavis)) to project the \\(8\\) features into a \\(2\\)\-dimensional plane.
FIGURE 10\.9: MDS of the fish trajectories.
In Figure [10\.9](abnormalbehaviors.html#fig:mdsFishes) we see that several *abnormal* points are in the right hand side but many others are in the same space as the *normal* points so it’s time to train an Isolation Forest and see to what extent it can detect the abnormal cases!
One of the nice things about Isolation Forest is that it does not need examples of the abnormal cases during training. If we want, we can also include the abnormal cases but since we don’t have many we will reserve them for the test set. The script `isolation_forest_fish.R` contains the code to train the model. We will split the data into a train set (\\(80\\%\\)) consisting only of normal instances and a test set with both, normal and abnormal instances. The train set is stored in the data frame `train.normal` and the test set in `test.all`. Since the method is based on trees, we don’t need to normalize the data.
First, we need to define the parameters of the Isolation Forest. We can do so by passing the values at creation time.
```
m.iforest <- isolationForest$new(sample_size = 256,
num_trees = 100,
nproc = 1)
```
As suggested in the original paper ([Liu, Ting, and Zhou 2008](#ref-Liu2008isolation)), the sampling size is set to \\(256\\) and the number of trees to \\(100\\). The `nproc` parameter specifies the number of CPU cores to use. I set it to \\(1\\) to ensure we get reproducible results.
Now we can train the model with the train set. The first two columns are removed since they correspond to the trajectories ids and class label.
```
# Fit the model.
m.iforest$fit(train.normal[,-c(1:2)])
```
Once the model is trained, we can start making predictions. Let’s start by making predictions on the **train set** (later we’ll do it on the test set). We know that the train set only consists of normal instances.
```
# Predict anomaly scores on train set.
train.scores <- m.iforest$predict(train.normal[,-c(1:2)])
```
The returned value of the `predict()` function is a data frame containing the average tree depth and the anomaly score for each instance.
```
# Print first rows of predictions.
head(train.scores)
#> id average_depth anomaly_score
#> 1: 1 7.97 0.5831917
#> 2: 2 8.00 0.5820092
#> 3: 3 7.98 0.5827973
#> 4: 4 7.80 0.5899383
#> 5: 5 7.77 0.5911370
#> 6: 6 7.90 0.5859603
```
We know that the train set only has normal instances thus, we need to find the highest anomaly score so that we can set a threshold to detect the abnormal cases. The following code will print the highest anomaly scores.
```
# Sort and display instances with the highest anomaly scores.
head(train.scores[order(anomaly_score, decreasing = TRUE)])
#> id average_depth anomaly_score
#> 1: 75 4.05 0.7603188
#> 2: 618 4.45 0.7400179
#> 3: 147 4.67 0.7290844
#> 4: 661 4.75 0.7251487
#> 5: 756 4.80 0.7226998
#> 6: 54 5.54 0.6874070
```
The highest anomaly score for a normal instance is \\(0\.7603\\) so we would assume that abnormal points will have higher anomaly scores. Armed with this information, we set the threshold to \\(0\.7603\\) and instances having a higher anomaly score will be considered to be abnormal.
```
threshold <- 0.7603
```
Now, we predict the anomaly scores on the **test set** and if the score is \\(\> threshold\\) then we classify that point as abnormal. The `predicted.labels` array will contain \\(0s\\) and \\(1s\\). A \\(1\\) means that the instance is abnormal.
```
# Predict anomaly scores on test set.
test.scores <- m.iforest$predict(test.all[,-c(1:2)])
# Predict labels based on threshold.
predicted.labels <- as.integer((test.scores$anomaly_score > threshold))
```
Now that we have the predicted labels we can compute some performance metrics.
```
# All abnormal cases are at the end so we can
# compute the ground truth as follows.
gt.all <- c(rep(0,nrow(test.normal)), rep(1, nrow(test.abnormal)))
levels <- c("0","1")
# Compute performance metrics.
cm <- confusionMatrix(factor(predicted.labels, levels = levels),
factor(gt.all, levels = levels),
positive = "1")
# Print confusion matrix.
cm$table
#> Reference
#> Prediction 0 1
#> 0 218 37
#> 1 0 17
# Print sensitivity
cm$byClass["Sensitivity"]
#> Sensitivity
#> 0.3148148
```
From the confusion matrix we see that \\(17\\) out of \\(54\\) abnormal instances were detected. On the other hand, all the normal instances (\\(218\\)) were correctly identified as such. The sensitivity (also known as recall) of the abnormal class was \\(17/54\=0\.314\\) which seems very low. We are failing to detect several of the abnormal cases.
One thing we can do is to decrease the threshold at the expense of increasing the false positives, that is, classifying normal instances as abnormal. If we set `threshold <- 0.6` we get the following confusion matrix.
```
#> Reference
#> Prediction 0 1
#> 0 206 8
#> 1 12 46
```
This time we were able to identify \\(46\\) of the abnormal cases! This gives a sensitivity of \\(46/54\=0\.85\\) which is much better than before. However, nothing is for free. If we look at the normal class, this time we had \\(12\\) misclassified points (false positives).
A good way of finding the best threshold is to set apart a validation set from which the optimal threshold can be estimated. However, this is not always feasible due to the limited amount of abnormal points.
In this example we manually tried different thresholds and evaluated their impact on the final results. In the following section I will show you a method that allows you to estimate the performance of a model when considering many possible thresholds at once!
### 10\.2\.4 ROC Curve and AUC
The **receiver operating characteristic curve**, also known as **ROC curve** is a plot that depicts how the sensitivity and the false positive rate (FPR) behave as the detection threshold varies. The sensitivity/recall can be calculated by dividing the true positives by the total number of positives \\(TP/P\\) (see chapter [2](classification.html#classification)). The \\(FPR\=FP/N\\) where FP are the false positives and N are the total number of negative examples (the normal trajectories). The FPR is also known as the probability of false alarm. Ideally, we want a model that has a high sensitivity and a low FPR.
In R we can use the `PRROC` package ([Grau, Grosse, and Keilwagen 2015](#ref-prroc)) to plot ROC curves. The ROC curve of the Isolation Forest results for the abnormal fish trajectory detection can be plotted using the following code:
```
library(PRROC)
roc_obj <- roc.curve(scores.class0 = test.scores$anomaly_score,
weights.class0 = gt.all,
curve = TRUE,
rand.compute = TRUE)
# Set rand.plot = TRUE to also plot the random model's curve.
plot(roc_obj, rand.plot = TRUE)
```
The argument `scores.class0` specifies the returned scores by the Isolation Forest and `weights.class0` are the true labels, \\(1\\) for the positive class (abnormal), and \\(0\\) for the negative class (normal). We set `curve=TRUE` so the method returns a table with thresholds and their respective sensitivity and FPR. The `rand.compute=TRUE` instructs the function to also compute the curve of a random model, that is, one that predicts scores at random. Figure [10\.10](abnormalbehaviors.html#fig:rocCurve) shows the ROC plot.
FIGURE 10\.10: ROC curve and AUC. The dashed line represents a random model.
Here we can see how the sensitivity and FPR increase as the threshold decreases. In the best case we want a sensitivity of \\(1\\) and a FPR of \\(0\\). This ideal point is located at top left corner but this model does not reach that level of performance but a bit lower. The dashed line in the diagonal is the curve for a random model. We can also access the thresholds table:
```
# Print first values of the curve table.
roc_obj$curve
#> [,1] [,2] [,3]
#> [1,] 0 0.00000000 0.8015213
#> [2,] 0 0.01851852 0.7977342
#> [3,] 0 0.03703704 0.7939650
#> [4,] 0 0.05555556 0.7875449
#> [5,] 0 0.09259259 0.7864799
#> .....
```
The first column is the FPR, the second column is the sensitivity, and the last column is the threshold. Choosing the best threshold is not straightforward and will depend on the compromise we want to have between sensitivity and FPR.
Note that the plot also prints an \\(AUC\=0\.963\\). This is known as the **Area Under the Curve (AUC)** and as the name implies, it is the area under the ROC curve. A perfect model will have an AUC of \\(1\.0\\). Our model achieved an AUC of \\(0\.963\\) which is pretty good. A random model will have an AUC around \\(0\.5\\). A value below \\(0\.5\\) means that the model is performing worse than random. The AUC is a performance metric that measures the quality of a model regardless of the selected threshold and is typically presented in addition to accuracy, recall, precision, etc.
If someone tells you something negative about yourself (e.g., that you don’t play football well), assume that they have an AUC below \\(0\.5\\) (worse than random). At least, that’s what I do to cope with those situations. (If you invert the predictions of a binary classifier that does worse than random you will get a classifier that is better than random).
### 10\.2\.1 Exploring and Visualizing Trajectories
The data is stored in a `.mat` file, so we are going to use the package `R.matlab` ([Bengtsson 2018](#ref-rmatlab)) to import the data into an array. The following code can be found in the script `visualize_fish.R`.
```
library(R.matlab)
# Read data.
df <- readMat("../fishDetections_total3102.mat"))$fish.detections
# Print data frame dimensions.
dim(df)
#> [1] 7 1 3102
```
We use the `dim()` function to print the dimensions of the array. From the output, we can see that there are \\(3102\\) individual trajectories and each trajectory has \\(7\\) attributes. Let’s explore what are the contents of a single trajectory. The following code snippet extracts the first trajectory and prints its structure.
```
# Read one of the trajectories.
trj <- df[,,1]
# Inspect its structure.
str(trj)
#> List of 7
#> $ frame.number : num [1:37, 1] 826 827 828 829 833 834 835 836 ...
#> $ bounding.box.x : num [1:37, 1] 167 165 162 159 125 124 126 126 ...
#> $ bounding.box.y : num [1:37, 1] 67 65 65 66 58 61 65 71 71 62 ...
#> $ bounding.box.w : num [1:37, 1] 40 37 39 34 39 39 38 38 37 31 ...
#> $ bounding.box.h : num [1:37, 1] 38 40 40 38 35 34 34 33 34 35 ...
#> $ class : num [1, 1] 1
#> $ classDescription: chr [1, 1] "normal"
```
A trajectory is composed of \\(7\\) pieces of information:
1. frame.number: Frame number in original video.
2. bounding.box.x: Bounding box leftmost edge.
3. bounding.box.y: Bounding box topmost edge.
4. bounding.box.w: Bounding box width.
5. bounding.box.h: Bounding box height.
6. class: 1\=normal, 2\=rare.
7. classDescription: ‘normal’ or ‘abnormal’.
The bounding box represents the square region where the fish was detected in the video footage. Figure [10\.5](abnormalbehaviors.html#fig:fishBox) shows an example of a fish and its bounding box (not from the original dataset; for illustration purposes only). Also note that the dataset does not contain the images but only the bounding boxes' coordinates.
FIGURE 10\.5: Fish bounding box (in red). (Author: Nick Hobgood. Source: wikimedia.org (CC BY\-SA 3\.0\) \[[https://creativecommons.org/licenses/by\-sa/3\.0/legalcode](https://creativecommons.org/licenses/by-sa/3.0/legalcode)]).
Each trajectory has a different number of video frames. We can get the frame count by inspecting the length of one of the coordinates.
```
# Count how many frames this trajectory has.
length(trj$bounding.box.x)
#> [1] 37
```
The first trajectory has \\(37\\) frames but, on average, trajectories have \\(10\\) frames. For our analyses, we only include trajectories with a minimum of \\(10\\) frames, since it may be difficult to extract patterns from shorter paths. Furthermore, we are not going to use the bounding boxes themselves but the center point of each box.
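The frame counts and the minimum\-length filter can be sketched as follows (an illustrative snippet, not the book's exact code; it assumes the `df` array loaded above):
```
# Illustrative sketch: count the frames of every trajectory and
# keep the indices of those with at least 10 frames.
n.trajs <- dim(df)[3]
frame.counts <- sapply(1:n.trajs,
                       function(i) length(df[,,i]$bounding.box.x))
mean(frame.counts) # average number of frames per trajectory
keep <- which(frame.counts >= 10)
```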
At this point, it would be a good idea to see what the data looks like. To do so, I will use the `anipaths` package ([Scharf 2020](#ref-anipaths)) which has a function to animate trajectories! I will not cover the details here on how to use the package, but the complete code is in the same script `visualize_fish.R`. The output is an ‘index.html’ file that contains the interactive animation. For simplicity, I only selected \\(50\\) normal and \\(10\\) abnormal trajectories to be plotted. Figure [10\.6](abnormalbehaviors.html#fig:animTrajectories) shows the resulting plot. The plot also includes some controls to play, pause, change the speed of the animation, etc.
FIGURE 10\.6: Example of animated trajectories generated with the anipaths package.
The *‘normal’* and *‘abnormal’* labels were determined by visual inspection by experts. The abnormal cases include events such as predator avoidance and aggressive movements (due to another fish or because of being frightened).
### 10\.2\.2 Preprocessing and Feature Extraction
Now that we have explored and visualized the data, we can begin with the preprocessing and feature extraction. As previously mentioned, the database contains bounding boxes and we want to use the center of the boxes to define the trajectories. The following code snippet (from `extract_features.R`) shows how the center of a box can be computed.
```
# Compute center of bounding box.
x.coord <- trj$bounding.box.x + (trj$bounding.box.w / 2)
y.coord <- trj$bounding.box.y + (trj$bounding.box.h / 2)
# Make times start at 0.
times <- trj$frame.number - trj$frame.number[1]
tmp <- data.frame(x.coord, y.coord, time=times)
```
The *x* and *y* coordinates of the center points of a given trajectory `trj`, for all time frames, are stored in `x.coord` and `y.coord`. The next line ‘shifts’ the frame numbers so they all start at \\(0\\) (to simplify preprocessing). Finally, we store the coordinates and frame times in a temporary data frame for further preprocessing.
At this point we will use the `trajr` package ([McLean and Volponi 2018](#ref-trajr)) which includes functions to plot and perform operations on trajectories. The `TrajFromCoords()` function can be used to create a trajectory object from a data frame. Note that the data frame needs to have a predefined order. That is why we first stored the x coordinates, then the y coordinates, and finally the time in the `tmp` data frame.
```
tmp.trj <- TrajFromCoords(tmp, fps = 1)
```
The temporary data frame is passed as the first argument and the frames per second is set to \\(1\\). Now we plot the `tmp.trj` object.
```
plot(tmp.trj, lwd = 1, xlab="x", ylab="y")
points(tmp.trj, draw.start.pt = T, pch = 1, col = "blue", cex = 1.2)
legend("topright", c("Starting point"), pch = c(16), col=c("black"))
```
FIGURE 10\.7: Plot of first trajectory.
From Figure [10\.7](abnormalbehaviors.html#fig:trajPlot) we can see that there are big time gaps between some points. This is because some time frames are missing. If we print the first rows of the trajectory and look at the time column, we see that, for example, time steps \\(4\\), \\(5\\), and \\(6\\) are missing.
```
head(tmp.trj)
#> x y time displacementTime polar displacement
#> 1 187.0 86.0 0 0 187.0+86.0i 0.0+0.0i
#> 2 183.5 85.0 1 1 183.5+85.0i -3.5-1.0i
#> 3 181.5 85.0 2 2 181.5+85.0i -2.0+0.0i
#> 4 176.0 85.0 3 3 176.0+85.0i -5.5+0.0i
#> 5 144.5 75.5 7 7 144.5+75.5i -31.5-9.5i
```
Before continuing, it would be a good idea to try to fill those gaps. The function `TrajResampleTime()` does exactly that by applying linear interpolation along the trajectory.
```
resampled <- TrajResampleTime(tmp.trj, 1)
```
If we plot the resampled trajectory (Figure [10\.8](abnormalbehaviors.html#fig:trajResampledPlot)) we will see how the missing points were filled.
FIGURE 10\.8: The original trajectory (circles) and after filling the gaps with linear interpolation (crosses).
Now we are almost ready to start detecting anomalies. Remember that Isolation Trees work with features by making partitions. Thus, we need to convert the trajectories into a feature vector representation. To do that, we will extract some features from the trajectories based on *speed* and *acceleration*. The `TrajDerivatives()` function computes the speed and linear acceleration between pairs of trajectory points.
```
derivs <- TrajDerivatives(resampled)
# Print first speeds.
head(derivs$speed)
#> [1] 3.640055 2.000000 5.500000 8.225342 8.225342 8.225342
# Print first linear accelerations.
head(derivs$acceleration)
#> [1] -1.640055 3.500000 2.725342 0.000000 0.000000 0.000000
```
The numbers of resulting speeds and accelerations are \\(n\-1\\) and \\(n\-2\\), respectively, where \\(n\\) is the number of time steps in the trajectory. When training an Isolation Forest, all feature vectors need to be of the same length; however, the trajectories in the database have different numbers of time steps. In order to have fixed\-length feature vectors, we will compute the *mean*, *standard deviation*, *min*, and *max* of both the speeds and the accelerations. Thus, we will end up with \\(8\\) features per trajectory. Finally, we assemble the features into a data frame along with the trajectory id and the label (*‘normal’* or *‘abnormal’*).
```
f.meanSpeed <- mean(derivs$speed)
f.sdSpeed <- sd(derivs$speed)
f.minSpeed <- min(derivs$speed)
f.maxSpeed <- max(derivs$speed)
f.meanAcc <- mean(derivs$acceleration)
f.sdAcc <- sd(derivs$acceleration)
f.minAcc <- min(derivs$acceleration)
f.maxAcc <- max(derivs$acceleration)
# Assemble the feature vector; 'i' is the trajectory index
# from the enclosing extraction loop (sketched below).
features <- data.frame(id=paste0("id",i), label=trj$classDescription[1],
f.meanSpeed, f.sdSpeed, f.minSpeed, f.maxSpeed,
f.meanAcc, f.sdAcc, f.minAcc, f.maxAcc)
```
We do the feature extraction for each trajectory and save the results in a .csv file, *fishFeatures.csv*, which is already included in the dataset. The extraction loop itself is condensed below as an illustrative sketch (not the book's exact script); it applies the preprocessing steps shown above to each trajectory.
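```
# Condensed, illustrative loop (not the book's exact script): apply the
# preprocessing and feature extraction shown above to every trajectory.
dataset <- NULL
for (i in 1:dim(df)[3]) {
  trj <- df[,,i]
  if (length(trj$bounding.box.x) < 10) next # skip short trajectories
  # ... compute the box centers, build tmp, then TrajFromCoords(),
  # TrajResampleTime(), TrajDerivatives() and the 8 features,
  # exactly as in the snippets above ...
  dataset <- rbind(dataset, features)
}
write.csv(dataset, "fishFeatures.csv", row.names = FALSE)
```
Now let's read the file and print the first rows of the dataset.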
```
# Read dataset.
dataset <- read.csv("fishFeatures.csv", stringsAsFactors = T)
# Print first rows of the dataset.
head(dataset)
#> id label f.meanSpeed f.sdSpeed f.minSpeed f.maxSpeed f.meanAcc
#> 1 id1 normal 2.623236 2.228456 0.5000000 8.225342 -0.05366002
#> 2 id2 normal 5.984859 3.820270 1.4142136 15.101738 -0.03870468
#> 3 id3 normal 16.608716 14.502042 0.7071068 46.424670 -1.00019597
#> 4 id5 normal 4.808608 4.137387 0.5000000 17.204651 -0.28181520
#> 5 id6 normal 17.785747 9.926729 3.3541020 44.240818 -0.53753380
#> 6 id7 normal 9.848422 6.026229 0.0000000 33.324165 -0.10555561
#> f.sdAcc f.minAcc f.maxAcc
#> 1 1.839475 -5.532760 3.500000
#> 2 2.660073 -7.273932 7.058594
#> 3 12.890386 -24.320298 30.714624
#> 4 5.228209 -12.204651 15.623512
#> 5 11.272472 -22.178067 21.768613
#> 6 6.692688 -31.262613 11.683561
```
Each row represents one trajectory. We can use the `table()` function to get the counts for *‘normal’* and *‘abnormal’* cases.
```
table(dataset$label)
#> abnormal normal
#> 54 1093
```
After discarding trajectories with fewer than \\(10\\) points, we ended up with \\(1093\\) *‘normal’* instances and \\(54\\) *‘abnormal’* instances.
### 10\.2\.3 Training the Model
To get a preliminary idea of how difficult it is to separate the two classes, we can use an MDS plot (see chapter [4](edavis.html#edavis)) to project the \\(8\\) features into a \\(2\\)\-dimensional plane.
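One way to produce such a projection is classical MDS from base R (an assumed sketch; not necessarily the code used for the figure):
```
# Classical MDS on the scaled features (illustrative sketch).
feats <- scale(dataset[, -c(1, 2)]) # drop the id and label columns
mds <- cmdscale(dist(feats), k = 2)
plot(mds, col = ifelse(dataset$label == "abnormal", "red", "black"),
     xlab = "Dimension 1", ylab = "Dimension 2")
```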
FIGURE 10\.9: MDS of the fish trajectories.
In Figure [10\.9](abnormalbehaviors.html#fig:mdsFishes) we see that several *abnormal* points lie on the right\-hand side, but many others occupy the same space as the *normal* points, so it's time to train an Isolation Forest and see to what extent it can detect the abnormal cases!
One of the nice things about Isolation Forest is that it does not need examples of the abnormal cases during training. If we want, we can also include the abnormal cases, but since we don't have many, we will reserve them for the test set. The script `isolation_forest_fish.R` contains the code to train the model. We will split the data into a train set (\\(80\\%\\)) consisting only of normal instances and a test set with both normal and abnormal instances. The train set is stored in the data frame `train.normal` and the test set in `test.all`. Since the method is based on trees, we don't need to normalize the data. A minimal sketch of such a split is shown below.
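```
# Illustrative split (hypothetical seed; the actual split is done in
# isolation_forest_fish.R).
set.seed(1234)
normal <- dataset[dataset$label == "normal", ]
abnormal <- dataset[dataset$label == "abnormal", ]
idx <- sample(nrow(normal), size = floor(0.8 * nrow(normal)))
train.normal <- normal[idx, ]
test.normal <- normal[-idx, ]
test.abnormal <- abnormal
test.all <- rbind(test.normal, test.abnormal) # normals first, then abnormals
```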
First, we need to define the parameters of the Isolation Forest. We can do so by passing the values at creation time.
```
# isolationForest is provided by the 'solitude' package (assumed here).
library(solitude)
m.iforest <- isolationForest$new(sample_size = 256,
num_trees = 100,
nproc = 1)
```
As suggested in the original paper ([Liu, Ting, and Zhou 2008](#ref-Liu2008isolation)), the sample size is set to \\(256\\) and the number of trees to \\(100\\). The `nproc` parameter specifies the number of CPU cores to use. I set it to \\(1\\) to ensure we get reproducible results.
Now we can train the model with the train set. The first two columns are removed since they correspond to the trajectory ids and class labels.
```
# Fit the model.
m.iforest$fit(train.normal[,-c(1:2)])
```
Once the model is trained, we can start making predictions. Let’s start by making predictions on the **train set** (later we’ll do it on the test set). We know that the train set only consists of normal instances.
```
# Predict anomaly scores on train set.
train.scores <- m.iforest$predict(train.normal[,-c(1:2)])
```
The returned value of the `predict()` function is a data frame containing the average tree depth and the anomaly score for each instance.
```
# Print first rows of predictions.
head(train.scores)
#> id average_depth anomaly_score
#> 1: 1 7.97 0.5831917
#> 2: 2 8.00 0.5820092
#> 3: 3 7.98 0.5827973
#> 4: 4 7.80 0.5899383
#> 5: 5 7.77 0.5911370
#> 6: 6 7.90 0.5859603
```
We know that the train set only has normal instances; thus, we need to find the highest anomaly score so that we can set a threshold to detect the abnormal cases. The following code will print the highest anomaly scores.
```
# Sort and display instances with the highest anomaly scores.
head(train.scores[order(anomaly_score, decreasing = TRUE)])
#> id average_depth anomaly_score
#> 1: 75 4.05 0.7603188
#> 2: 618 4.45 0.7400179
#> 3: 147 4.67 0.7290844
#> 4: 661 4.75 0.7251487
#> 5: 756 4.80 0.7226998
#> 6: 54 5.54 0.6874070
```
The highest anomaly score for a normal instance is \\(0\.7603\\) so we would assume that abnormal points will have higher anomaly scores. Armed with this information, we set the threshold to \\(0\.7603\\) and instances having a higher anomaly score will be considered to be abnormal.
```
threshold <- 0.7603
```
Now, we predict the anomaly scores on the **test set** and if the score is \\(\> threshold\\) then we classify that point as abnormal. The `predicted.labels` array will contain \\(0s\\) and \\(1s\\). A \\(1\\) means that the instance is abnormal.
```
# Predict anomaly scores on test set.
test.scores <- m.iforest$predict(test.all[,-c(1:2)])
# Predict labels based on threshold.
predicted.labels <- as.integer((test.scores$anomaly_score > threshold))
```
Now that we have the predicted labels we can compute some performance metrics.
```
# All abnormal cases are at the end so we can
# compute the ground truth as follows.
gt.all <- c(rep(0,nrow(test.normal)), rep(1, nrow(test.abnormal)))
levels <- c("0","1")
# Compute performance metrics with confusionMatrix()
# from the 'caret' package.
library(caret)
cm <- confusionMatrix(factor(predicted.labels, levels = levels),
factor(gt.all, levels = levels),
positive = "1")
# Print confusion matrix.
cm$table
#> Reference
#> Prediction 0 1
#> 0 218 37
#> 1 0 17
# Print sensitivity
cm$byClass["Sensitivity"]
#> Sensitivity
#> 0.3148148
```
From the confusion matrix we see that \\(17\\) out of \\(54\\) abnormal instances were detected. On the other hand, all the normal instances (\\(218\\)) were correctly identified as such. The sensitivity (also known as recall) of the abnormal class was \\(17/54\=0\.314\\), which is very low. We are failing to detect several of the abnormal cases.
One thing we can do is to decrease the threshold at the expense of increasing the false positives, that is, classifying normal instances as abnormal. If we set `threshold <- 0.6` we get the following confusion matrix.
```
#> Reference
#> Prediction 0 1
#> 0 206 8
#> 1 12 46
```
This time we were able to identify \\(46\\) of the abnormal cases! This gives a sensitivity of \\(46/54\=0\.85\\), which is much better than before. However, nothing comes for free. If we look at the normal class, this time we had \\(12\\) misclassified points (false positives).
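For completeness, the lower threshold can be evaluated by reusing the objects computed above:
```
# Re-evaluate with the lower threshold (reusing test.scores and gt.all).
threshold <- 0.6
predicted.labels <- as.integer(test.scores$anomaly_score > threshold)
cm2 <- confusionMatrix(factor(predicted.labels, levels = levels),
                       factor(gt.all, levels = levels),
                       positive = "1")
cm2$table
```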
A good way of finding the best threshold is to set apart a validation set from which the optimal threshold can be estimated. However, this is not always feasible due to the limited amount of abnormal points.
In this example we manually tried different thresholds and evaluated their impact on the final results. In the following section I will show you a method that allows you to estimate the performance of a model when considering many possible thresholds at once!
### 10\.2\.4 ROC Curve and AUC
The **receiver operating characteristic curve**, also known as the **ROC curve**, is a plot that depicts how the sensitivity and the false positive rate (FPR) behave as the detection threshold varies. The sensitivity/recall can be calculated by dividing the true positives by the total number of positives, \\(TP/P\\) (see chapter [2](classification.html#classification)). The \\(FPR\=FP/N\\), where FP is the number of false positives and N is the total number of negative examples (the normal trajectories). The FPR is also known as the probability of false alarm. Ideally, we want a model that has a high sensitivity and a low FPR.
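To make the two quantities concrete, here is a small helper that computes both at a given threshold (illustrative; not part of the book's code):
```
# Illustrative helper: sensitivity and FPR at a given threshold.
# 'scores' are anomaly scores; 'labels' are 1 (abnormal) or 0 (normal).
rates.at.threshold <- function(scores, labels, threshold) {
  pred <- as.integer(scores > threshold)
  sens <- sum(pred == 1 & labels == 1) / sum(labels == 1) # TP / P
  fpr <- sum(pred == 1 & labels == 0) / sum(labels == 0)  # FP / N
  c(sensitivity = sens, FPR = fpr)
}
```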
In R we can use the `PRROC` package ([Grau, Grosse, and Keilwagen 2015](#ref-prroc)) to plot ROC curves. The ROC curve of the Isolation Forest results for the abnormal fish trajectory detection can be plotted using the following code:
```
library(PRROC)
roc_obj <- roc.curve(scores.class0 = test.scores$anomaly_score,
weights.class0 = gt.all,
curve = TRUE,
rand.compute = TRUE)
# Set rand.plot = TRUE to also plot the random model's curve.
plot(roc_obj, rand.plot = TRUE)
```
The argument `scores.class0` specifies the scores returned by the Isolation Forest, and `weights.class0` contains the true labels: \\(1\\) for the positive class (abnormal) and \\(0\\) for the negative class (normal). We set `curve=TRUE` so the method returns a table with thresholds and their respective sensitivity and FPR. Setting `rand.compute=TRUE` instructs the function to also compute the curve of a random model, that is, one that predicts scores at random. Figure [10\.10](abnormalbehaviors.html#fig:rocCurve) shows the ROC plot.
FIGURE 10\.10: ROC curve and AUC. The dashed line represents a random model.
Here we can see how the sensitivity and FPR increase as the threshold decreases. In the best case we want a sensitivity of \\(1\\) and an FPR of \\(0\\). This ideal point is located at the top\-left corner; our model does not reach that level of performance, but it gets close. The dashed line on the diagonal is the curve for a random model. We can also access the thresholds table:
```
# Print first values of the curve table.
roc_obj$curve
#> [,1] [,2] [,3]
#> [1,] 0 0.00000000 0.8015213
#> [2,] 0 0.01851852 0.7977342
#> [3,] 0 0.03703704 0.7939650
#> [4,] 0 0.05555556 0.7875449
#> [5,] 0 0.09259259 0.7864799
#> .....
```
The first column is the FPR, the second column is the sensitivity, and the last column is the threshold. Choosing the best threshold is not straightforward and depends on the trade\-off we are willing to make between sensitivity and FPR.
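One common heuristic (not prescribed by the book) is to pick the point on the curve that maximizes Youden's \\(J \= sensitivity \- FPR\\):
```
# Pick the threshold that maximizes sensitivity - FPR (Youden's J).
curve.tab <- roc_obj$curve
best <- curve.tab[which.max(curve.tab[, 2] - curve.tab[, 1]), ]
best # FPR, sensitivity, and threshold of the selected point
```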
Note that the plot also prints an \\(AUC\=0\.963\\). This is known as the **Area Under the Curve (AUC)** and as the name implies, it is the area under the ROC curve. A perfect model will have an AUC of \\(1\.0\\). Our model achieved an AUC of \\(0\.963\\) which is pretty good. A random model will have an AUC around \\(0\.5\\). A value below \\(0\.5\\) means that the model is performing worse than random. The AUC is a performance metric that measures the quality of a model regardless of the selected threshold and is typically presented in addition to accuracy, recall, precision, etc.
If someone tells you something negative about yourself (e.g., that you don’t play football well), assume that they have an AUC below \\(0\.5\\) (worse than random). At least, that’s what I do to cope with those situations. (If you invert the predictions of a binary classifier that does worse than random you will get a classifier that is better than random).
10\.3 Autoencoders
------------------
In its simplest form, an autoencoder is a neural network whose output layer has the same shape as the input layer. If you are not familiar with artificial neural networks, you can take a look at chapter [8](deeplearning.html#deeplearning). An autoencoder will try to learn how to generate an output that is as similar as possible to the provided input. Figure [10\.11](abnormalbehaviors.html#fig:simpleAutoencoder) shows an example of a simple autoencoder with \\(4\\) units in the input and output layers. The hidden layer has \\(2\\) units.
FIGURE 10\.11: Example of a simple autoencoder.
Recall that when training a classification or regression model, we need to provide training examples of the form \\((x,y)\\) where \\(x\\) represents the input features and \\(y\\) is the desired output (a label or a number). When training an autoencoder, the input and the output is the same, that is, \\((x,x)\\).
Now you may be wondering what the point is of training a model that generates the same output as its input. If you take a closer look at Figure [10\.11](abnormalbehaviors.html#fig:simpleAutoencoder) you can see that the hidden layer has fewer units (only \\(2\\)) than the input and output layers. When the data is passed from the input layer to the hidden layer it is ‘reduced’ (compressed). Then, the compressed data is reconstructed as it is passed to the subsequent layers until it reaches the output. Thus, the neural network will learn to compress and reconstruct the data at the same time. Once the network is trained, we can get rid of the layers after the middle hidden layer and use the ‘left\-hand\-side’ of the network to compress our data. This left\-hand\-side is called the **encoder**. Then, we can use the right\-hand\-side to decompress the data. This part is called the **decoder**. In this example, the encoder and decoder consist of only \\(1\\) layer each, but they can have more (as we will see in the next section). In practice, you will not use autoencoders to compress files on your computer because there are more efficient methods to do that. Furthermore, the compression is *lossy*, that is, there is no guarantee that the reconstructed file will be exactly the same as the original. However, autoencoders have many applications including:
* Dimensionality reduction for visualization.
* Data denoising.
* Data generation (variational autoencoders).
* Anomaly detection (this is what we are interested in!).
Recall that when training a neural network we need to define a loss function. The loss function captures how well the network is learning. It measures how different the predictions are from the true expected outputs. In the context of autoencoders, this difference is known as the **reconstruction error** and can be measured using the mean squared error (similar to regression).
In this section I introduced the simplest type of autoencoder, but there are many variants such as denoising autoencoders, variational autoencoders (VAEs), and so on. The following Wikipedia page provides a good overview of the different types of autoencoders: <https://en.wikipedia.org/wiki/Autoencoder>
### 10\.3\.1 Autoencoders for Anomaly Detection
`keras_autoencoder_fish.R`
Autoencoders can be used as anomaly detectors. This idea will be demonstrated with an example to detect abnormal fish trajectories. The way this is done is by training an autoencoder to compress and reconstruct the **normal** instances. Once the autoencoder has learned to encode normal instances, we can expect the reconstruction error to be small. When presented with out\-of\-the\-normal instances, the autoencoder will have a hard time trying to reconstruct them and consequently, the reconstruction error will be high. Similar to Isolation Forests where the tree path length provides a measure of the rarity of an instance, the reconstruction error in autoencoders can be used as an anomaly score.
To tell whether an instance is abnormal or not, we pass it through the autoencoder and compute its reconstruction error \\(\\epsilon\\). If \\(\\epsilon \> threshold\\) the input data can be regarded as abnormal.
Similar to what we did with the Isolation Forest, we will use the *fishFeatures.csv* file that contains the fish trajectories encoded as feature vectors. Each trajectory is composed of \\(8\\) numeric features based on acceleration and speed. We will use \\(80\\%\\) of the normal instances to train the autoencoder. All abnormal instances will be used for the test set.
After splitting the data (the code is in `keras_autoencoder_fish.R`), we will normalize (standardize) it. The `normalize.standard()` function will normalize the data such that it has a mean of \\(0\\) and a standard deviation of \\(1\\) using the following formula:
\\\[\\begin{equation}
z\_i \= \\frac{x\_i \- \\mu}{\\sigma}
\\tag{10\.3}
\\end{equation}\\]
where \\(\\mu\\) is the mean and \\(\\sigma\\) is the standard deviation of \\(x\\). This is slightly different from the \\(0\\)\-\\(1\\) normalization we have used before. The reason is that when scaling to \\(0\\)\-\\(1\\), the min and max values need to be learned from the train set. If there are data points in the test set with values outside that min and max, they will be truncated. But since we expect anomalies to have rare values, it is likely that they will fall outside the train set ranges and be truncated. After being truncated, abnormal instances could look more similar to the normal ones, making them more difficult to spot. By standardizing the data we make sure that the extreme values of the abnormal points are preserved. In this case, the parameters to be learned from the train set are \\(\\mu\\) and \\(\\sigma\\).
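The book provides `normalize.standard()` with its auxiliary code; a plausible implementation (an assumption, shown only to make the idea concrete) is:
```
# Plausible sketch of normalize.standard(): learn mu and sigma per
# feature on the train set and apply the same parameters to other sets.
normalize.standard <- function(train, test) {
  train <- as.matrix(train); test <- as.matrix(test)
  mu <- colMeans(train)
  sigma <- apply(train, 2, sd)
  list(train = sweep(sweep(train, 2, mu), 2, sigma, "/"),
       test = sweep(sweep(test, 2, mu), 2, sigma, "/"))
}
```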
Once the data is normalized we can define the autoencoder in keras as follows:
```
autoencoder <- keras_model_sequential()
autoencoder %>%
layer_dense(units = 32, activation = 'relu',
input_shape = ncol(train.normal)-2) %>%
layer_dense(units = 16, activation = 'relu') %>%
layer_dense(units = 8, activation = 'relu') %>%
layer_dense(units = 16, activation = 'relu') %>%
layer_dense(units = 32, activation = 'relu') %>%
layer_dense(units = ncol(train.normal)-2, activation = 'linear')
```
This is a normal neural network with an input layer having the same number of units as there are features (\\(8\\)). This network has \\(5\\) hidden layers of size \\(32,16,8,16\\), and \\(32\\), respectively. The output layer has \\(8\\) units (the same as the input layer). All activation functions are ReLUs except the last one, which is linear because the network should be able to produce any number as output. Now we can compile and fit the model.
```
autoencoder %>% compile(
loss = 'mse',
optimizer = optimizer_sgd(lr = 0.01),
metrics = c('mse')
)
history <- autoencoder %>% fit(
as.matrix(train.normal[,-c(1:2)]),
as.matrix(train.normal[,-c(1:2)]),
epochs = 100,
batch_size = 32,
validation_split = 0.10,
verbose = 2,
view_metrics = TRUE
)
```
We set *mean squared error* (MSE) as the loss function. We use the normal instances in the train set (`train.normal`) as the input and expected output. The validation split is set to \\(10\\%\\) so we can plot the reconstruction error (loss) on unseen instances. Finally, the model is trained for \\(100\\) epochs. From Figure [10\.12](abnormalbehaviors.html#fig:lossAutoencoder) we can see that as the training progresses, the loss and the MSE decrease.
FIGURE 10\.12: Loss and MSE.
We can now compute the MSE on the normal and abnormal **test sets**. The `test.normal` data frame only contains normal test instances and `test.abnormal` only contains abnormal test instances.
```
# Compute MSE on normal test set.
autoencoder %>% evaluate(as.matrix(test.normal[,-c(1:2)]),
as.matrix(test.normal[,-c(1:2)]))
#> loss mean_squared_error
#> 0.06147528 0.06147528
# Compute MSE on abnormal test set.
autoencoder %>% evaluate(as.matrix(test.abnormal[,-c(1:2)]),
as.matrix(test.abnormal[,-c(1:2)]))
#> loss mean_squared_error
#> 2.660597 2.660597
```
Clearly, the MSE of the normal test set is much lower than the abnormal test set. This means that the autoencoder had a difficult time trying to reconstruct the abnormal points because it never saw similar ones before.
To find a good threshold we can start by analyzing the reconstruction errors on the **train set**. First, we need to get the predictions.
```
# Predict values on the normal train set.
preds.train.normal <- autoencoder %>%
predict_on_batch(as.matrix(train.normal[,-c(1:2)]))
```
The variable `preds.train.normal` contains the predicted values for each feature and each instance. We can use those predictions to compute the reconstruction error by comparing them with the ground truth values. As reconstruction error we will use the squared errors. The function `squared.errors()` computes the reconstruction error for each instance.
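The function ships with the book's auxiliary code; a plausible definition (an assumption, consistent with the numbers below) sums the squared differences across the \\(8\\) features:
```
# Plausible sketch of squared.errors(): per-instance reconstruction
# error as the sum of squared feature differences (assumed definition).
squared.errors <- function(preds, ground.truth) {
  rowSums((preds - ground.truth)^2)
}
```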
```
# Compute individual squared errors in train set.
errors.train.normal <- squared.errors(preds.train.normal,
as.matrix(train.normal[,-c(1:2)]))
mean(errors.train.normal)
#> [1] 0.8113273
quantile(errors.train.normal)
#> 0% 25% 50% 75% 100%
#> 0.0158690 0.2926631 0.4978471 0.8874694 15.0958992
```
The mean reconstruction error of the normal instances in the train set is \\(0\.811\\). If we look at the quantiles, we can see that most of the instances have an error of \\(\<\= 0\.887\\). With this information we can set `threshold <- 1.0`. If the reconstruction error is \\(\> threshold\\) then we will consider that point as an anomaly.
```
# Make predictions on the abnormal test set.
preds.test.abnormal <- autoencoder %>%
predict_on_batch(as.matrix(test.abnormal[,-c(1:2)]))
# Compute reconstruction errors.
errors.test.abnormal <- squared.errors(preds.test.abnormal,
as.matrix(test.abnormal[,-c(1:2)]))
# Predict labels based on threshold 1:abnormal, 0:normal.
pred.labels.abnormal <- as.integer((errors.test.abnormal > threshold))
# Count how many abnormal instances were detected.
sum(pred.labels.abnormal)
#> [1] 46
```
By using that threshold the autoencoder was able to detect \\(46\\) out of the \\(54\\) anomaly points. From the following confusion matrix we can also see that there were \\(16\\) false positives.
```
#> Reference
#> Prediction 0 1
#> 0 202 8
#> 1 16 46
```
FIGURE 10\.13: ROC curve and AUC. The dashed line represents a random model.
From the ROC curve in Figure [10\.13](abnormalbehaviors.html#fig:rocAutoencoder) we can see that the AUC was \\(0\.93\\), which is lower than the \\(0\.96\\) achieved by the Isolation Forest, but with some fine\-tuning and training for more epochs, the autoencoder should be able to achieve similar results.
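The curve in Figure 10\.13 can be reproduced along these lines (a sketch with assumed variable names; `errors.test.normal` would be computed analogously to `errors.test.abnormal`):
```
# Illustrative sketch: ROC for the autoencoder, using the reconstruction
# errors of the full test set as anomaly scores.
errors.test.all <- c(errors.test.normal, errors.test.abnormal)
labels.test.all <- c(rep(0, length(errors.test.normal)),
                     rep(1, length(errors.test.abnormal)))
roc_ae <- roc.curve(scores.class0 = errors.test.all,
                    weights.class0 = labels.test.all,
                    curve = TRUE, rand.compute = TRUE)
plot(roc_ae, rand.plot = TRUE)
```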
10\.4 Summary
-------------
This chapter presented two anomaly detection models, namely Isolation Forests and autoencoders. Examples of how those models can be used for abnormal trajectory detection were also presented. This chapter also introduced ROC curves and AUC, which can be used to assess the performance of a model.
* **Isolation Forests** work by generating random partitions of the features until all instances are isolated.
* Abnormal points are more likely to be isolated during the first partitions.
* The average tree path length of abnormal points is smaller than that of normal points.
* An **anomaly score** that ranges between \\(0\\) and \\(1\\) is calculated based on the path length; the closer the score is to \\(1\\), the more likely the point is an anomaly.
* A **ROC curve** is used to visualize the sensitivity and false positive rate of a model for different thresholds.
* The area under the curve **AUC** can be used to summarize the performance of a model.
* A simple **autoencoder** is an artificial neural network whose output layer has the same shape as the input layer.
* Autoencoders are used to encode the data into a lower dimension, from which it can then be reconstructed.
* The **reconstruction error** (loss) is a measure of how distant a prediction is from the ground truth and can be used as an anomaly score.
| Machine Learning |
enriquegit.github.io | https://enriquegit.github.io/behavior-free/appendixInstall.html |
A Setup Your Environment
========================
The examples in this book were tested with R 4\.0\.5\. You can get the latest R version from its official website: www.r\-project.org/
As an IDE, I use RStudio (<https://rstudio.com/>), but you can use your favorite one. Most of the code examples in this book rely on datasets. The following two sections describe how to get and install the datasets and source code. If you want to try out the examples, I recommend following the instructions in the next two sections.
The last section includes instructions on how to install Keras and TensorFlow, which are the required libraries to build and train deep learning models. Deep learning is covered in chapter [8](deeplearning.html#deeplearning). Before that, you don’t need those libraries.
A.1 Installing the Datasets
---------------------------
A compressed file with a collection of most of the datasets used in this book can be downloaded here: [https://github.com/enriquegit/behavior\-free\-datasets](https://github.com/enriquegit/behavior-free-datasets)
Download the datasets collection file (behavior\_book\_datasets.zip) and extract it into a local directory, for example, `C:/datasets/`. This compilation includes most of the datasets. Due to some datasets having large file sizes or license restrictions, not all of them are included in the collection set, but you can download them separately. Even though a dataset may not be included in the compiled set, it will still have a corresponding directory with a README file with instructions on how to obtain it. The following picture shows what the directory structure looks like on my PC.
A.2 Installing the Examples Source Code
---------------------------------------
The examples source code can be downloaded here: [https://github.com/enriquegit/behavior\-free\-code](https://github.com/enriquegit/behavior-free-code)
You can get the code using git or if you are not familiar with it, click on the ‘Code’ button and then ‘Download zip’. Then, extract the file into a local directory of your choice.
There is a directory for each chapter and two additional directories: `auxiliary_functions/` and `install_functions/`.
The `auxiliary_functions/` folder has generic functions that are imported by some other R scripts. In this directory, there is a file called `globals.R`. Open that file and set the variable `datasets_path` to your local path where you downloaded the datasets. For example, I set it to:
```
datasets_path <- "C:/datasets"
```
The `install_functions/` directory has a single script: `install_packages.R`. This script can be used to install all the packages used in the examples (except Keras and TensorFlow, which are covered in the next section). The script reads the packages listed in `listpackages.txt` and tries to install them if they are not present. This is just a convenient way to install everything at once, but you can always install each package individually with the usual `install.packages()` method.
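In essence, the script does something along these lines (an illustrative sketch, not the actual file contents):
```
# Illustrative sketch: install any package from the list that is missing.
pkgs <- readLines("listpackages.txt")
for (p in pkgs) {
  if (!requireNamespace(p, quietly = TRUE)) {
    install.packages(p)
  }
}
```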
When running the examples, it is assumed that the working directory is the same as that of the actual script. For example, if you want to try `indoor_classification.R`, and that script is located in `C:/code/Predicting Behavior with Classification Models/`, then your working directory should be `C:/code/Predicting Behavior with Classification Models/`. On Windows, if RStudio is not already open and is set as the default program for R scripts, double\-clicking a script will launch RStudio and set the working directory.
You can check your current working directory by typing `getwd()` and you can set your working directory with `setwd()`. Alternatively, in RStudio, you can set your working directory in the menu bar ‘Session’ \-\> ‘Set Working Directory’ \-\> ‘To Source File Location’.
A.3 Running Shiny Apps
----------------------
Shiny apps[30](#fn30) are interactive applications written in R. This book includes some shiny apps that demonstrate some of the concepts. Shiny app file names start with the prefix `shiny_` followed by the specific file name. Some have an ‘.Rmd’ extension while others have an ‘.R’ extension. Regardless of the extension, they are run in the same way. Before running shiny apps, make sure you have installed the packages `shiny` and `shinydashboard`.
```
install.packages("shiny")
install.packages("shinydashboard")
```
To run an app, just open the corresponding file in RStudio. RStudio will detect that this is a shiny app and a ‘Run Document’ or ‘Run App’ button will be shown. Click the button to start the app.
A.4 Installing Keras and TensorFlow
-----------------------------------
Keras and TensorFlow are not used until chapter [8](deeplearning.html#deeplearning). It is not necessary to install them before you reach that chapter.
TensorFlow has two main versions: a CPU version and a GPU version. The GPU version takes advantage of the capabilities of some video cards to perform faster operations. The examples in this book can be run with both versions. The following instructions apply to the CPU version. Installing the GPU version involves some platform\-specific details. I recommend installing the CPU version first and, if you want/need to perform faster computations, then going with the GPU version.
Installing Keras with TensorFlow (CPU version) as backend takes four simple steps:
1. If you are on Windows, you need to install Anaconda[31](#fn31). The individual version is free.
2. Install the `keras` R package with `install.packages("keras")`
3. Load `keras` with `library(keras)`
4. Run the `install_keras()` function. This function will install TensorFlow as the backend. If you don’t have Anaconda installed, you will be asked if you want to install Miniconda.
You can test your installation with:
```
library(tensorflow)
tf$constant("Hello World")
#> tf.Tensor(b'Hello World', shape=(), dtype=string)
```
The first time in a session that you run TensorFlow\-related code with the CPU version, you may get warning messages like the following, which you can safely ignore.
```
#> tensorflow/stream_executor/platform/default/dso_loader.cc:55]
#> Could not load dynamic library 'cudart64_101.dll';
#> dlerror: cudart64_101.dll not found
```
If you want to install the GPU version, first, you need to make sure you have a compatible video card. More information on how to install the GPU version is available here <https://keras.rstudio.com/reference/install_keras.html> and here <https://tensorflow.rstudio.com/installation/gpu/local_gpu/>
| Machine Learning |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mss-using-marss-models-to-study-spatial-structure.html |
7\.6 Using MARSS models to study spatial structure
--------------------------------------------------
For our next example, we will use MARSS models to test hypotheses about the population structure of harbor seals on the west coast. For this example, we will evaluate the support for different population structures (numbers of subpopulations) using different \\(\\mathbf{Z}\\)s to specify how survey regions map onto subpopulations. We will assume correlated process errors with the same magnitude of process variance and covariance. We will assume independent observation errors with equal variances at each site. We could use unequal variances, but that takes a long time to fit, so for this example the observation variances are set equal.
The dataset we will use is `harborSeal`, a 29\-year dataset of abundance indices for 12 regions along the U.S. west coast, from 1975 to 2004 (Figure [7\.5](sec-mss-using-marss-models-to-study-spatial-structure.html#fig:mss-Cs02-fig1)).
We start by setting up our data matrix. We will leave off Hood Canal.
```
dat <- MARSS::harborSeal
years <- dat[, "Year"]
good <- !(colnames(dat) %in% c("Year", "HoodCanal"))
sealData <- t(dat[, good])
```
Figure 7\.5: Plot of log counts at each survey region in the harborSeal dataset. Each region is an index of the harbor seal abundance in that region.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mss-hypotheses-regarding-spatial-structure.html |
7\.7 Hypotheses regarding spatial structure
-------------------------------------------
We will evaluate the data support for the following hypotheses about the population structure:
* H1: `stock` 3 subpopulations defined by management units
* H2: `coast+PS` 2 subpopulations defined by coastal versus WA inland
* H3: `N+S` 2 subpopulations defined by north and south split in the middle of Oregon
* H4: `NC+strait+PS+SC` 4 subpopulations defined by N coastal, S coastal, SJF\+Georgia Strait, and Puget Sound
* H5: `panmictic` All regions are part of the same panmictic population
* H6: `site` Each of the 11 regions is a subpopulation
These hypotheses translate to these \\(\\mathbf{Z}\\) matrices (H6 not shown; it is an identity matrix):
\\\[\\begin{equation\*}
\\begin{array}{rcccc}
\&H1\&H2\&H4\&H5\\\\
\&\\text{pnw ps ca}\&\\text{coast pc}\&\\text{nc is ps sc}\&\\text{pan}\\\\
\\hline
\\begin{array}{r}\\text{Coastal Estuaries}\\\\ \\text{Olympic Peninsula} \\\\ \\text{Str. Juan de Fuca} \\\\ \\text{San Juan Islands} \\\\
\\text{Eastern Bays} \\\\ \\text{Puget Sound} \\\\ \\text{CA Mainland} \\\\ \\text{CA Channel Islands} \\\\ \\text{OR North Coast} \\\\
\\text{OR South Coast} \\\\ \\text{Georgia Strait} \\end{array}\&
\\begin{bmatrix}
1 \& 0 \& 0 \\\\
1 \& 0 \& 0 \\\\
0 \& 1 \& 0 \\\\
0 \& 1 \& 0 \\\\
0 \& 1 \& 0 \\\\
0 \& 1 \& 0 \\\\
0 \& 0 \& 1 \\\\
0 \& 0 \& 1 \\\\
1 \& 0 \& 0 \\\\
1 \& 0 \& 0 \\\\
0 \& 1 \& 0
\\end{bmatrix}\&
\\begin{bmatrix}
1 \& 0 \\\\
1 \& 0 \\\\
0 \& 1 \\\\
0 \& 1 \\\\
0 \& 1 \\\\
0 \& 1 \\\\
1 \& 0 \\\\
1 \& 0 \\\\
1 \& 0 \\\\
1 \& 0 \\\\
0 \& 1
\\end{bmatrix}\&
\\begin{bmatrix}
1 \& 0 \& 0 \& 0\\\\
1 \& 0 \& 0 \& 0\\\\
0 \& 1 \& 0 \& 0\\\\
0 \& 1 \& 0 \& 0\\\\
0 \& 0 \& 1 \& 0\\\\
0 \& 0 \& 1 \& 0\\\\
0 \& 0 \& 0 \& 1\\\\
0 \& 0 \& 0 \& 1\\\\
1 \& 0 \& 0 \& 0\\\\
0 \& 0 \& 0 \& 1\\\\
0 \& 1 \& 0 \& 0
\\end{bmatrix}\&
\\begin{bmatrix}
1 \\\\
1 \\\\
1 \\\\
1 \\\\
1 \\\\
1 \\\\
1 \\\\
1 \\\\
1 \\\\
1 \\\\
1
\\end{bmatrix}
\\end{array}
\\end{equation\*}\\]
To tell `MARSS()` the form of \\(\\mathbf{Z}\\), we construct the same matrix in R. For example, for hypothesis 1, we can write:
```
Z.model <- matrix(0, 11, 3)
Z.model[c(1, 2, 9, 10), 1] <- 1 # which elements in col 1 are 1
Z.model[c(3:6, 11), 2] <- 1 # which elements in col 2 are 1
Z.model[7:8, 3] <- 1 # which elements in col 3 are 1
```
Or we can use a short\-cut by specifying \\(\\mathbf{Z}\\) as a factor that has the name of the subpopulation associated with each row in \\(\\mathbf{y}\\). For hypothesis 1, this is
```
Z1 <- factor(c("pnw", "pnw", rep("ps", 4), "ca", "ca", "pnw",
"pnw", "ps"))
```
Notice it is 11 elements in length; one element for each row of data.
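To see that the short\-cut encodes the same structure, the factor can be expanded into an indicator matrix (an illustrative check; the column order follows the factor levels, so it may differ from the hand\-built `Z.model`):
```
# Expand the factor into an 11 x 3 indicator matrix
# (one column per subpopulation, ordered by factor level).
Z.check <- unname(model.matrix(~ Z1 - 1))
dim(Z.check)
```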
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mss-set-up-the-hypotheses-as-different-models.html |
7\.8 Set up the hypotheses as different models
----------------------------------------------
Only the \\(\\mathbf{Z}\\) matrices change for our model. We will set up a base model list used for all models.
```
mod.list <- list(
B = "identity",
U = "unequal",
Q = "equalvarcov",
Z = "placeholder",
A = "scaling",
R = "diagonal and equal",
x0 = "unequal",
tinitx = 0
)
```
Then we set up the \\(\\mathbf{Z}\\) matrices using the factor short\-cut.
```
Z.models <- list(
H1 = factor(c("pnw", "pnw", rep("ps", 4), "ca", "ca", "pnw", "pnw", "ps")),
H2 = factor(c(rep("coast", 2), rep("ps", 4), rep("coast", 4), "ps")),
H3 = factor(c(rep("N", 6), "S", "S", "N", "S", "N")),
H4 = factor(c("nc", "nc", "is", "is", "ps", "ps", "sc", "sc", "nc", "sc", "is")),
H5 = factor(rep("pan", 11)),
H6 = factor(1:11) # site
)
names(Z.models) <-
c("stock", "coast+PS", "N+S", "NC+strait+PS+SC", "panmictic", "site")
```
### 7\.8\.1 Fit the models
We loop through the models, fit each one, and store the results:
```
out.tab <- NULL
fits <- list()
for (i in 1:length(Z.models)) {
mod.list$Z <- Z.models[[i]]
fit <- MARSS::MARSS(sealData, model = mod.list, silent = TRUE,
control = list(maxit = 1000))
out <- data.frame(H = names(Z.models)[i], logLik = fit$logLik,
AICc = fit$AICc, num.param = fit$num.params, m = length(unique(Z.models[[i]])),
num.iter = fit$numIter, converged = !fit$convergence)
out.tab <- rbind(out.tab, out)
fits <- c(fits, list(fit))
}
```
We will use AICc and AIC weights to summarize the data support for the different hypotheses. First we will sort the fits based on AICc:
```
min.AICc <- order(out.tab$AICc)
out.tab.1 <- out.tab[min.AICc, ]
```
Next we add the \\(\\Delta\\)AICc values by subtracting the lowest AICc:
```
out.tab.1 <- cbind(out.tab.1, delta.AICc = out.tab.1$AICc - out.tab.1$AICc[1])
```
Relative likelihood is defined as \\(\\,\\text{exp}(\-\\Delta \\mathrm{AICc}/2\)\\).
```
out.tab.1 <- cbind(out.tab.1, rel.like = exp(-1 * out.tab.1$delta.AICc/2))
```
The AIC weight for a model is its relative likelihood divided by the sum of all the relative likelihoods.
```
out.tab.1 <- cbind(out.tab.1, AIC.weight = out.tab.1$rel.like/sum(out.tab.1$rel.like))
```
Let’s look at the model weights (`out.tab.1`):
```
H delta.AICc AIC.weight converged
NC+strait+PS+SC 0.00 0.979 TRUE
site 7.65 0.021 TRUE
N+S 36.97 0.000 TRUE
stock 47.02 0.000 TRUE
coast+PS 48.78 0.000 TRUE
panmictic 71.67 0.000 TRUE
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mss-multivariate-state-space-models-with-jags.html |
7\.9 Fitting a MARSS model with JAGS
------------------------------------
Here we show you how to fit a MARSS model for the harbor seal data using JAGS. We will focus on four time series from inland Washington and set up the data as follows:
```
data(harborSealWA, package = "MARSS")
sites <- c("SJF", "SJI", "EBays", "PSnd")
Y <- harborSealWA[, sites]
Y <- t(Y) # time across columns
```
We will fit the model with four temporally independent subpopulations with the same population growth rate (\\(u\\)) and year\-to\-year variance (\\(q\\)). This is the model in Section [7\.4](sec-mss-segind.html#sec-mss-segind).
### 7\.9\.1 Writing the model in JAGS
The first step is to write this model in JAGS. See Chapter [12](chap-jags.html#chap-jags) for more information on and examples of JAGS models.
```
jagsscript <- cat("
model {
U ~ dnorm(0, 0.01);
tauQ~dgamma(0.001,0.001);
Q <- 1/tauQ;
# Estimate the initial state vector of population abundances
for(i in 1:nSites) {
X[i,1] ~ dnorm(3,0.01); # vague normal prior
}
# Autoregressive process for remaining years
for(t in 2:nYears) {
for(i in 1:nSites) {
predX[i,t] <- X[i,t-1] + U;
X[i,t] ~ dnorm(predX[i,t], tauQ);
}
}
# Observation model
# The Rs are different in each site
for(i in 1:nSites) {
tauR[i]~dgamma(0.001,0.001);
R[i] <- 1/tauR[i];
}
for(t in 1:nYears) {
for(i in 1:nSites) {
Y[i,t] ~ dnorm(X[i,t],tauR[i]);
}
}
}
",
file = "marss-jags.txt")
```
### 7\.9\.2 Fit the JAGS model
Then we write the data list, parameter list, and pass the model to the `jags()` function:
```
jags.data <- list(Y = Y, nSites = nrow(Y), nYears = ncol(Y)) # named list
jags.params <- c("X", "U", "Q", "R")
model.loc <- "marss-jags.txt" # name of the txt file
mod_1 <- jags(jags.data, parameters.to.save = jags.params, model.file = model.loc,
n.chains = 3, n.burnin = 5000, n.thin = 1, n.iter = 10000,
DIC = TRUE)
```
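The `jags()` call above comes from the **R2jags** package (an inference from the argument names; load it with `library(R2jags)` before running the code). Once the model has run, a quick convergence check is worthwhile; a minimal sketch:
```
library(R2jags)
# posterior summary table; Rhat values near 1 and large n.eff
# suggest the chains have converged and mixed well
print(mod_1)
```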
### 7\.9\.3 Plot the posteriors for the estimated states
We can plot any of the variables we chose to return to R in the `jags.params` list. Let's focus on the `X`. The posterior samples of `X` are stored as a 3\-dimensional array (MCMC samples \\(\\times\\) sites \\(\\times\\) years), so we can use the `apply()` function to calculate the means and 95 percent CIs of the estimated states.
```
# attach.jags attaches the jags.params to our workspace
attach.jags(mod_1)
means <- apply(X, c(2, 3), mean)
upperCI <- apply(X, c(2, 3), quantile, 0.975)
lowerCI <- apply(X, c(2, 3), quantile, 0.025)
par(mfrow = c(2, 2))
nYears <- ncol(Y)
for (i in 1:nrow(means)) {
plot(means[i, ], lwd = 3, ylim = range(c(lowerCI[i, ], upperCI[i,
    ])), type = "n", main = rownames(Y)[i], ylab = "log abundance",
    xlab = "time step")
polygon(c(1:nYears, nYears:1, 1), c(upperCI[i, ], rev(lowerCI[i,
    ]), upperCI[i, 1]), col = "skyblue", lty = 0)
lines(means[i, ], lwd = 3)
}
```
Figure 7\.6: Plot of the posterior means and credible intervals for the estimated states.
```
detach.jags()
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-marss-fitting-with-stan.html |
7\.10 Fitting a MARSS model with Stan
-------------------------------------
Let’s fit the same model as in Section [7\.9](sec-mss-multivariate-state-space-models-with-jags.html#sec-mss-multivariate-state-space-models-with-jags) with Stan using the **rstan** package. If you have not already, you will need to install the **rstan** package. This package depends on a number of other packages which should install automatically when you install **rstan**.
First we write the model. We could write this to a file (recommended), but for this example, we write it as a character object. Though the syntax is different from the JAGS code, it has many similarities. Note that Stan does not allow missing values in the data, thus we need to pass in only the non\-missing values along with the row and column indices of those values. The latter is so we can match them to the appropriate state (\\(x\\)) values.
```
scode <- "
data {
int<lower=0> TT; // length of ts
int<lower=0> N; // num of ts; rows of y
int<lower=0> n_pos; // number of non-NA values in y
int<lower=0> col_indx_pos[n_pos]; // col index of non-NA vals
int<lower=0> row_indx_pos[n_pos]; // row index of non-NA vals
vector[n_pos] y;
}
parameters {
vector[N] x0; // initial states
real u;
vector[N] pro_dev[TT]; // refed as pro_dev[TT,N]
real<lower=0> sd_q;
real<lower=0> sd_r[N]; // obs variances are different
}
transformed parameters {
vector[N] x[TT]; // refed as x[TT,N]
for(i in 1:N){
x[1,i] = x0[i] + u + pro_dev[1,i];
for(t in 2:TT) {
x[t,i] = x[t-1,i] + u + pro_dev[t,i];
}
}
}
model {
sd_q ~ cauchy(0,5);
for(i in 1:N){
x0[i] ~ normal(y[i],10); // assume no missing y[1]
sd_r[i] ~ cauchy(0,5);
for(t in 1:TT){
pro_dev[t,i] ~ normal(0, sd_q);
}
}
u ~ normal(0,2);
for(i in 1:n_pos){
y[i] ~ normal(x[col_indx_pos[i], row_indx_pos[i]], sd_r[row_indx_pos[i]]);
}
}
generated quantities {
vector[n_pos] log_lik;
for (n in 1:n_pos) log_lik[n] = normal_lpdf(y[n] | x[col_indx_pos[n], row_indx_pos[n]], sd_r[row_indx_pos[n]]);
}
"
```
Then we call `stan()` and pass in the data, the names of the parameters we wish to have returned, and information on the number of chains, samples (`iter`), and thinning. The output is verbose (hidden here) and may have some warnings.
```
ypos <- Y[!is.na(Y)]
n_pos <- length(ypos) # number on non-NA ys
indx_pos <- which(!is.na(Y), arr.ind = TRUE) # index on the non-NAs
col_indx_pos <- as.vector(indx_pos[, "col"])
row_indx_pos <- as.vector(indx_pos[, "row"])
mod <- rstan::stan(model_code = scode, data = list(y = ypos,
TT = ncol(Y), N = nrow(Y), n_pos = n_pos, col_indx_pos = col_indx_pos,
row_indx_pos = row_indx_pos), pars = c("sd_q", "x", "sd_r",
"u", "x0"), chains = 3, iter = 1000, thin = 1)
```
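Before using the posterior samples, it is good practice to look at the sampler diagnostics; a minimal sketch using standard **rstan** tools:
```
# summary with Rhat and n_eff for the key parameters
print(mod, pars = c("sd_q", "sd_r", "u"))
# visual check that the three chains are mixing
rstan::traceplot(mod, pars = c("u", "sd_q"))
```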
We use `extract()` to extract the parameters from the fitted model and then the means and 95% credible intervals.
```
pars <- rstan::extract(mod)
means <- apply(pars$x, c(2, 3), mean)
upperCI <- apply(pars$x, c(2, 3), quantile, 0.975)
lowerCI <- apply(pars$x, c(2, 3), quantile, 0.025)
colnames(means) <- colnames(upperCI) <- colnames(lowerCI) <- rownames(Y)
```
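Figure 7\.7 (below) can be drawn with a loop analogous to the JAGS plot earlier. A minimal sketch, noting that here `means`, `upperCI` and `lowerCI` are time steps by sites:
```
par(mfrow = c(2, 2))
TT <- ncol(Y)
for (i in 1:ncol(means)) {
    plot(means[, i], type = "n",
        ylim = range(c(lowerCI[, i], upperCI[, i])),
        main = colnames(means)[i], ylab = "log abundance",
        xlab = "time step")
    polygon(c(1:TT, TT:1), c(upperCI[, i], rev(lowerCI[, i])),
        col = "skyblue", lty = 0)
    lines(means[, i], lwd = 3)
}
```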
Figure 7\.7: Estimated level and 95 percent credible intervals.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-mss-problems.html |
7\.11 Problems
--------------
For these questions, use the `harborSealWA` data set in **MARSS**. The data are already logged, but you will need to remove the year column and have time going across the columns not down the rows.
```
require(MARSS)
data(harborSealWA, package = "MARSS")
dat <- t(harborSealWA[, 2:6])
```
The sites are San Juan de Fuca (SJF 3\), San Juan Islands (SJI 4\), Eastern Bays (EBays 5\), Puget Sound (PSnd 6\) and Hood Canal (HC 7\).
Figure: Regions in the harbor seal surveys.
1. Plot the harbor seal data. Use whatever plotting functions you wish (e.g. `ggplot()`, `plot(); points(); lines()`, `matplot()`).
2. Fit a panmictic population model that assumes that each of the 5 sites is observing one “Inland WA” harbor seal population with trend \\(u\\). Assume the observation errors are independent and identical. This means 1 variance on diagonal and 0s on off\-diagonal. This is the default assumption for `MARSS()`.
1. Write the \\(\\mathbf{Z}\\) for this model.
The code to use for making a matrix in Rmarkdown is
```
$$\begin{bmatrix}a & b & 0\\d & e & f\\0 & h & i\end{bmatrix}$$
```
2. Write the \\(\\mathbf{Z}\\) matrix in R using `Z=matrix(...)` and using the factor short\-cut for specifying \\(\\mathbf{Z}\\): `Z=factor(c(...))`.
3. Fit the model using `MARSS()`. What is the estimated trend (\\(u\\))? How fast was the population increasing (percent per year) based on this estimated \\(u\\)?
4. Compute the confidence intervals for the parameter estimates. Compare the intervals using the Hessian approximation and using a parametric bootstrap. What differences do you see between the two approaches? Use this code:
```
library(broom)
tidy(fit)
# set nboot low so it doesn't take forever
tidy(fit, method = "parametric", nboot = 100)
```
5. What does an estimate of \\(\\mathbf{Q}\=0\\) mean? What would the estimated state (\\(x\\)) look like when \\(\\mathbf{Q}\=0\\)?
3. Using the same panmictic population model, compare 3 assumptions about the observation error structure.
* The observation errors are independent with different variances.
* The observation errors are independent with the same variance.
* The observation errors are correlated with the same variance and same correlation.
1. Write the \\(\\mathbf{R}\\) variance\-covariance matrices for each assumption.
2. Create each R matrix in R. To combine, numbers and characters in a matrix use a list matrix like so:
```
A <- matrix(list(0),3,3)
A[1,1] <- "sigma2"
```
3. Fit each model using `MARSS()` and compute the confidence intervals (CIs) for the estimated parameters. Compare the estimated \\(u\\) (the population long\-term trend) along with their CIs. Does the assumption about the observation errors change the \\(u\\) estimate?
4. Plot the state residuals, the ACF of the state residuals, and the histogram of the state residuals for each fit. Are there any issues that you see? Use this code to get your state residuals:
```
MARSSresiduals(fit)$state.residuals[1,]
```
You need the `[1,]` since the residuals are returned as a matrix.
4. Fit a model with 3 subpopulations. 1\=SJF,SJI; 2\=PS,EBays; 3\=HC. The \\(x\\) part of the model is the population structure. Assume that the observation errors are identical and independent (`R="diagonal and equal"`). Assume that the process errors are unique and independent (`Q="diagonal and unequal"`). Assume that the \\(u\\) are unique among the 3 subpopulations.
1. Write the \\(\\mathbf{x}\\) equation. Make sure each matrix in the equation has the right number of rows and columns.
2. Write the \\(\\mathbf{Z}\\) matrix.
3. Write the \\(\\mathbf{Z}\\) in R using `Z=matrix(...)` and using the factor shortcut `Z=factor(c(...))`.
4. Fit the model with `MARSS()`.
5. What do the estimated \\(u\\) and \\(\\mathbf{Q}\\) imply about the population dynamics in the 3 subpopulations?
5. Repeat the fit from Question 4 but assume that the 3 subpopulations covary. Use `Q="unconstrained"`.
1. What does the estimated \\(\\mathbf{Q}\\) matrix tell you about how the 3 subpopulation covary?
2. Compare the AICc from the model in Question 4 and the one with `Q="unconstrained"`. Which is more supported?
3. Fit the model with `Q="equalvarcov"`. Is this more supported based on AICc?
6. Develop the following alternative models for the structure of the inland harbor seal population. For each model assume that the observation errors are identical and independent (`R="diagonal and equal"`). Assume that the process errors covary with equal variance and covariances (`Q="equalvarcov"`).
* 5 subpopulations with unique \\(u\\).
* 5 subpopulations with shared (equal) \\(u\\).
* 5 subpopulations but with \\(u\\) shared in some regions: SJF\+SJI shared, PS\+EBays shared, HC unique.
* 1 panmictic population.
* 3 subpopulations, 1\=SJF,SJI, 2\=PS,EBays, 3\=HC, with unique \\(u\\)
* 2 subpopulations, 1\=SJF,SJI,PS,EBays, 2\=HC, with unique \\(u\\)
1. Fit each model using `MARSS()`.
2. Prepare a table of each model with a column for the AICc values. And a column for \\(\\Delta AICc\\) (AICc minus the lowest AICc in the group). What is the most supported model?
7. Do diagnostics on the model innovations residuals for the 3 subpopulation model from question 4\. Use the following code to get your model residuals. This will put NAs in the model residuals where there is missing data. Then do the tests on each row of `resids`.
```
resids <- MARSSresiduals(fit, type = "tt1")$model.residuals
resids[is.na(dat)] <- NA
```
1. Plot the model residuals.
2. Plot the ACF of the model residuals. Use `acf(..., na.action=na.pass)`.
3. Plot the histogram of the model residuals.
4. Fit an ARIMA model to your model residuals using `forecast::auto.arima()`. Are the best\-fit models what you would expect? Note, we cannot use the Augmented Dickey\-Fuller or KPSS tests when there are missing values in our residuals time series.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-msscov.html |
Chapter 8 MARSS models with covariates
======================================
A script with all the R code in the chapter can be downloaded [here](./Rcode/multivariate-ss-cov.R). The Rmd for this chapter can be downloaded [here](./Rmds/multivariate-ss-cov.Rmd)
### Data and packages
For the chapter examples, we will use the green and bluegreen algae in the Lake Washington plankton data set and the covariates in that dataset. This is a 32\-year time series (1962\-1994\) of monthly plankton counts (cells per mL) from Lake Washington, Washington, USA, with covariates including temperature, total phosphorus (TP), and pH. `lakeWAplanktonTrans` is a transformed version of the raw data used for teaching purposes. Zeros have been replaced with NAs (missing). The logged (natural log) raw plankton counts have been standardized to a mean of zero and variance of 1 (so logged and then z\-scored). Temperature, TP and pH were also z\-scored but not logged (so z\-score of the untransformed values for these covariates). The single missing temperature value was replaced with \-1 and the single missing TP value was replaced with \-0\.3\.
We will use the 10 years of data from 1965\-1974 (Figure [8\.1](sec-msscov-prepare-data.html#fig:msscov-plank-plot)), a decade with particularly high green and bluegreen algae levels.
```
data(lakeWAplankton, package = "MARSS")
# lakeWA
fulldat <- lakeWAplanktonTrans
years <- fulldat[, "Year"] >= 1965 & fulldat[, "Year"] < 1975
dat <- t(fulldat[years, c("Greens", "Bluegreens")])
covariates <- t(fulldat[years, c("Temp", "TP")])
```
Packages:
```
library(MARSS)
library(ggplot2)
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-prepare-data.html |
8\.2 Prepare the plankton data
------------------------------
We will prepare the data by z\-scoring. The original data `lakeWAplanktonTrans` were already z\-scored, but we changed the mean when we subsampled the years so we need to z\-score again.
```
# z-score the response variables
the.mean <- apply(dat, 1, mean, na.rm = TRUE)
the.sigma <- sqrt(apply(dat, 1, var, na.rm = TRUE))
dat <- (dat - the.mean) * (1/the.sigma)
```
Next we set up the covariate data, temperature and total phosphorous. We z\-score the covariates to standardize and remove the mean.
```
the.mean <- apply(covariates, 1, mean, na.rm = TRUE)
the.sigma <- sqrt(apply(covariates, 1, var, na.rm = TRUE))
covariates <- (covariates - the.mean) * (1/the.sigma)
```
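As a quick sanity check (the same convention used for the homework data later in this chapter), confirm that each row now has mean 0 and variance 1:
```
apply(covariates, 1, mean, na.rm = TRUE)  # should be ~0
apply(covariates, 1, var, na.rm = TRUE)   # should be ~1
```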
Figure 8\.1: Time series of Green and Bluegreen algae abundances in Lake Washington along with the temperature and total phosphorus covariates.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-obs-error-only.html |
8\.3 Observation\-error only model
----------------------------------
We can estimate the effect of the covariates using a process\-error only model, an observation\-error only model, or a model with both types of error. An observation\-error only model is a multivariate regression, and we will start here so you see the relationship of MARSS model to more familiar linear regression models.
In a standard multivariate linear regression, we only have an observation model with independent errors (the state process does not appear in the model):
\\\[\\begin{equation}
\\mathbf{y}\_t \= \\mathbf{a} \+ \\mathbf{D}\\mathbf{d}\_t \+ \\mathbf{v}\_t, \\text{ where } \\mathbf{v}\_t \\sim \\text{MVN}(0,\\mathbf{R})
\\tag{8\.2}
\\end{equation}\\]
The elements in \\(\\mathbf{a}\\) are the intercepts and those in \\(\\mathbf{D}\\) are the slopes (effects). We have dropped the \\(t\\) subscript on \\(\\mathbf{a}\\) and \\(\\mathbf{D}\\) because these will be modeled as time\-constant. Writing this out for the two plankton and the two covariates we get:
\\\[\\begin{equation}
\\begin{split}
\\begin{bmatrix}
y\_{g} \\\\
y\_{bg} \\end{bmatrix}\_t \&\=
\\begin{bmatrix}
a\_1 \\\\
a\_2 \\end{bmatrix} \+
\\begin{bmatrix}
\\beta\_{\\mathrm{g,temp}}\&\\beta\_{\\mathrm{g,tp}} \\\\
\\beta\_{\\mathrm{bg,temp}}\&\\beta\_{\\mathrm{bg,tp}} \\end{bmatrix}
\\begin{bmatrix}
\\mathrm{temp} \\\\
\\mathrm{tp} \\end{bmatrix}\_{t} \+
\\begin{bmatrix}
v\_{1} \\\\
v\_{2} \\end{bmatrix}\_t
\\end{split}
\\tag{8\.3}
\\end{equation}\\]
Let’s fit this model with MARSS. The \\(\\mathbf{x}\\) part of the model is irrelevant so we want to fix the parameters in that part of the model. We won’t set \\(\\mathbf{B}\=0\\) or \\(\\mathbf{Z}\=0\\) since that might cause numerical issues for the Kalman filter. Instead we fix them as identity matrices and fix \\(\\mathbf{x}\_0\=0\\) so that \\(\\mathbf{x}\_t\=0\\) for all \\(t\\).
```
Q <- U <- x0 <- "zero"
B <- Z <- "identity"
d <- covariates
A <- "zero"
D <- "unconstrained"
y <- dat # to show relationship between dat & the equation
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, D = D,
d = d, x0 = x0)
kem <- MARSS(y, model = model.list)
```
```
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached.
Log-likelihood: -276.4287
AIC: 562.8573 AICc: 563.1351
Estimate
R.diag 0.706
D.(Greens,Temp) 0.367
D.(Bluegreens,Temp) 0.392
D.(Greens,TP) 0.058
D.(Bluegreens,TP) 0.535
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
We set `A="zero"` because the data and covariates have been demeaned. Of course, one can do multiple regression in R using, say, `lm()`, and that would be much, much faster. The EM algorithm is overkill here, but it is shown so that you can see how a standard multivariate linear regression model is written as a MARSS model in matrix form.
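For comparison, here is a sketch of the same regression with `lm()`, fit one response at a time (`fit.g` and `fit.bg` are names made up for this example; the `- 1` drops the intercept because the data and covariates are demeaned). The slopes should be very close to the `D` estimates above, with small differences mainly reflecting how missing values are handled:
```
fit.g <- lm(dat["Greens", ] ~ covariates["Temp", ] + covariates["TP", ] - 1)
fit.bg <- lm(dat["Bluegreens", ] ~ covariates["Temp", ] + covariates["TP", ] - 1)
coef(fit.g)   # compare to D.(Greens,Temp) and D.(Greens,TP)
coef(fit.bg)  # compare to D.(Bluegreens,Temp) and D.(Bluegreens,TP)
```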
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-proc-error-only.html |
8\.4 Process\-error only model
------------------------------
Now let’s model the data as an autoregressive process observed without error, and incorporate the covariates into the process model. Note that this is much different from typical linear regression models. The \\(\\mathbf{x}\\) part represents our model of the data (in this case plankton species). How is this different from the autoregressive observation errors? Well, we are modeling our data as autoregressive so data at \\(t\-1\\) affects the data at \\(t\\). Population abundances are inherently autoregressive so this model is a bit closer to the underlying mechanism generating the data. Here is our new process model for plankton abundance.
\\\[\\begin{equation}
\\mathbf{x}\_t \= \\mathbf{x}\_{t\-1} \+ \\mathbf{C}\\mathbf{c}\_t \+ \\mathbf{w}\_t, \\text{ where } \\mathbf{w}\_t \\sim \\text{MVN}(0,\\mathbf{Q})
\\tag{8\.4}
\\end{equation}\\]
We can fit this as follows:
```
R <- A <- U <- "zero"
B <- Z <- "identity"
Q <- "equalvarcov"
C <- "unconstrained"
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R,
C = C, c = covariates)
kem <- MARSS(dat, model = model.list)
```
```
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached.
Log-likelihood: -285.0732
AIC: 586.1465 AICc: 586.8225
Estimate
Q.diag 0.7269
Q.offdiag -0.0210
x0.X.Greens -0.5189
x0.X.Bluegreens -0.2431
C.(X.Greens,Temp) -0.0434
C.(X.Bluegreens,Temp) 0.0988
C.(X.Greens,TP) -0.0589
C.(X.Bluegreens,TP) 0.0104
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
Now it looks like temperature has a negative effect on algae. Does that seem right? Also, our log\-likelihood dropped a lot. Well, the data do not look at all like a random walk model (where \\(\\mathbf{B}\=1\\)), which we can see from the plot of the data (Figure [8\.1](sec-msscov-prepare-data.html#fig:msscov-plank-plot)). The data are fluctuating about some mean, so let's switch to a better autoregressive model: a mean\-reverting model. To do this, we will allow the diagonal elements of \\(\\mathbf{B}\\) to be something other than 1\.
```
model.list$B <- "diagonal and unequal"
kem <- MARSS(dat, model = model.list)
```
```
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached.
Log-likelihood: -236.6106
AIC: 493.2211 AICc: 494.2638
Estimate
B.(X.Greens,X.Greens) 0.1981
B.(X.Bluegreens,X.Bluegreens) 0.7672
Q.diag 0.4899
Q.offdiag -0.0221
x0.X.Greens -1.2915
x0.X.Bluegreens -0.4179
C.(X.Greens,Temp) 0.2844
C.(X.Bluegreens,Temp) 0.1655
C.(X.Greens,TP) 0.0332
C.(X.Bluegreens,TP) 0.1340
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
Notice that the log\-likelihood goes up quite a bit, which means that the mean\-reverting model fits the data much better.
With this model, we are estimating \\(\\mathbf{x}\_0\\). If we set `model$tinitx=1`, we will get an error message that the \\(\\mathbf{R}\\) diagonals are equal to 0 and we need to fix `x0`. Because \\(\\mathbf{R}\=0\\), if we set the initial states at \\(t\=1\\), then they are fully determined by the data.
```
x0 <- dat[, 1, drop = FALSE]
model.list$tinitx <- 1
model.list$x0 <- x0
kem <- MARSS(dat, model = model.list)
```
```
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached.
Log-likelihood: -235.4827
AIC: 486.9653 AICc: 487.6414
Estimate
B.(X.Greens,X.Greens) 0.1980
B.(X.Bluegreens,X.Bluegreens) 0.7671
Q.diag 0.4944
Q.offdiag -0.0223
C.(X.Greens,Temp) 0.2844
C.(X.Bluegreens,Temp) 0.1655
C.(X.Greens,TP) 0.0332
C.(X.Bluegreens,TP) 0.1340
Initial states (x0) defined at t=1
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-both-error.html |
8\.5 Both process\- and observation\-error
------------------------------------------
Here is an example where we have both process and observation error but the covariates only affect the process:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}\_t \= \\mathbf{B}\\mathbf{x}\_{t\-1} \+ \\mathbf{C}\\mathbf{c}\_t \+ \\mathbf{w}\_t, \\text{ where } \\mathbf{w}\_t \\sim \\text{MVN}(0,\\mathbf{Q})\\\\
\\mathbf{y}\_t \= \\mathbf{x}\_{t} \+ \\mathbf{v}\_t, \\text{ where } \\mathbf{v}\_t \\sim \\text{MVN}(0,\\mathbf{R}),
\\end{gathered}
\\tag{8\.5}
\\end{equation}\\]
\\(\\mathbf{x}\\) is the true algae abundances and \\(\\mathbf{y}\\) is the observation of the \\(\\mathbf{x}\\)’s.
Let’s say we knew that the observation variance on the algae measurements was about 0\.16 and we wanted to include that known value in the model. To do that, we can simply add \\(\\mathbf{R}\\) to the model list from the process\-error only model in the last example.
```
D <- d <- A <- U <- "zero"
Z <- "identity"
B <- "diagonal and unequal"
Q <- "equalvarcov"
C <- "unconstrained"
c <- covariates
R <- diag(0.16, 2)
x0 <- "unequal"
tinitx <- 1
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R,
D = D, d = d, C = C, c = c, x0 = x0, tinitx = tinitx)
kem <- MARSS(dat, model = model.list)
```
```
Success! abstol and log-log tests passed at 36 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 36 iterations.
Log-likelihood: -240.3694
AIC: 500.7389 AICc: 501.7815
Estimate
B.(X.Greens,X.Greens) 0.30848
B.(X.Bluegreens,X.Bluegreens) 0.76101
Q.diag 0.33923
Q.offdiag -0.00411
x0.X.Greens -0.52614
x0.X.Bluegreens -0.32836
C.(X.Greens,Temp) 0.23790
C.(X.Bluegreens,Temp) 0.16991
C.(X.Greens,TP) 0.02505
C.(X.Bluegreens,TP) 0.14183
Initial states (x0) defined at t=1
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
Note, our estimates of the effect of temperature and total phosphorus are not that different from what you get from a simple multiple regression (our first example). This might be because the autoregressive component is small, meaning the estimated diagonals on the \\(\\mathbf{B}\\) matrix are small.
Here is an example where we have both process and observation error but the covariates only affect the observation process:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}\_t \= \\mathbf{B}\\mathbf{x}\_{t\-1} \+ \\mathbf{w}\_t, \\text{ where } \\mathbf{w}\_t \\sim \\text{MVN}(0,\\mathbf{Q})\\\\
\\mathbf{y}\_t \= \\mathbf{x}\_{t} \+ \\mathbf{D}\\mathbf{d}\_t \+ \\mathbf{v}\_t, \\text{ where } \\mathbf{v}\_t \\sim \\text{MVN}(0,\\mathbf{R}),
\\end{gathered}
\\tag{8\.6}
\\end{equation}\\]
\\(\\mathbf{x}\\) is the true algae abundances and \\(\\mathbf{y}\\) is the observation of the \\(\\mathbf{x}\\)’s.
```
C <- c <- A <- U <- "zero"
Z <- "identity"
B <- "diagonal and unequal"
Q <- "equalvarcov"
D <- "unconstrained"
d <- covariates
R <- diag(0.16, 2)
x0 <- "unequal"
tinitx <- 1
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R,
D = D, d = d, C = C, c = c, x0 = x0, tinitx = tinitx)
kem <- MARSS(dat, model = model.list)
```
```
Success! abstol and log-log tests passed at 45 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 45 iterations.
Log-likelihood: -239.5879
AIC: 499.1759 AICc: 500.2185
Estimate
B.(X.Greens,X.Greens) 0.428
B.(X.Bluegreens,X.Bluegreens) 0.859
Q.diag 0.314
Q.offdiag -0.030
x0.X.Greens -0.121
x0.X.Bluegreens -0.119
D.(Greens,Temp) 0.373
D.(Bluegreens,Temp) 0.276
D.(Greens,TP) 0.042
D.(Bluegreens,TP) 0.115
Initial states (x0) defined at t=1
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-season.html |
8\.6 Including seasonal effects in MARSS models
-----------------------------------------------
Time\-series data are often collected at intervals with some implicit “seasonality.” For example, quarterly earnings for a business, monthly rainfall totals, or hourly air temperatures. In those cases, it is often helpful to extract any recurring seasonal patterns that might otherwise mask some of the other temporal dynamics we are interested in examining.
Here we show a few approaches for including seasonal effects using the Lake Washington plankton data, which were collected monthly. The following examples will use all five phytoplankton species from Lake Washington. First, let’s set up the data.
```
years <- fulldat[, "Year"] >= 1965 & fulldat[, "Year"] < 1975
phytos <- c("Diatoms", "Greens", "Bluegreens", "Unicells", "Other.algae")
dat <- t(fulldat[years, phytos])
# z.score data because we changed the mean when we
# subsampled
the.mean <- apply(dat, 1, mean, na.rm = TRUE)
the.sigma <- sqrt(apply(dat, 1, var, na.rm = TRUE))
dat <- (dat - the.mean) * (1/the.sigma)
# number of time periods/samples
TT <- dim(dat)[2]
```
### 8\.6\.1 Seasonal effects as fixed factors
One common approach for estimating seasonal effects is to treat each one as a fixed factor, such that the number of parameters equals the number of “seasons” (e.g., 24 hours per day, 4 quarters per year). The plankton data are collected monthly, so we will treat each month as a fixed factor. To fit a model with fixed month effects, we create a \\(12 \\times T\\) covariate matrix \\(\\mathbf{c}\\) with one row for each month (Jan, Feb, …) and one column for each time point. We put a 1 in the January row for each column corresponding to a January time point, a 1 in the February row for each column corresponding to a February time point, and so on. All other values of \\(\\mathbf{c}\\) equal 0\. The following code will create such a \\(\\mathbf{c}\\) matrix.
```
# number of 'seasons' (e.g., 12 months per year)
period <- 12
# first 'season' (e.g., Jan = 1, July = 7)
per.1st <- 1
# create factors for seasons
c.in <- diag(period)
for (i in 2:(ceiling(TT/period))) {
c.in <- cbind(c.in, diag(period))
}
# trim c.in to correct start & length
c.in <- c.in[, (1:TT) + (per.1st - 1)]
# better row names
rownames(c.in) <- month.abb
```
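A quick check that the covariate matrix is laid out correctly (every column should contain exactly one month indicator):
```
dim(c.in)                # should be 12 x TT
all(colSums(c.in) == 1)  # should be TRUE
```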
Next we need to set up the form of the \\(\\mathbf{C}\\) matrix which defines any constraints we want to set on the month effects. \\(\\mathbf{C}\\) is a \\(5 \\times 12\\) matrix: five taxa and 12 month effects.
If we wanted each taxon to have the same month effect, i.e. a common month effect across all taxa, then we have the same value in each \\(\\mathbf{C}\\) column:
```
C <- matrix(month.abb, 5, 12, byrow = TRUE)
C
```
```
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
[1,] "Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
[2,] "Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
[3,] "Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
[4,] "Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
[5,] "Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
```
Notice that \\(\\mathbf{C}\\) only has 12 values in it, the 12 common month effects.
However, for this example, we will let each taxon have a different month effect thus allowing different seasonality for each taxon. For this model, we want each value in \\(\\mathbf{C}\\) to be unique:
```
C <- "unconstrained"
```
Now \\(\\mathbf{C}\\) has 5 \\(\\times\\) 12 \= 60 separate effects.
Then we set up the form for the rest of the model parameters. We make the following assumptions:
```
# Each taxon has unique density-dependence
B <- "diagonal and unequal"
# Assume independent process errors
Q <- "diagonal and unequal"
# We have demeaned the data & are fitting a mean-reverting
# model by estimating a diagonal B, thus
U <- "zero"
# Each obs time series is associated with only one process
Z <- "identity"
# The data are demeaned & fluctuate around a mean
A <- "zero"
# We assume observation errors are independent, but they
# have similar variance due to similar collection methods
R <- "diagonal and equal"
# We are not including covariate effects in the obs
# equation
D <- "zero"
d <- "zero"
```
Now we can set up the model list for MARSS and fit the model (results are not shown since they are verbose with 60 different month effects).
```
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R,
C = C, c = c.in, D = D, d = d)
seas.mod.1 <- MARSS(dat, model = model.list, control = list(maxit = 1500))
# Get the estimated seasonal effects rows are taxa, cols
# are seasonal effects
seas.1 <- coef(seas.mod.1, type = "matrix")$C
rownames(seas.1) <- phytos
colnames(seas.1) <- month.abb
```
The top panel in Figure [8\.2](sec-msscov-season.html#fig:msscov-mon-effects) shows the estimated seasonal effects for this model. Note that if we had set `U="unequal"`, we would need to set one of the columns of \\(\\mathbf{C}\\) to zero because the model would be under\-determined (infinite number of solutions). If we subtracted the mean January abundance off each time series, we could set the January column in \\(\\mathbf{C}\\) to 0 and get rid of 5 estimated effects.
### 8\.6\.2 Seasonal effects as a polynomial
The fixed factor approach required estimating 60 effects. Another approach is to model the month effect as a 3rd\-order (or higher) polynomial: \\(a\+b\\times m \+ c\\times m^2 \+ d \\times m^3\\) where \\(m\\) is the month number. This approach has less flexibility but requires only 20 estimated parameters (i.e., 4 regression parameters times 5 taxa). To do so, we create a \\(4 \\times T\\) covariate matrix \\(\\mathbf{c}\\) with the rows corresponding to 1, \\(m\\), \\(m^2\\), and \\(m^3\\), and the columns again corresponding to the time points. Here is how to set up this matrix:
```
# number of 'seasons' (e.g., 12 months per year)
period <- 12
# first 'season' (e.g., Jan = 1, July = 7)
per.1st <- 1
# order of polynomial
poly.order <- 3
# create polynomials of months
month.cov <- matrix(1, 1, period)
for (i in 1:poly.order) {
month.cov = rbind(month.cov, (1:12)^i)
}
# our c matrix is month.cov replicated once for each year
c.m.poly <- matrix(month.cov, poly.order + 1, TT + period, byrow = FALSE)
# trim c.in to correct start & length
c.m.poly <- c.m.poly[, (1:TT) + (per.1st - 1)]
# Everything else remains the same as in the previous
# example
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R,
C = C, c = c.m.poly, D = D, d = d)
seas.mod.2 <- MARSS(dat, model = model.list, control = list(maxit = 1500))
```
The effect of month \\(m\\) for taxon \\(i\\) is \\(a\_i \+ b\_i \\times m \+ c\_i \\times m^2 \+ d\_i \\times m^3\\), where \\(a\_i\\), \\(b\_i\\), \\(c\_i\\) and \\(d\_i\\) are in the \\(i\\)\-th row of \\(\\mathbf{C}\\). We can now calculate the matrix of seasonal effects as follows, where each row is a taxon and each column is a month:
```
C.2 = coef(seas.mod.2, type = "matrix")$C
seas.2 = C.2 %*% month.cov
rownames(seas.2) <- phytos
colnames(seas.2) <- month.abb
```
The middle panel in Figure [8\.2](sec-msscov-season.html#fig:msscov-mon-effects) shows the estimated seasonal effects for this polynomial model.
Note: Setting the covariates up like this means that our covariates are collinear since \\(m\\), \\(m^2\\) and \\(m^3\\) are correlated, obviously. A better approach is to use the `poly()` function to create an orthogonal polynomial covariate matrix `c.m.poly.o`:
```
month.cov.o <- cbind(1, poly(1:period, poly.order))
c.m.poly.o <- matrix(t(month.cov.o), poly.order + 1, TT + period,
byrow = FALSE)
c.m.poly.o <- c.m.poly.o[, (1:TT) + (per.1st - 1)]
```
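To see why the orthogonal version is preferable, compare the correlations among the raw polynomial terms with those among the `poly()` columns; a quick sketch:
```
# the raw m, m^2 and m^3 terms are strongly correlated ...
round(cor(t(month.cov[-1, ])), 2)
# ... while the orthogonal polynomial columns are uncorrelated
round(cor(poly(1:period, poly.order)), 2)
```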
### 8\.6\.3 Seasonal effects as a Fourier series
The factor approach required estimating 60 effects, and the 3rd order polynomial model was an improvement at only 20 parameters. A third option is to use a discrete Fourier series, which is a combination of sine and cosine waves; it would require only 10 parameters. Specifically, the effect of month \\(m\\) on taxon \\(i\\) is \\(a\_i \\times \\cos(2 \\pi m/p) \+ b\_i \\times \\sin(2 \\pi m/p)\\), where \\(p\\) is the period (e.g., 12 months, 4 quarters), and \\(a\_i\\) and \\(b\_i\\) are contained in the \\(i\\)\-th row of \\(\\mathbf{C}\\).
We begin by defining the \\(2 \\times T\\) seasonal covariate matrix \\(\\mathbf{c}\\) as a combination of 1 cosine and 1 sine wave:
```
cos.t <- cos(2 * pi * seq(TT)/period)
sin.t <- sin(2 * pi * seq(TT)/period)
c.Four <- rbind(cos.t, sin.t)
```
Everything else remains the same and we can fit this model as follows:
```
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R,
C = C, c = c.Four, D = D, d = d)
seas.mod.3 <- MARSS(dat, model = model.list, control = list(maxit = 1500))
```
We make our seasonal effect matrix as follows:
```
C.3 <- coef(seas.mod.3, type = "matrix")$C
# The time series of net seasonal effects
seas.3 <- C.3 %*% c.Four[, 1:period]
rownames(seas.3) <- phytos
colnames(seas.3) <- month.abb
```
The bottom panel in Figure [8\.2](sec-msscov-season.html#fig:msscov-mon-effects) shows the estimated seasonal effects for this seasonal\-effects model based on a discrete Fourier series.
Figure 8\.2: Estimated monthly effects for the three approaches to estimating seasonal effects. Top panel: each month modelled as a separate fixed effect for each taxon (60 parameters); Middle panel: monthly effects modelled as a 3rd order polynomial (20 parameters); Bottom panel: monthly effects modelled as a discrete Fourier series (10 parameters).
Rather than rely on our eyes to judge model fits, we should formally assess which of the 3 approaches offers the most parsimonious fit to the data. Here is a table of AICc values for the 3 models:
```
data.frame(Model = c("Fixed", "Cubic", "Fourier"), AICc = round(c(seas.mod.1$AICc,
seas.mod.2$AICc, seas.mod.3$AICc), 1))
```
```
Model AICc
1 Fixed 1188.4
2 Cubic 1144.9
3 Fourier 1127.4
```
The model selection results indicate that the model with monthly seasonal effects estimated via the discrete Fourier series is the best of the 3 models. Its AICc value is much lower than either the polynomial or fixed\-effects models.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-model-diagnostics.html |
8\.7 Model diagnostics
----------------------
We will examine some basic model diagnostics for these three approaches by looking at plots of the model residuals and their autocorrelation functions (ACFs) for all five taxa using the following code:
```
for (i in 1:3) {
dev.new()
modn <- paste("seas.mod", i, sep = ".")
for (j in 1:5) {
resid.j <- MARSSresiduals(get(modn), type = "tt1")$model.residuals[j,
]
plot.ts(resid.j, ylab = "Residual", main = phytos[j])
abline(h = 0, lty = "dashed")
acf(resid.j, na.action = na.pass)
}
}
```
Figure 8\.3: Residuals for model with season modelled as a discrete Fourier series.
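To complement the visual diagnostics with a numeric check, here is a sketch applying a Ljung\-Box test to the innovations residuals of the Fourier model (small p\-values flag remaining autocorrelation; dropping NAs with `na.omit()` is a simplification that ignores gaps in the time ordering):
```
resids <- MARSSresiduals(seas.mod.3, type = "tt1")$model.residuals
apply(resids, 1, function(x) {
    Box.test(na.omit(x), lag = 12, type = "Ljung-Box")$p.value
})
```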
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-hw-discussion.html |
8\.8 Homework data and discussion
---------------------------------
For these problems, use the following code to load in 1980\-1994 phytoplankton data, covariates, and z\-score all the data. Run the code below and use `dat` and `covars` directly in your code.
```
library(MARSS)
spp <- c("Cryptomonas", "Diatoms", "Greens", "Unicells", "Other.algae",
"Daphnia")
yrs <- lakeWAplanktonTrans[, "Year"] %in% 1980:1994
dat <- t(lakeWAplanktonTrans[yrs, spp])
# z-score the data
avg <- apply(dat, 1, mean, na.rm = TRUE)
sd <- sqrt(apply(dat, 1, var, na.rm = TRUE))
dat <- (dat - avg)/sd
rownames(dat) = spp
# always check that the mean and variance are 1 after
# z-scoring
apply(dat, 1, mean, na.rm = TRUE) #this should be 0
apply(dat, 1, var, na.rm = TRUE) #this should be 1
```
For the covariates, you’ll use temperature and TP.
```
covars <- rbind(Temp = lakeWAplanktonTrans[yrs, "Temp"], TP = lakeWAplanktonTrans[yrs,
"TP"])
avg <- apply(covars, 1, mean)
sd <- sqrt(apply(covars, 1, var, na.rm = TRUE))
covars <- (covars - avg)/sd
rownames(covars) <- c("Temp", "TP")
# always check that the mean and variance are 1 after
# z-scoring
apply(covars, 1, mean, na.rm = TRUE) #this should be 0
apply(covars, 1, var, na.rm = TRUE) #this should be 1
```
Here are some guidelines to help you answer the questions; a starter model list is sketched after the list:
* Use a MARSS model that allows for both observation and process error.
* Assume that the observation errors are independent and identically distributed with known variance of 0\.10\.
* Assume that the process errors are independent from one another, but the variances differ by taxon.
* Assume that each group is an observation of its own process. This means `Z="identity"`.
* Use `B="diagonal and unequal"`. This implies that each of the taxa are operating under varying degrees of density\-dependence, and they are not allowed to interact.
* All the data have been de\-meaned and \\(\\mathbf{Z}\\) is identity, therefore use `U="zero"` and `A="zero"`. Make sure to check that the means of the data are 0 and the variance is 1\.
* Use `tinitx=1`. It makes \\(\\mathbf{B}\\) estimation more stable. It goes in your model list.
* Include a plot of residuals versus time and acf of residuals for each question. You only need to show these for the top (best) model if the question involves comparing multiple models.
* Use AICc to compare models.
Some of the models may not converge; however, for the purpose of the homework, use the unconverged models. Thus use the output from `MARSS()` without any additional arguments. If you want, you can try using `control=list(maxit=1000)` to increase the number of iterations. Or you can try `method="BFGS"` in your `MARSS()` call. This will use the BFGS optimization method, however it may throw an error for these data.
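Putting the guidelines together, here is a starter model list (a sketch; the matrix names follow the `MARSS()` conventions used throughout this chapter, and 6 is the number of taxa in `dat`):
```
mod.list <- list(
    Z = "identity",              # each taxon observes its own process
    B = "diagonal and unequal",  # taxon-specific density-dependence
    Q = "diagonal and unequal",  # independent process errors, unequal variances
    R = diag(0.1, 6),            # known observation variance of 0.10
    U = "zero", A = "zero",      # the data are demeaned
    tinitx = 1                   # makes B estimation more stable
)
fit <- MARSS(dat, model = mod.list)
```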
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-msscov-problems.html |
8\.9 Problems
-------------
Read Section [8\.8](sec-msscov-hw-discussion.html#sec-msscov-hw-discussion) for the data and tips on answering the questions and setting up your models. Note the questions asking about the effects on *growth rate* are asking about the *C* matrix in
\\\[\\mathbf{x}\_t\=\\mathbf{B}\\mathbf{x}\_{t\-1}\+\\mathbf{C}\\mathbf{c}\_t\+\\mathbf{w}\_t\\]
The \\(\\mathbf{C}\\mathbf{c}\_t\+\\mathbf{w}\_t\\) are the process errors and represent the growth rates (growth above or below what you would expect given \\(\\mathbf{x}\_{t\-1}\\)). Use your raw data in the MARSS model. You do not need to difference the data to get at the growth rates since the process model is modeling that.
1. How does month affect the mean phytoplankton population growth rates? Show a plot of the estimated mean growth rate versus month for each taxon using three approaches to estimate the month effect (factor, polynomial, Fourier series). Estimate seasonal effects without any covariate (Temp, TP) effects.
2. It is likely that both temperature and total phosphorus (TP) affect phytoplankton population growth rates. Using MARSS models, estimate the effect of Temp and TP on growth rates of each taxon.
Leave out the seasonal covariates from question 1, i.e. only use Temp and TP as covariates. Make a plot of the point estimates of the Temp and TP effects with the 95% CIs added to the plot. `tidy()` is an easy way to get the parameters CIs.
3. Estimate the Temp and TP effects using `B="unconstrained"`.
1. Compare the \\(\\mathbf{B}\\) matrix for the fit from question 2 and from question 3\. Describe the species interactions modeled by the \\(\\mathbf{B}\\) matrix when `B="unconstrained"`. How is it different than the \\(\\mathbf{B}\\) matrix from question 2? Note, you can retrieve the matrix using `coef(fit, type="matrix")$B`.
2. Do the Temp and TP effects change when you use `B="unconstrained"`? Make sure to look at the CIs also.
4. Using MARSS models, evaluate which (Temp or TP) is the more important driver or if both are important. Again, leave out the seasonal covariates from question 1, i.e. only use Temp and TP as covariates. Compare two approaches: comparison of effect sizes in a model with both Temp and TP and model selection using a set of models.
5. Evaluate whether the effect of temperature (Temp) on the taxa manifests itself via their underlying physiology (by affecting growth rates and thus abundance) or because physical changes in the water stratification makes them easier/harder to sample in some months. Leave out the seasonal covariates from question 1, i.e. only use Temp and TP as the covariates. For TP, assume it always affects growth rates, never the observation errors.
6. Is there support for temperature or TP affecting all functional groups’ growth rates the same, or are the effects on one taxon different from another? Make sure to test all possibilities: the Temp and TP effects are the same for all taxa, and one covariate effect is the same across taxa while the other’s effects are unique across taxa.
7. Compare your results for question 2 using linear regression, by using the `lm()` function. You’ll need to look at the response of each taxon separately, i.e. one response variable. You can have a multivariate response variable with `lm()`, but the function will be doing 6 independent linear regressions (a sketch of the `lm()` call follows this question). In your `lm()` model, use only Temp and TP (and intercept) as covariates. Compare the estimated effects to those from question 2\. How are they different? How is this model different from the model you fit in question 2?
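As a sketch of that comparison (reusing `dat` and `covars` from the setup above), `lm()` accepts a matrix response, in which case it fits one independent regression per taxon:

```
## matrix response = one independent linear regression per taxon
fit_lm <- lm(t(dat) ~ covars["Temp", ] + covars["TP", ])
coef(fit_lm)  # one column of (intercept, Temp, TP) estimates per taxon
```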
8. Temp and TP are negatively correlated (cor \= \-0\.66\). A common threshold for collinearity in regression models is 0\.7\. Temp and TP fall below that but are close. One approach to collinearity is sequential regression ([Dormann et al. 2013](references.html#ref-Dormannetal2013)). The first (most influential) covariate is included ‘as is’ and the second covariate appears as the residuals of a regression of the second against the first. The covariates are now orthogonal; however, the second covariate is conditioned on the first. If we see an effect of the residuals covariate, it is the effect of TP additional to the contribution it already made through its relationship with temperature. Rerun question 2 using sequential regression (see code below).
Make your Temp and TP covariates orthogonal using sequential regression. Do your conclusions about the effects of Temperature and TP change?
Below is code to construct your orthogonal covariates for sequential regression.
```
fit <- lm(covars[1, ] ~ covars[2, ])
seqcovs <- rbind(covars[1, ], residuals(fit))
avg <- apply(seqcovs, 1, mean)
sd <- sqrt(apply(seqcovs, 1, var, na.rm = TRUE))
seqcovs <- (seqcovs - avg)/sd
rownames(seqcovs) <- c("Temp", "TPresids")
```
9. Compare the AICc’s of the 3 seasonal models from question 1 and the 4 Temp/TP models from question 5\. What does this tell you about the Temp and TP only models?
10. We cannot just fit a model with season and Temp plus TP, since Temp and TP are highly seasonal. That will cause problems if we have something that explains season (a polynomial) and a covariate that has seasonality. Instead, use sequential regression to fit a model with seasonality, Temp and TP. Use a 3rd order polynomial with the `poly()` function to create orthogonal season covariates and then use sequential regression (code in problem 8\) to create Temp and TP covariates that are orthogonal to your season covariates. Fit the model and compare a model with only season to a model with season and Temp plus TP.
11. Another approach to looking at effects of covariates which have season cycles is to examine if the seasonal anomalies of the independent variable can be explained by the seasonal anomalies of the dependent variables. In other words, can an unusually high February abundance (higher than expected) be explained by an unusually high or low February temperature? In this approach, you remove season so you do not need to model it (with factor, polynomial, etc). The `stl()` function can be used to decompose a time series using LOESS. We’ll use `stl()` since it can handle missing values.
1. Decompose the Diatom time series using `stl()` and plot. Use `na.action=zoo::na.approx` to deal with the NAs. Use `s.window="periodic"`. Other than that you can use the defaults.
```
i <- "Diatoms"
dati <- ts(dat[i, ], frequency = 12)
a <- stl(dati, "periodic", na.action = zoo::na.approx)
```
2. Create dependent variables and covariates that are anomalies by modifying the following code. For the anomaly, you will use the remainder plus the trend. You will need to adapt this code to create the anomalies for Temp and TP and for `dat` (your data).
```
i <- "Diatoms"
a <- stl(ts(dat[i, ], frequency = 12), "periodic", na.action = zoo::na.approx)
anom <- a$time.series[, "remainder"] + a$time.series[, "trend"]
```
3. Notice that you have simply removed the seasonal cycle from the data. Using the seasonal anomalies (from part b), estimate the effect of Temp and TP on each taxon’s growth rate. You will use the same model as in question 2, but use the seasonal anomalies as data and covariates.
```
anoms <- matrix(NA, dim(dat)[1] + dim(covars)[1], dim(dat)[2])
rownames(anoms) <- c(rownames(dat), rownames(covars))
for (i in 1:dim(dat)[1]) {
a <- stl(ts(dat[i, ], frequency = 12), "periodic", na.action = zoo::na.approx)
anoms[i, ] <- a$time.series[, "remainder"] + a$time.series[,
"trend"]
}
for (i in 1:dim(covars)[1]) {
a <- stl(ts(covars[i, ], frequency = 12), "periodic", na.action = zoo::na.approx)
anoms[i + dim(dat)[1], ] <- a$time.series[, "remainder"] +
a$time.series[, "trend"]
}
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-dlm-dynamic-linear-models.html |
Chapter 9 Dynamic linear models
===============================
Dynamic linear models (DLMs) are a type of linear regression model, wherein the parameters are treated as time\-varying rather than static. DLMs are used commonly in econometrics, but have received less attention in the ecological literature (c.f. [Lamon, Carpenter, and Stow 1998](references.html#ref-Lamonetal1998); [Scheuerell and Williams 2005](references.html#ref-ScheuerellWilliams2005)). Our treatment of DLMs is rather cursory—we direct the reader to excellent textbooks by [Pole, West, and Harrison](references.html#ref-Poleetal1994) ([1994](references.html#ref-Poleetal1994)) and [Petris, Petrone, and Campagnoli](references.html#ref-Petrisetal2009) ([2009](references.html#ref-Petrisetal2009)) for more in\-depth treatments of DLMs. The former focuses on Bayesian estimation whereas the latter addresses both likelihood\-based and Bayesian estimation methods.
A script with all the R code in the chapter can be downloaded [here](./Rcode/DLM.R). The Rmd for this chapter can be downloaded [here](./Rmds/DLM.Rmd).
### Data
Most of the data used in the chapter are from the **MARSS** package. Install the package, if needed, and load:
```
library(MARSS)
```
The problem set uses an additional data set on spawners and recruits (`KvichakSockeye`) in the `atsalibrary` package.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dlm-simple-examples.html |
9\.3 Stochastic level models
----------------------------
The simplest DLM is a stochastic level model, where the level is a random walk without drift, and this level is observed with error. We will write it first using regression notation, where the intercept is \\(\\alpha\\), and then in MARSS notation. In the latter, \\(\\alpha\_t\=x\_t\\).
\\\[\\begin{equation}
\\begin{gathered}
\\tag{9\.5}
y\_t \= \\alpha\_t \+ e\_t \\\\
\\alpha\_t \= \\alpha\_{t\-1} \+ w\_t \\\\
\\Downarrow \\\\
y\_t \= x\_t \+ v\_t \\\\
x\_t \= x\_{t\-1} \+ w\_t
\\end{gathered}
\\end{equation}\\]
Using this model, we can model the Nile River level and fit the model using `MARSS()`.
```
## load Nile flow data
data(Nile, package = "datasets")
## define model list
mod_list <- list(B = "identity", U = "zero", Q = matrix("q"),
Z = "identity", A = matrix("a"), R = matrix("r"))
## fit the model with MARSS
fit <- MARSS(matrix(Nile, nrow = 1), mod_list)
```
```
Success! abstol and log-log tests passed at 82 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 82 iterations.
Log-likelihood: -637.7569
AIC: 1283.514 AICc: 1283.935
Estimate
A.a -0.338
R.r 15135.796
Q.q 1381.153
x0.x0 1111.791
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
### 9\.3\.1 Stochastic level with drift
We can add a drift term to the level model to allow the level to tend upward or downward with a deterministic rate \\(\\eta\\). This is a random walk with bias.
\\\[\\begin{equation}
\\begin{gathered}
\\tag{9\.6}
y\_t \= \\alpha\_t \+ e\_t \\\\
\\alpha\_t \= \\alpha\_{t\-1} \+ \\eta \+ w\_t \\\\
\\Downarrow \\\\
y\_t \= x\_t \+ v\_t \\\\
x\_t \= x\_{t\-1} \+ u \+ w\_t
\\end{gathered}
\\end{equation}\\]
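As a sketch, this drift version can be fit to the Nile data by reusing the model list above with a non\-zero \\(u\\); `mod_list_drift` and `fit_drift` are illustrative names.

```
## sketch: random walk with drift (bias), observed with error
mod_list_drift <- list(B = "identity", U = matrix("u"), Q = matrix("q"),
    Z = "identity", A = matrix("a"), R = matrix("r"))
fit_drift <- MARSS(matrix(Nile, nrow = 1), mod_list_drift)
```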
We can allow that the drift term \\(\\eta\\) evolves over time along with the level. In this case, \\(\\eta\\) is modeled as a random walk along with \\(\\alpha\\). This model is
\\\[\\begin{equation}
\\begin{gathered}
\\tag{9\.7}
y\_t \= \\alpha\_t \+ e\_t \\\\
\\alpha\_t \= \\alpha\_{t\-1} \+ \\eta\_{t\-1} \+ w\_{\\alpha,t} \\\\
\\eta\_t \= \\eta\_{t\-1} \+ w\_{\\eta,t}
\\end{gathered}
\\end{equation}\\]
Equation [(9\.7\)](sec-dlm-simple-examples.html#eq:dlm-stoch-level-drift-2) can be written in matrix form as:
\\\[\\begin{equation}
\\begin{gathered}
\\tag{9\.8}
y\_t \= \\begin{bmatrix}1\&0\\end{bmatrix}\\begin{bmatrix}
\\alpha \\\\
\\eta
\\end{bmatrix}\_t \+ v\_t \\\\
\\begin{bmatrix}
\\alpha \\\\
\\eta
\\end{bmatrix}\_t \= \\begin{bmatrix}
1 \& 1 \\\\
0 \& 1
\\end{bmatrix}\\begin{bmatrix}
\\alpha \\\\
\\eta
\\end{bmatrix}\_{t\-1} \+ \\begin{bmatrix}
w\_{\\alpha} \\\\
w\_{\\eta}
\\end{bmatrix}\_t
\\end{gathered}
\\end{equation}\\]
Equation [(9\.8\)](sec-dlm-simple-examples.html#eq:dlm-stoch-level-drift-3) is a MARSS model.
\\\[\\begin{equation}
y\_t \= \\mathbf{Z}\\mathbf{x}\_t \+ v\_t \\\\
\\mathbf{x}\_t \= \\mathbf{B}\\mathbf{x}\_{t\-1} \+ \\mathbf{w}\_t
\\end{equation}\\]
where \\(\\mathbf{B}\=\\begin{bmatrix} 1 \& 1 \\\\ 0 \& 1\\end{bmatrix}\\), \\(\\mathbf{x}\=\\begin{bmatrix}\\alpha \\\\ \\eta\\end{bmatrix}\\) and \\(\\mathbf{Z}\=\\begin{bmatrix}1\&0\\end{bmatrix}\\).
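As a sketch, Equation (9\.8\) translates directly into a `MARSS()` call (object names are illustrative; independent process errors for the level and drift are an assumption here).

```
## sketch: stochastic level + stochastic drift model of Equation (9.8)
B <- matrix(c(1, 0, 1, 1), 2, 2)   # [1 1; 0 1], filled column-wise
Z <- matrix(c(1, 0), 1, 2)
mod_list_2 <- list(B = B, U = "zero", Q = "diagonal and unequal",
    Z = Z, A = matrix("a"), R = matrix("r"))
fit_2 <- MARSS(matrix(Nile, nrow = 1), mod_list_2)
```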
See Section [6\.2](sec-uss-examples-using-the-nile-river-data.html#sec-uss-examples-using-the-nile-river-data) for more discussion of stochastic level models and the section on the `StructTS()` function ([sec\-uss\-the\-structts\-function](sec-uss-the-structts-function.html)) to see how to fit this model with `StructTS()` in the **stats** package.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dlm-fitting-a-univariate-dlm-with-marss.html |
9\.7 Fitting with `MARSS()`
---------------------------
Now let’s go ahead and analyze the DLM specified in Equations [(9\.19\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW1)–[(9\.21\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW3). We begin by loading the data set (which is in the **MARSS** package). The data set has 3 columns for 1\) the year the salmon smolts migrated to the ocean (`year`), 2\) logit\-transformed survival (`logit.s`), and 3\) the coastal upwelling index for April (`CUI.apr`). There are 42 years of data (1964–2005\).
```
## load the data
data(SalmonSurvCUI, package = "MARSS")
## get time indices
years <- SalmonSurvCUI[, 1]
## number of years of data
TT <- length(years)
## get response variable: logit(survival)
dat <- matrix(SalmonSurvCUI[, 2], nrow = 1)
```
As we have seen in other case studies, standardizing our covariate(s) to have zero mean and unit variance can be helpful in model fitting and interpretation. In this case, it’s a good idea because the variance of `CUI.apr` is orders of magnitude greater than that of `logit.s`.
```
## get predictor variable
CUI <- SalmonSurvCUI[, 3]
## z-score the CUI
CUI_z <- matrix((CUI - mean(CUI))/sqrt(var(CUI)), nrow = 1)
## number of regr params (slope + intercept)
m <- dim(CUI_z)[1] + 1
```
Plots of logit\-transformed survival and the \\(z\\)\-scored April upwelling index are shown in Figure [9\.1](sec-dlm-fitting-a-univariate-dlm-with-marss.html#fig:dlm-plotdata).
Figure 9\.1: Time series of logit\-transformed marine survival estimates for Snake River spring/summer Chinook salmon (top) and *z*\-scores of the coastal upwelling index at 45N 125W (bottom). The *x*\-axis indicates the year that the salmon smolts entered the ocean.
Next, we need to set up the appropriate matrices and vectors for MARSS. Let’s begin with those for the process equation because they are straightforward.
```
## for process eqn
B <- diag(m) ## 2x2; Identity
U <- matrix(0, nrow = m, ncol = 1) ## 2x1; both elements = 0
Q <- matrix(list(0), m, m) ## 2x2; all 0 for now
diag(Q) <- c("q.alpha", "q.beta") ## 2x2; diag = (q1,q2)
```
Defining the correct form for the observation model is a little more tricky, however, because of how we model the effect(s) of predictor variables. In a DLM, we need to use \\(\\mathbf{Z}\_t\\) (instead of \\(\\mathbf{d}\_t\\)) as the matrix of predictor variables that affect \\(\\mathbf{y}\_t\\), and we use \\(\\mathbf{x}\_t\\) (instead of \\(\\mathbf{D}\_t\\)) as the regression parameters. Therefore, we need to set \\(\\mathbf{Z}\_t\\) equal to an \\(n\\times m\\times T\\) array, where \\(n\\) is the number of response variables (\= 1; \\(y\_t\\) is univariate), \\(m\\) is the number of regression parameters (\= intercept \+ slope \= 2\), and \\(T\\) is the length of the time series (\= 42\).
```
## for observation eqn
Z <- array(NA, c(1, m, TT)) ## NxMxT; empty for now
Z[1, 1, ] <- rep(1, TT) ## Nx1; 1's for intercept
Z[1, 2, ] <- CUI_z ## Nx1; predictor variable
A <- matrix(0) ## 1x1; scalar = 0
R <- matrix("r") ## 1x1; scalar = r
```
Lastly, we need to define our lists of initial starting values and model matrices/vectors.
```
## only need starting values for regr parameters
inits_list <- list(x0 = matrix(c(0, 0), nrow = m))
## list of model matrices & vectors
mod_list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R)
```
And now we can fit our DLM with MARSS.
```
## fit univariate DLM
dlm_1 <- MARSS(dat, inits = inits_list, model = mod_list)
```
```
Success! abstol and log-log tests passed at 115 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 115 iterations.
Log-likelihood: -40.03813
AIC: 90.07627 AICc: 91.74293
Estimate
R.r 0.15708
Q.q.alpha 0.11264
Q.q.beta 0.00564
x0.X1 -3.34023
x0.X2 -0.05388
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
Notice that the MARSS output does not list any estimates of the regression parameters themselves. Why not? Remember that in a DLM the matrix of states \\((\\mathbf{x})\\) contains the estimates of the regression parameters \\((\\boldsymbol{\\theta})\\). Therefore, we need to look in `dlm_1$states` for the MLEs of the regression parameters, and in `dlm_1$states.se` for their standard errors.
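For example, a quick sketch of pulling those out and forming the approximate intervals plotted in Figure 9\.2 (object names are illustrative):

```
## regression parameters are the states: a 2 x TT matrix (intercept, slope)
theta_hat <- dlm_1$states
theta_se <- dlm_1$states.se
## approximate 95% intervals as mean +/- 2 SE
theta_lo <- theta_hat - 2 * theta_se
theta_hi <- theta_hat + 2 * theta_se
```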
Time series of the estimated intercept and slope are shown in Figure [9\.2](sec-dlm-fitting-a-univariate-dlm-with-marss.html#fig:dlm-plotdlm-1). It appears as though the intercept is much more dynamic than the slope, as indicated by a much larger estimate of process variance for the former (`Q.q.alpha`). In fact, although the effect of April upwelling appears to be increasing over time, it doesn’t really become important as a predictor variable until about 1990 when the approximate 95% confidence interval for the slope no longer overlaps zero.
Figure 9\.2: Time series of estimated mean states (thick lines) for the intercept (top) and slope (bottom) parameters from the DLM specified by Equations [(9\.19\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW1)–[(9\.21\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW3). Thin lines denote the mean \\(\\pm\\) 2 standard deviations.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dlm-forecasting-with-a-univariate-dlm.html |
9\.8 Forecasting
----------------
Forecasting from a DLM involves two steps:
1. Get an estimate of the regression parameters at time \\(t\\) from data up to time \\(t\-1\\). These are also called the one\-step ahead forecast (or prediction) of the regression parameters.
2. Make a prediction of \\(y\\) at time \\(t\\) based on the predictor variables at time \\(t\\) and the estimate of the regression parameters at time \\(t\\) (step 1\). This is also called the one\-step ahead forecast (or prediction) of the observation.
### 9\.8\.1 Estimate of the regression parameters
For step 1, we want to compute the distribution of the regression parameters at time \\(t\\) conditioned on the data up to time \\(t\-1\\), also known as the one\-step ahead forecasts of the regression parameters. Let’s denote \\(\\boldsymbol{\\theta}\_{t\-1}\\) conditioned on \\(y\_{1:t\-1}\\) as \\(\\boldsymbol{\\theta}\_{t\-1\|t\-1}\\) and denote \\(\\boldsymbol{\\theta}\_{t}\\) conditioned on \\(y\_{1:t\-1}\\) as \\(\\boldsymbol{\\theta}\_{t\|t\-1}\\). We will start by defining the distribution of \\(\\boldsymbol{\\theta}\_{t\|t}\\) as follows
\\\[\\begin{equation}
\\tag{9\.23}
\\boldsymbol{\\theta}\_{t\|t} \\sim \\text{MVN}(\\boldsymbol{\\pi}\_t,\\boldsymbol{\\Lambda}\_t) \\end{equation}\\]
where \\(\\boldsymbol{\\pi}\_t \= \\text{E}(\\boldsymbol{\\theta}\_{t\|t})\\) and \\(\\mathbf{\\Lambda}\_t \= \\text{Var}(\\boldsymbol{\\theta}\_{t\|t})\\).
Now we can compute the distribution of \\(\\boldsymbol{\\theta}\_{t}\\) conditioned on \\(y\_{1:t\-1}\\) using the process equation for \\(\\boldsymbol{\\theta}\\):
\\\[\\begin{equation}
\\boldsymbol{\\theta}\_{t} \= \\mathbf{G}\_t \\boldsymbol{\\theta}\_{t\-1} \+ \\mathbf{w}\_t, \\, \\mathbf{w}\_t \\sim \\text{MVN}(\\mathbf{0}, \\mathbf{Q})
\\end{equation}\\]
The expected value of \\(\\boldsymbol{\\theta}\_{t\|t\-1}\\) is thus
\\\[\\begin{equation}
\\tag{9\.24}
\\text{E}(\\boldsymbol{\\theta}\_{t\|t\-1}) \= \\mathbf{G}\_t \\text{E}(\\boldsymbol{\\theta}\_{t\-1\|t\-1}) \= \\mathbf{G}\_t \\boldsymbol{\\pi}\_{t\-1}
\\end{equation}\\]
The variance of \\(\\boldsymbol{\\theta}\_{t\|t\-1}\\) is
\\\[\\begin{equation}
\\tag{9\.25}
\\text{Var}(\\boldsymbol{\\theta}\_{t\|t\-1}) \= \\mathbf{G}\_t \\text{Var}(\\boldsymbol{\\theta}\_{t\-1\|t\-1}) \\mathbf{G}\_t^{\\top} \+ \\mathbf{Q} \= \\mathbf{G}\_t \\mathbf{\\Lambda}\_{t\-1} \\mathbf{G}\_t^{\\top} \+ \\mathbf{Q}
\\end{equation}\\]
Thus the distribution of \\(\\boldsymbol{\\theta}\_{t}\\) conditioned on \\(y\_{1:t\-1}\\) is
\\\[\\begin{equation}
\\tag{9\.26}
\\boldsymbol{\\theta}\_{t\|t\-1} \\sim \\text{MVN}(\\mathbf{G}\_t \\boldsymbol{\\pi}\_{t\-1}, \\mathbf{G}\_t \\mathbf{\\Lambda}\_{t\-1} \\mathbf{G}\_t^{\\top} \+ \\mathbf{Q})
\\end{equation}\\]
### 9\.8\.2 Prediction of the response variable \\(y\_t\\)
For step 2, we make the prediction of \\(y\_{t}\\) given the predictor variables at time \\(t\\) and the estimate of the regression parameters at time \\(t\\). This is called the one\-step ahead prediction for the observation at time \\(t\\). We will denote the prediction of \\(y\\) as \\(\\hat{y}\\) and we want to compute its distribution (mean and variance). We do this using the equation for \\(y\_t\\) but substituting the expected value of \\(\\boldsymbol{\\theta}\_{t\|t\-1}\\) for \\(\\boldsymbol{\\theta}\_t\\).
\\\[\\begin{equation}
\\tag{9\.27}
\\hat{y}\_{t\|t\-1} \= \\mathbf{F}^{\\top}\_{t} \\text{E}(\\boldsymbol{\\theta}\_{t\|t\-1}) \+ e\_{t}, \\, e\_{t} \\sim \\text{N}(0, r)
\\end{equation}\\]
Our prediction of \\(y\\) at \\(t\\) has a normal distribution with mean (expected value) and variance. The expected value of \\(\\hat{y}\_{t\|t\-1}\\) is
\\\[\\begin{equation}
\\tag{9\.28}
\\text{E}(\\hat{y}\_{t\|t\-1}) \= \\mathbf{F}^{\\top}\_{t} \\text{E}(\\boldsymbol{\\theta}\_{t\|t\-1}) \= \\mathbf{F}^{\\top}\_{t} (\\mathbf{G}\_t \\boldsymbol{\\pi}\_{t\-1})
\\end{equation}\\]
and the variance of \\(\\hat{y}\_{t\|t\-1}\\) is
\\\[\\begin{align}
\\tag{9\.29}
\\text{Var}(\\hat{y}\_{t\|t\-1}) \&\= \\mathbf{F}^{\\top}\_{t} \\text{Var}(\\boldsymbol{\\theta}\_{t\|t\-1}) \\mathbf{F}\_{t} \+ r \\\\
\&\= \\mathbf{F}^{\\top}\_{t} (\\mathbf{G}\_t \\mathbf{\\Lambda}\_{t\-1} \\mathbf{G}\_t^{\\top} \+ \\mathbf{Q}) \\mathbf{F}\_{t} \+ r
\\end{align}\\]
### 9\.8\.3 Computing the prediction
The expectations and variance of \\(\\boldsymbol{\\theta}\_t\\) conditioned on \\(y\_{1:t}\\) and \\(y\_{1:t\-1}\\) are standard output from the Kalman filter. Thus to produce the predictions, all we need to do is run our DLM state\-space model through a Kalman filter to get \\(\\text{E}(\\boldsymbol{\\theta}\_{t\|t\-1})\\) and \\(\\text{Var}(\\boldsymbol{\\theta}\_{t\|t\-1})\\) and then use Equation [(9\.28\)](sec-dlm-forecasting-with-a-univariate-dlm.html#eq:dlm-predict-y-mean) to compute the mean prediction and Equation [(9\.29\)](sec-dlm-forecasting-with-a-univariate-dlm.html#eq:dlm-predict-y-var) to compute its variance.
The Kalman filter will need \\(\\mathbf{F}\_t\\), \\(\\mathbf{G}\_t\\) and estimates of \\(\\mathbf{Q}\\) and \\(r\\). The latter are calculated by fitting the DLM to the data \\(y\_{1:t}\\), using for example the `MARSS()` function.
Let’s see an example with the salmon survival DLM. We will use the Kalman filter function in the **MARSS** package and the DLM fit from `MARSS()`.
### 9\.8\.4 Forecasting salmon survival
[Scheuerell and Williams](references.html#ref-ScheuerellWilliams2005) ([2005](references.html#ref-ScheuerellWilliams2005)) were interested in how well upwelling could be used to actually *forecast* expected survival of salmon, so let’s look at how well our model does in that context. To do so, we need the predictive distribution for the survival at time \\(t\\) given the upwelling at time \\(t\\) and the predicted regression parameters at \\(t\\).
In the salmon survival DLM, the \\(\\mathbf{G}\_t\\) matrix is the identity matrix, thus the mean and variance of the one\-step ahead predictive distribution for the observation at time \\(t\\) reduces to (from Equations [(9\.28\)](sec-dlm-forecasting-with-a-univariate-dlm.html#eq:dlm-predict-y-mean) and [(9\.29\)](sec-dlm-forecasting-with-a-univariate-dlm.html#eq:dlm-predict-y-var))
\\\[\\begin{equation}
\\begin{gathered}
\\tag{9\.30}
\\text{E}(\\hat{y}\_{t\|t\-1}) \= \\mathbf{F}^{\\top}\_{t} \\text{E}(\\boldsymbol{\\theta}\_{t\|t\-1}) \\\\
\\text{Var}(\\hat{y}\_{t\|t\-1}) \= \\mathbf{F}^{\\top}\_{t} \\text{Var}(\\boldsymbol{\\theta}\_{t\|t\-1}) \\mathbf{F}\_{t} \+ \\hat{r}
\\end{gathered}
\\end{equation}\\]
where
\\\[
\\mathbf{F}\_{t}\=\\begin{bmatrix}1 \\\\ f\_{t}\\end{bmatrix}
\\]
and \\(f\_{t}\\) is the upwelling index at \\(t\+1\\). \\(\\hat{r}\\) is the estimated observation variance from our model fit.
### 9\.8\.5 Forecasting using MARSS
Working from Equation [(9\.30\)](sec-dlm-forecasting-with-a-univariate-dlm.html#eq:dlm-dlmFore3), we can compute the expected value of the forecast at time \\(t\\) and its variance using the Kalman filter. For the expectation, we need \\(\\mathbf{F}\_{t}^\\top\\text{E}(\\boldsymbol{\\theta}\_{t\|t\-1})\\).
\\(\\mathbf{F}\_t^\\top\\) is called \\(\\mathbf{Z}\_t\\) in MARSS notation. The one\-step ahead forecasts of the regression parameters at time \\(t\\), the \\(\\text{E}(\\boldsymbol{\\theta}\_{t\|t\-1})\\), are calculated as part of the Kalman filter algorithm—they are termed \\(\\tilde{x}\_t^{t\-1}\\) in MARSS notation and stored as `xtt1` in the list produced by the `MARSSkfss()` Kalman filter function.
Using the `Z` defined in [9\.6](sec-dlm-salmon-example.html#sec-dlm-salmon-example), we compute the mean forecast as follows:
```
## get list of Kalman filter output
kf_out <- MARSSkfss(dlm_1)
## forecasts of regr parameters; 2xT matrix
eta <- kf_out$xtt1
## ts of E(forecasts)
fore_mean <- vector()
for (t in 1:TT) {
fore_mean[t] <- Z[, , t] %*% eta[, t, drop = FALSE]
}
```
For the variance of the forecasts, we need
\\(\\mathbf{F}^{\\top}\_{t} \\text{Var}(\\boldsymbol{\\theta}\_{t\|t\-1}) \\mathbf{F}\_{t} \+ \\hat{r}\\). As with the mean, \\(\\mathbf{F}^\\top\_t \\equiv \\mathbf{Z}\_t\\). The variances of the one\-step ahead forecasts of the regression parameters at time \\(t\\), \\(\\text{Var}(\\boldsymbol{\\theta}\_{t\|t\-1})\\), are also calculated as part of the Kalman filter algorithm—they are stored as `Vtt1` in the list produced by the `MARSSkfss()` function. Lastly, the observation variance \\(\\hat{r}\\) was estimated when we fit the DLM to the data using `MARSS()` and can be extracted from the `dlm_1` fit.
Putting this together, we can compute the forecast variance:
```
## variance of regr parameters; 1x2xT array
Phi <- kf_out$Vtt1
## obs variance; 1x1 matrix
R_est <- coef(dlm_1, type = "matrix")$R
## ts of Var(forecasts)
fore_var <- vector()
for (t in 1:TT) {
tZ <- matrix(Z[, , t], m, 1) ## transpose of Z
fore_var[t] <- Z[, , t] %*% Phi[, , t] %*% tZ + R_est
}
```
Plots of the model mean forecasts with their estimated uncertainty are shown in Figure [9\.3](sec-dlm-forecasting-with-a-univariate-dlm.html#fig:dlm-plotdlmForeLogit). Nearly all of the observed values fell within the approximate prediction interval. Notice that we have a forecasted value for the first year of the time series (1964\), which may seem at odds with our notion of forecasting at time \\(t\\) based on data available only through time \\(t\-1\\). In this case, however, MARSS is actually estimating the states at \\(t\=0\\) (\\(\\boldsymbol{\\theta}\_0\\)), which allows us to compute a forecast for the first time point.
Figure 9\.3: Time series of logit\-transformed survival data (blue dots) and model mean forecasts (thick line). Thin lines denote the approximate 95% prediction intervals.
Although our model forecasts look reasonable in logit\-space, it is worthwhile to examine how well they look when the survival data and forecasts are back\-transformed onto the interval \[0,1] (Figure [9\.4](sec-dlm-forecasting-with-a-univariate-dlm.html#fig:dlm-plotdlmForeRaw)). In that case, the accuracy does not seem to be affected, but the precision appears much worse, especially during the early and late portions of the time series when survival is changing rapidly.
Figure 9\.4: Time series of survival data (blue dots) and model mean forecasts (thick line). Thin lines denote the approximate 95% prediction intervals.
Notice that we passed the DLM fit to all the data to `MARSSkfss()`. This meant that the Kalman filter used estimates of \\(\\mathbf{Q}\\) and \\(r\\) using all the data in the `xtt1` and `Vtt1` calculations. Thus our predictions at time \\(t\\) are not entirely based on only data up to time \\(t\-1\\) since the \\(\\mathbf{Q}\\) and \\(r\\) estimates were from all the data from 1964 to 2005\.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dlm-dlm-forecast-diagnostics.html |
9\.9 Forecast diagnostics
-------------------------
In the literature on state\-space models, the set of \\(e\_t\\) are commonly referred to as “innovations.” `MARSS()` calculates the innovations as part of the Kalman filter algorithm—they are stored as `Innov` in the list produced by the `MARSSkfss()` function.
```
## forecast errors
innov <- kf_out$Innov
```
Let’s see if our innovations meet the model assumptions. Beginning with (1\), we can use a Q\-Q plot to see whether the innovations are normally distributed with a mean of zero. We’ll use the `qqnorm()` function to plot the quantiles of the innovations on the \\(y\\)\-axis versus the theoretical quantiles from a Normal distribution on the \\(x\\)\-axis. If the 2 distributions are similar, the points should fall on the line defined by \\(y \= x\\).
```
## Q-Q plot of innovations
qqnorm(t(innov), main = "", pch = 16, col = "blue")
## add y=x line for easier interpretation
qqline(t(innov))
```
Figure 9\.5: Q\-Q plot of the forecast errors (innovations) for the DLM specified in Equations [(9\.19\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW1)–[(9\.21\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW3).
The Q\-Q plot (Figure [9\.5](sec-dlm-dlm-forecast-diagnostics.html#fig:dlm-plotdlmQQ)) indicates that the innovations appear to be more\-or\-less normally distributed (i.e., most points fall on the line). Furthermore, it looks like the mean of the innovations is about 0, but we should use a more reliable test than simple visual inspection. We can formally test whether the mean of the innovations is significantly different from 0 by using a one\-sample \\(t\\)\-test based on a null hypothesis of \\(\\,\\text{E}(e\_t)\=0\\). To do so, we will use the function `t.test()` and base our inference on a significance value of \\(\\alpha \= 0\.05\\).
```
## p-value for t-test of H0: E(innov) = 0
t.test(t(innov), mu = 0)$p.value
```
```
[1] 0.4840901
```
The \\(p\\)\-value is \\(\>\>\\) 0\.05, so we cannot reject the null hypothesis that \\(\\,\\text{E}(e\_t)\=0\\).
Moving on to assumption (2\), we can use the sample autocorrelation function (ACF) to examine whether the innovations covary with a time\-lagged version of themselves. Using the `acf()` function, we can compute and plot the correlations of \\(e\_t\\) and \\(e\_{t\-k}\\) for various values of \\(k\\). Assumption (2\) will be met if none of the correlation coefficients exceed the 95% confidence intervals defined by \\(\\pm \\, z\_{0\.975} / \\sqrt{n}\\).
```
## plot ACF of innovations
acf(t(innov), lag.max = 10)
```
Figure 9\.6: Autocorrelation plot of the forecast errors (innovations) for the DLM specified in Equations [(9\.19\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW1)–[(9\.21\)](sec-dlm-salmon-example.html#eq:dlm-dlmSW3). Horizontal blue lines define the upper and lower 95% confidence intervals.
The ACF plot (Figure [9\.6](sec-dlm-dlm-forecast-diagnostics.html#fig:dlm-plotdlmACF)) shows no significant autocorrelation in the innovations at lags 1–10, so it looks like both of our model assumptions have indeed been met.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dlm-homework.html |
9\.10 Homework discussion and data
----------------------------------
For the homework this week we will use a DLM to examine some of the time\-varying properties of the spawner\-recruit relationship for Pacific salmon. Much work has been done on this topic, particularly by Randall Peterman and his students and post\-docs at Simon Fraser University. To do so, researchers commonly use a Ricker model because of its relatively simple form, such that the number of recruits (offspring) born in year \\(t\\) (\\(R\_t\\)) from the number of spawners (parents) (\\(S\_t\\)) is
\\\[\\begin{equation}
\\tag{9\.31}
R\_t \= a S\_t e^{\-b S\_t \+ v\_t}.
\\end{equation}\\]
The parameter \\(a\\) determines the maximum reproductive rate in the absence of any density\-dependent effects (the slope of the curve at the origin), \\(b\\) is the strength of density dependence, and \\(v\_t \\sim N(0,\\sigma)\\). In practice, the model is typically log\-transformed so as to make it linear with respect to the predictor variable \\(S\_t\\), such that
\\\[\\begin{align}
\\tag{9\.32}
\\text{log}(R\_t) \&\= \\text{log}(a) \+ \\text{log}(S\_t) \-b S\_t \+ v\_t \\\\
\\text{log}(R\_t) \- \\text{log}(S\_t) \&\= \\text{log}(a) \-b S\_t \+ v\_t \\\\
\\text{log}(R\_t/S\_t) \&\= \\text{log}(a) \- b S\_t \+ v\_t.
\\end{align}\\]
Substituting \\(y\_t \= \\text{log}(R\_t/S\_t)\\), \\(x\_t \= S\_t\\), and \\(\\alpha \= \\text{log}(a)\\) yields a simple linear regression model with intercept \\(\\alpha\\) and slope \\(\-b\\).
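As a quick sketch, the static version is then an ordinary regression; this assumes the `SR_data` data frame loaded later in this section, and `fit_ricker` is an illustrative name.

```
## sketch: the linearized Ricker model as an ordinary regression;
## assumes SR_data from the spawner-recruit data below
y <- log(SR_data$recruits / SR_data$spawners)
fit_ricker <- lm(y ~ SR_data$spawners)
coef(fit_ricker)  # intercept = alpha = log(a); slope estimate = -b
```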
Unfortunately, however, residuals from this simple model typically show high autocorrelation due to common environmental conditions that affect overlapping generations. Therefore, to correct for this and allow for an index of stock productivity that controls for any density\-dependent effects, the model may be re\-written as
\\\[\\begin{align}
\\tag{9\.33}
\\text{log}(R\_t/S\_t) \&\= \\alpha\_t \- b S\_t \+ v\_t, \\\\
\\alpha\_t \&\= \\alpha\_{t\-1} \+ w\_t,
\\end{align}\\]
and \\(w\_t \\sim N(0,q)\\). By treating the brood\-year specific productivity as a random walk, we allow it to vary, but in an autocorrelated manner so that consecutive years are not independent from one another.
More recently, interest has grown in using covariates (\\(e.g.\\), sea\-surface temperature) to explain the interannual variability in productivity. In that case, we can write the model as
\\\[\\begin{equation}
\\tag{9\.34}
\\text{log}(R\_t/S\_t) \= \\alpha \+ \\delta\_t X\_t \- b S\_t \+ v\_t.
\\end{equation}\\]
In this case we are estimating some base\-level productivity (\\(\\alpha\\)) plus the time\-varying effect of some covariate \\(X\_t\\) (\\(\\delta\_t\\)).
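As a minimal sketch (not the assigned solution), Equation (9\.33\) maps onto a MARSS DLM in the same way as Section 9\.7: a random\-walk intercept state plus a constant slope state with zero process variance. This assumes `y` is \\(\\text{log}(R\_t/S\_t)\\) as a 1 x TT matrix and `S` is a spawner vector with no missing values; the object names are illustrative.

```
## sketch of Equation (9.33) as a DLM: random-walk alpha_t, static slope on S_t
TT <- length(S)
Z <- array(NA, c(1, 2, TT))
Z[1, 1, ] <- 1   # loads the time-varying intercept alpha_t
Z[1, 2, ] <- S   # loads the slope state (which estimates -b)
mod_list_sr <- list(B = diag(2), U = matrix(0, 2, 1),
    Q = matrix(list("q.alpha", 0, 0, 0), 2, 2),  # zero variance = constant slope
    Z = Z, A = matrix(0), R = matrix("r"))
dlm_sr <- MARSS(y, model = mod_list_sr)
```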
### 9\.10\.1 Spawner\-recruit data
The data come from a large public database begun by Ransom Myers many years ago. If you are interested, you can find lots of time series of spawning\-stock, recruitment, and harvest for a variety of fishes around the globe. Here is the website:
<https://www.ramlegacy.org/>
For this exercise, we will use spawner\-recruit data for sockeye salmon (*Oncorhynchus nerka*) from the Kvichak River in SW Alaska that span the years 1952\-1989\. In addition, we’ll examine the potential effects of the Pacific Decadal Oscillation (PDO) during the salmon’s first year in the ocean, which is widely believed to be a “bottleneck” to survival.
These data are in the **atsalibrary** package on GitHub. If needed, install using the **devtools** package.
```
library(devtools)
## Windows users will likely need to set this
## Sys.setenv('R_REMOTES_NO_ERRORS_FROM_WARNINGS' = 'true')
devtools::install_github("nwfsc-timeseries/atsalibrary")
```
Load the data.
```
data(KvichakSockeye, package = "atsalibrary")
SR_data <- KvichakSockeye
```
The data are a dataframe with columns for brood year (`brood_year`), number of spawners (`spawners`), number of recruits (`recruits`) and PDO at year \\(t\-2\\) in summer (`pdo_summer_t2`) and in winter (`pdo_winter_t2`).
```
## head of data file
head(SR_data)
```
```
# A tibble: 6 x 5
# Groups: brood_year [6]
brood_year spawners recruits pdo_summer_t2 pdo_winter_t2
<dbl> <dbl> <dbl> <dbl> <dbl>
1 1952 NA 20200 -2.79 -1.68
2 1953 NA 593 -1.2 -1.05
3 1954 NA 799 -1.85 -1.25
4 1955 NA 1500 -0.6 -0.68
5 1956 9440 39000 -0.5 -0.31
6 1957 2840 4090 -2.36 -1.78
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dfa.html |
Chapter 10 Dynamic Factor Analysis
==================================
Here we will use the **MARSS** package to do Dynamic Factor Analysis (DFA), which allows us to look for a set of common underlying processes among a relatively large set of time series ([Zuur et al. 2003](references.html#ref-Zuuretal2003a)). There have been a number of recent applications of DFA to ecological questions surrounding Pacific salmon ([Stachura, Mantua, and Scheuerell 2014](references.html#ref-Stachuraetal2014); [Jorgensen et al. 2016](references.html#ref-Jorgensenetal2016); [Ohlberger, Scheuerell, and Schindler 2016](references.html#ref-Ohlbergeretal2016)) and stream temperatures ([Lisi et al. 2015](references.html#ref-Lisietal2015)). For a more in\-depth treatment of potential applications of MARSS models for DFA, see Chapter 9 in the [MARSS User’s Guide](https://cran.r-project.org/web/packages/MARSS/vignettes/UserGuide.pdf).
A script with all the R code in the chapter can be downloaded [here](./Rcode/intro-to-dfa.R). The Rmd for this chapter can be downloaded [here](./Rmds/intro-to-dfa.Rmd).
### Data and packages
All the data used in the chapter are in the **MARSS** package. Install the package, if needed, and load to run the code in the chapter.
```
library(MARSS)
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dfa-lake-wa-data.html |
10\.5 Lake Washington phytoplankton data
----------------------------------------
For this exercise, we will use the Lake Washington phytoplankton data contained in the **MARSS** package. Let’s begin by reading in the monthly values for all of the data, including metabolism, chemistry, and climate.
```
## load the data (there are 3 datasets contained here)
data(lakeWAplankton, package = "MARSS")
## we want lakeWAplanktonTrans, which has been transformed
## so the 0s are replaced with NAs and the data z-scored
all_dat <- lakeWAplanktonTrans
## use only the 10 years from 1980-1989
yr_frst <- 1980
yr_last <- 1989
plank_dat <- all_dat[all_dat[, "Year"] >= yr_frst & all_dat[,
"Year"] <= yr_last, ]
## create vector of phytoplankton group names
phytoplankton <- c("Cryptomonas", "Diatoms", "Greens", "Unicells",
"Other.algae")
## get only the phytoplankton
dat_1980 <- plank_dat[, phytoplankton]
```
Next, we transpose the data matrix and calculate the number of time series and their length.
```
## transpose data so time goes across columns
dat_1980 <- t(dat_1980)
## get number of time series
N_ts <- dim(dat_1980)[1]
## get length of time series
TT <- dim(dat_1980)[2]
```
It will be easier to estimate the real parameters of interest if we de\-mean the data, so let’s do that.
```
y_bar <- apply(dat_1980, 1, mean, na.rm = TRUE)
dat <- dat_1980 - y_bar
rownames(dat) <- rownames(dat_1980)
```
### 10\.5\.1 Plots of the data
Here are time series plots of all five phytoplankton functional groups.
```
spp <- rownames(dat_1980)
clr <- c("brown", "blue", "darkgreen", "darkred", "purple")
cnt <- 1
par(mfrow = c(N_ts, 1), mai = c(0.5, 0.7, 0.1, 0.1), omi = c(0,
0, 0, 0))
for (i in spp) {
plot(dat[i, ], xlab = "", ylab = "Abundance index", bty = "L",
xaxt = "n", pch = 16, col = clr[cnt], type = "b")
axis(1, 12 * (0:dim(dat_1980)[2]) + 1, yr_frst + 0:dim(dat_1980)[2])
title(i)
cnt <- cnt + 1
}
```
Figure 10\.1: Demeaned time series of Lake Washington phytoplankton.
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-dfa-fitting-dfa-models-with-marss.html |
10\.6 Fitting DFA models with the MARSS package
-----------------------------------------------
The **MARSS** package is designed to work with the fully specified matrix form of the multivariate state\-space model we wrote out in Sec 3\. Thus, we will need to create a model list with forms for each of the vectors and matrices. Note that even though some of the model elements are scalars and vectors, we will need to specify everything as a matrix (or array for time series of matrices).
Notice that the code below uses some of the **MARSS** shortcuts for specifying forms of vectors and matrices. We will also use the `matrix(list(),nrow,ncol)` trick we learned previously.
### 10\.6\.1 The observation model
Here we will fit the DFA model above where we have `N_ts` \= 5 observed time series and we want 3 hidden states. Now we need to set up the observation model for `MARSS`. Here are the vectors and matrices for our first model. Recall that we will need to set the elements in the upper right corner of \\(\\mathbf{Z}\\) to 0\. We will assume that the observation errors have different variances and that they are independent of one another.
```
## 'ZZ' is loadings matrix
Z_vals <- list("z11", 0, 0, "z21", "z22", 0, "z31", "z32", "z33",
"z41", "z42", "z43", "z51", "z52", "z53")
ZZ <- matrix(Z_vals, nrow = N_ts, ncol = 3, byrow = TRUE)
ZZ
```
```
[,1] [,2] [,3]
[1,] "z11" 0 0
[2,] "z21" "z22" 0
[3,] "z31" "z32" "z33"
[4,] "z41" "z42" "z43"
[5,] "z51" "z52" "z53"
```
```
## 'aa' is the offset/scaling
aa <- "zero"
## 'DD' and 'd' are for covariates
DD <- "zero" # matrix(0,mm,1)
dd <- "zero" # matrix(0,1,wk_last)
## 'RR' is var-cov matrix for obs errors
RR <- "diagonal and unequal"
```
### 10\.6\.2 The process model
We need to specify the explicit form for all of the vectors and matrices in the full form of the MARSS model we defined in Sec 3\.1\. Note that we do not have to specify anything for the states \\((\\mathbf{x})\\) – those are elements that `MARSS` will identify and estimate itself based on our definitions of the other vectors and matrices.
```
## number of processes
mm <- 3
## 'BB' is identity: 1's along the diagonal & 0's elsewhere
BB <- "identity" # diag(mm)
## 'uu' is a column vector of 0's
uu <- "zero" # matrix(0, mm, 1)
## 'CC' and 'cc' are for covariates
CC <- "zero" # matrix(0, mm, 1)
cc <- "zero" # matrix(0, 1, wk_last)
## 'QQ' is identity
QQ <- "identity" # diag(mm)
```
### 10\.6\.3 Fit the model in MARSS
Now it’s time to fit our first DFA model. To do so, we need to create three lists that we will need to pass to the `MARSS()` function:
1. A list of specifications for the model’s vectors and matrices;
2. A list of any initial values – `MARSS` will pick its own otherwise;
3. A list of control parameters for the `MARSS()` function.
```
## list with specifications for model vectors/matrices
mod_list <- list(Z = ZZ, A = aa, D = DD, d = dd, R = RR, B = BB,
U = uu, C = CC, c = cc, Q = QQ)
## list with model inits
init_list <- list(x0 = matrix(rep(0, mm), mm, 1))
## list with model control parameters
con_list <- list(maxit = 3000, allow.degen = TRUE)
```
Now we can fit the model.
```
## fit MARSS
dfa_1 <- MARSS(y = dat, model = mod_list, inits = init_list,
control = con_list)
```
```
Success! abstol and log-log tests passed at 246 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 246 iterations.
Log-likelihood: -692.9795
AIC: 1425.959 AICc: 1427.42
Estimate
Z.z11 0.2738
Z.z21 0.4487
Z.z31 0.3170
Z.z41 0.4107
Z.z51 0.2553
Z.z22 0.3608
Z.z32 -0.3690
Z.z42 -0.0990
Z.z52 -0.3793
Z.z33 0.0185
Z.z43 -0.1404
Z.z53 0.1317
R.(Cryptomonas,Cryptomonas) 0.1638
R.(Diatoms,Diatoms) 0.2913
R.(Greens,Greens) 0.8621
R.(Unicells,Unicells) 0.3080
R.(Other.algae,Other.algae) 0.5000
x0.X1 0.2218
x0.X2 1.8155
x0.X3 -4.8097
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
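As the printout notes, standard errors are not computed by default. If you want confidence intervals on the estimated loadings and variances, you can pass the fitted object to `MARSSparamCIs()`, as the output suggests:

```
## compute approximate CIs (and bias estimates) for the parameters
dfa_1 <- MARSSparamCIs(dfa_1)
```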
10\.8 Rotating trends and loadings
----------------------------------
Before proceeding further, we need to address the constraints we placed on the DFA model in Sec 2\.2\. In particular, the DFA model has an infinite number of equivalent solutions, and we arbitrarily constrained \\(\\mathbf{Z}\\) to choose only one of them. Fortunately, the different solutions can be related to each other by a rotation matrix \\(\\mathbf{H}\\). Let \\(\\mathbf{H}\\) be any \\(m \\times m\\) non\-singular matrix. The following are then equivalent DFA models:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{y}\_t \= \\mathbf{Z}\\mathbf{x}\_t\+\\mathbf{a}\+\\mathbf{v}\_t \\\\
\\mathbf{x}\_t \= \\mathbf{x}\_{t\-1}\+\\mathbf{w}\_t
\\end{gathered}
\\tag{10\.10}
\\end{equation}\\]
and
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{y}\_t \= \\mathbf{Z}\\mathbf{H}^{\-1}\\mathbf{x}\_t\+\\mathbf{a}\+\\mathbf{v}\_t \\\\
\\mathbf{H}\\mathbf{x}\_t \= \\mathbf{H}\\mathbf{x}\_{t\-1}\+\\mathbf{H}\\mathbf{w}\_t
\\end{gathered}.
\\tag{10\.11}
\\end{equation}\\]
There are many ways of doing factor rotations, but a common method is the “varimax” rotation, which seeks a rotation matrix \\(\\mathbf{H}\\) that creates the largest difference between the loadings in \\(\\mathbf{Z}\\). For example, imagine that row 3 in our estimated \\(\\mathbf{Z}\\) matrix was (0\.2, 0\.2, 0\.2\). That would mean that green algae were a mixture of equal parts of processes 1, 2, and 3\. If instead row 3 was (0\.8, 0\.1, 0\.05\), this would make our interpretation of the model fits easier because we could say that green algae followed the first process most closely. The varimax rotation would find the \\(\\mathbf{H}\\) matrix that makes the rows in \\(\\mathbf{Z}\\) more like (0\.8, 0\.1, 0\.05\) and less like (0\.2, 0\.2, 0\.2\).
The varimax rotation is easy to compute because R has a built\-in function for this: `varimax()`. Interestingly, the function returns the inverse of \\(\\mathbf{H}\\), which we need anyway.
```
## get the estimated ZZ
Z_est <- coef(dfa_1, type = "matrix")$Z
## get the inverse of the rotation matrix
H_inv <- varimax(Z_est)$rotmat
```
We can now rotate both \\(\\mathbf{Z}\\) and \\(\\mathbf{x}\\).
```
## rotate factor loadings
Z_rot = Z_est %*% H_inv
## rotate processes
proc_rot = solve(H_inv) %*% dfa_1$states
```
10\.9 Estimated states and loadings
-----------------------------------
Here are plots of the three hidden processes (left column) and the loadings for each of phytoplankton groups (right column).
```
ylbl <- phytoplankton
w_ts <- seq(dim(dat)[2])
layout(matrix(c(1, 2, 3, 4, 5, 6), mm, 2), widths = c(2, 1))
## par(mfcol=c(mm,2), mai = c(0.5,0.5,0.5,0.1), omi =
## c(0,0,0,0))
par(mai = c(0.5, 0.5, 0.5, 0.1), omi = c(0, 0, 0, 0))
## plot the processes
for (i in 1:mm) {
ylm <- c(-1, 1) * max(abs(proc_rot[i, ]))
## set up plot area
plot(w_ts, proc_rot[i, ], type = "n", bty = "L", ylim = ylm,
xlab = "", ylab = "", xaxt = "n")
## draw zero-line
abline(h = 0, col = "gray")
## plot trend line
lines(w_ts, proc_rot[i, ], lwd = 2)
## add panel labels
mtext(paste("State", i), side = 3, line = 0.5)
axis(1, 12 * (0:dim(dat_1980)[2]) + 1, yr_frst + 0:dim(dat_1980)[2])
}
## plot the loadings
minZ <- 0
ylm <- c(-1, 1) * max(abs(Z_rot))
for (i in 1:mm) {
plot(c(1:N_ts)[abs(Z_rot[, i]) > minZ], as.vector(Z_rot[abs(Z_rot[,
i]) > minZ, i]), type = "h", lwd = 2, xlab = "", ylab = "",
xaxt = "n", ylim = ylm, xlim = c(0.5, N_ts + 0.5), col = clr)
for (j in 1:N_ts) {
if (Z_rot[j, i] > minZ) {
text(j, -0.03, ylbl[j], srt = 90, adj = 1, cex = 1.2,
col = clr[j])
}
if (Z_rot[j, i] < -minZ) {
text(j, 0.03, ylbl[j], srt = 90, adj = 0, cex = 1.2,
col = clr[j])
}
abline(h = 0, lwd = 1.5, col = "gray")
}
mtext(paste("Factor loadings on state", i), side = 3, line = 0.5)
}
```
Figure 10\.2: Estimated states from the DFA model.
It looks like there are strong seasonal cycles in the data, but there is some indication of a phase difference between some of the groups. We can use `ccf()` to investigate further.
```
par(mai = c(0.9, 0.9, 0.1, 0.1))
ccf(proc_rot[1, ], proc_rot[2, ], lag.max = 12, main = "")
```
Figure 10\.3: Cross\-correlation plot of the two rotations.
10\.10 Plotting the data and model fits
---------------------------------------
We can plot the fits for our DFA model along with the data. The following function will return the fitted values ± the \\((1\-\\alpha) \\times 100\\)% confidence intervals.
```
get_DFA_fits <- function(MLEobj, dd = NULL, alpha = 0.05) {
## empty list for results
fits <- list()
## extra stuff for var() calcs
Ey <- MARSS:::MARSShatyt(MLEobj)
## model params
ZZ <- coef(MLEobj, type = "matrix")$Z
## number of obs ts
nn <- dim(Ey$ytT)[1]
## number of time steps
TT <- dim(Ey$ytT)[2]
## get the inverse of the rotation matrix
H_inv <- varimax(ZZ)$rotmat
## check for covars
if (!is.null(dd)) {
DD <- coef(MLEobj, type = "matrix")$D
## model expectation
fits$ex <- ZZ %*% H_inv %*% MLEobj$states + DD %*% dd
} else {
## model expectation
fits$ex <- ZZ %*% H_inv %*% MLEobj$states
}
## Var in model fits
VtT <- MARSSkfss(MLEobj)$VtT
VV <- NULL
for (tt in 1:TT) {
RZVZ <- coef(MLEobj, type = "matrix")$R - ZZ %*% VtT[,
, tt] %*% t(ZZ)
SS <- Ey$yxtT[, , tt] - Ey$ytT[, tt, drop = FALSE] %*%
t(MLEobj$states[, tt, drop = FALSE])
VV <- cbind(VV, diag(RZVZ + SS %*% t(ZZ) + ZZ %*% t(SS)))
}
SE <- sqrt(VV)
## upper & lower (1-alpha)% CI
fits$up <- qnorm(1 - alpha/2) * SE + fits$ex
fits$lo <- qnorm(alpha/2) * SE + fits$ex
return(fits)
}
```
Here are time series of the five phytoplankton groups (points) with the mean of the DFA fits (black line) and the 95% confidence intervals (gray lines).
```
## get model fits & CI's
mod_fit <- get_DFA_fits(dfa_1)
## plot the fits
ylbl <- phytoplankton
par(mfrow = c(N_ts, 1), mai = c(0.5, 0.7, 0.1, 0.1), omi = c(0,
0, 0, 0))
for (i in 1:N_ts) {
up <- mod_fit$up[i, ]
mn <- mod_fit$ex[i, ]
lo <- mod_fit$lo[i, ]
plot(w_ts, mn, xlab = "", ylab = ylbl[i], xaxt = "n", type = "n",
cex.lab = 1.2, ylim = c(min(lo), max(up)))
axis(1, 12 * (0:dim(dat_1980)[2]) + 1, yr_frst + 0:dim(dat_1980)[2])
points(w_ts, dat[i, ], pch = 16, col = clr[i])
lines(w_ts, up, col = "darkgray")
lines(w_ts, mn, col = "black", lwd = 2)
lines(w_ts, lo, col = "darkgray")
}
```
Figure 10\.4: Data and fits from the DFA model.
10\.12 Example from Lake Washington
-----------------------------------
The Lake Washington dataset has two environmental covariates that we might expect to have effects on phytoplankton growth, and hence, abundance: temperature (`Temp`) and total phosphorus (`TP`). We need the covariate inputs to have the same number of time steps as the variate data, and thus we limit the covariate data to the years 1980\-1994 also.
```
temp <- t(plank_dat[, "Temp", drop = FALSE])
TP <- t(plank_dat[, "TP", drop = FALSE])
```
We will now fit three different models that each add covariate effects (i.e., `Temp`, `TP`, `Temp` and `TP`) to our existing model above where \\(m\\) \= 3 and \\(\\mathbf{R}\\) is `"diagonal and unequal"`.
```
mod_list = list(m = 3, R = "diagonal and unequal")
dfa_temp <- MARSS(dat, model = mod_list, form = "dfa", z.score = FALSE,
control = con_list, covariates = temp)
dfa_TP <- MARSS(dat, model = mod_list, form = "dfa", z.score = FALSE,
control = con_list, covariates = TP)
dfa_both <- MARSS(dat, model = mod_list, form = "dfa", z.score = FALSE,
control = con_list, covariates = rbind(temp, TP))
```
Next we can compare whether the addition of the covariates improves the model fit.
```
print(cbind(model = c("no covars", "Temp", "TP", "Temp & TP"),
AICc = round(c(dfa_1$AICc, dfa_temp$AICc, dfa_TP$AICc, dfa_both$AICc))),
quote = FALSE)
```
```
model AICc
[1,] no covars 1427
[2,] Temp 1356
[3,] TP 1414
[4,] Temp & TP 1362
```
This suggests that adding temperature or phosphorus to the model, either alone or in combination with one another, does seem to improve overall model fit. If we were truly interested in assessing the “best” model structure that includes covariates, however, we should examine all combinations of 1\-4 trends and different structures for \\(\\mathbf{R}\\).
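As a sketch of what that fuller search might look like (not run here; the grid and the `aic.tab` name are illustrative), one could loop over the number of trends and two \\(\\mathbf{R}\\) structures and collect the AICc values:

```
## sketch: AICc over 1-4 trends and two R structures (slow to run)
aic.tab <- expand.grid(m = 1:4, R = c("diagonal and unequal",
    "unconstrained"), stringsAsFactors = FALSE)
aic.tab$AICc <- NA
for (i in seq(nrow(aic.tab))) {
    fit <- MARSS(dat, model = list(m = aic.tab$m[i], R = aic.tab$R[i]),
        form = "dfa", z.score = FALSE, control = con_list)
    aic.tab$AICc[i] <- fit$AICc
}
aic.tab
```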
Now let’s try to fit a model with a dummy variable for season, and see how that does.
```
cos_t <- cos(2 * pi * seq(TT)/12)
sin_t <- sin(2 * pi * seq(TT)/12)
dd <- rbind(cos_t, sin_t)
dfa_seas <- MARSS(dat, model = mod_list, form = "dfa", z.score = FALSE,
control = con_list, covariates = dd)
```
```
Success! abstol and log-log tests passed at 451 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 451 iterations.
Log-likelihood: -633.1283
AIC: 1320.257 AICc: 1322.919
Estimate
Z.11 0.26207
Z.21 0.24762
Z.31 0.03689
Z.41 0.51329
Z.51 0.18479
Z.22 0.04819
Z.32 -0.08824
Z.42 0.06454
Z.52 0.05905
Z.33 0.02673
Z.43 0.19343
Z.53 -0.10528
R.(Cryptomonas,Cryptomonas) 0.14406
R.(Diatoms,Diatoms) 0.44205
R.(Greens,Greens) 0.73113
R.(Unicells,Unicells) 0.19533
R.(Other.algae,Other.algae) 0.50127
D.(Cryptomonas,cos_t) -0.23244
D.(Diatoms,cos_t) -0.40829
D.(Greens,cos_t) -0.72656
D.(Unicells,cos_t) -0.34666
D.(Other.algae,cos_t) -0.41606
D.(Cryptomonas,sin_t) 0.12515
D.(Diatoms,sin_t) 0.65621
D.(Greens,sin_t) -0.50657
D.(Unicells,sin_t) -0.00867
D.(Other.algae,sin_t) -0.62474
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
```
dfa_seas$AICc
```
```
[1] 1322.919
```
The model with a dummy seasonal factor does much better than the covariate models. The model fits for the seasonal effects model are shown below.
```
## get model fits & CI's
mod_fit <- get_DFA_fits(dfa_seas, dd = dd)
## plot the fits
ylbl <- phytoplankton
par(mfrow = c(N_ts, 1), mai = c(0.5, 0.7, 0.1, 0.1), omi = c(0,
0, 0, 0))
for (i in 1:N_ts) {
up <- mod_fit$up[i, ]
mn <- mod_fit$ex[i, ]
lo <- mod_fit$lo[i, ]
plot(w_ts, mn, xlab = "", ylab = ylbl[i], xaxt = "n", type = "n",
cex.lab = 1.2, ylim = c(min(lo), max(up)))
axis(1, 12 * (0:dim(dat_1980)[2]) + 1, yr_frst + 0:dim(dat_1980)[2])
points(w_ts, dat[i, ], pch = 16, col = clr[i])
lines(w_ts, up, col = "darkgray")
lines(w_ts, mn, col = "black", lwd = 2)
lines(w_ts, lo, col = "darkgray")
}
```
Figure 10\.5: Data and model fits for the DFA with covariates.
10\.13 Problems
---------------
For questions 1\-3, use the Lake Washington plankton data from the chapter. `dat` is the data to use.
```
library(MARSS)
data(lakeWAplankton, package = "MARSS")
all_dat <- lakeWAplanktonTrans
yr_frst <- 1980
yr_last <- 1989
plank_dat <- all_dat[all_dat[, "Year"] >= yr_frst & all_dat[,
"Year"] <= yr_last, ]
phytoplankton <- c("Cryptomonas", "Diatoms", "Greens", "Unicells",
"Other.algae")
dat_1980 <- plank_dat[, phytoplankton]
## transpose data so time goes across columns
dat_1980 <- t(dat_1980)
## remove the mean
dat <- zscore(dat_1980, mean.only = TRUE)
```
1. Fit other DFA models to the phytoplankton data with varying numbers of latent trends from 1\-4 (we fit a 3 latent trend model above). Do not include any covariates in these models. Using `R="diagonal and unequal"` for the observation errors, which of the DFA models has the most support from the data?
Plot the model states (latent trends) and loadings as in Section [10\.9](sec-dfa-estimated-states.html#sec-dfa-estimated-states). Describe the general patterns in the states and the ways the different taxa load onto those trends.
Also plot the model fits as in Section [10\.10](sec-dfa-plot-data.html#sec-dfa-plot-data). Do they look reasonable? Are there any particular problems or outliers?
2. How does the best model from Question 1 compare to a DFA model with the same number of latent trends, but with `R="unconstrained"`?
Plot the model states (latent trends) and loadings as in Section [10\.9](sec-dfa-estimated-states.html#sec-dfa-estimated-states). Describe the general patterns in the states and the ways the different taxa load onto those trends.
Also plot the model fits as in Section [10\.10](sec-dfa-plot-data.html#sec-dfa-plot-data). Do they look reasonable? Are there any particular problems or outliers?
3. Fit a DFA model that includes temperature as a covariate and 3 trends (as in Section [10\.12](sec-dfa-lakeWA.html#sec-dfa-lakeWA)), but with `R="unconstrained"`. How does this model compare to the model with `R="diagonal and unequal"`? How does it compare to the model in Question 2?
Plot the model states and loadings as in Section [10\.9](sec-dfa-estimated-states.html#sec-dfa-estimated-states). Describe the general patterns in the states and the ways the different taxa load onto those trends.
Also plot the model fits as in Section [10\.10](sec-dfa-plot-data.html#sec-dfa-plot-data). Do they look reasonable? Are there any particular problems or outliers?
Chapter 11 Covariates with Missing Values
=========================================
A script with all the R code in the chapter can be downloaded [here](./Rcode/multivariate-ss-missing-cov.R). The Rmd for this chapter can be downloaded [here](./Rmds/multivariate-ss-missing-cov.Rmd).
### Data and packages
This chapter will use a SNOTEL dataset. These are data on snow water equivalency at locations throughout the state of Washington. The data are in the **atsalibrary** package.
```
data(snotel, package = "atsalibrary")
```
The main packages used in this chapter are **MARSS** and **forecast**.
```
library(MARSS)
library(forecast)
library(ggplot2)
library(ggmap)
library(broom)
```
11\.1 Covariates with missing values or observation error
---------------------------------------------------------
The specific formulation of Equation [(8\.1\)](sec-msscov-overview.html#eq:msscov-covars) creates restrictions on the assumptions regarding the covariate data. You have to assume that your covariate data have no error, which is probably not true. You cannot have missing values in your covariate data, which is also unlikely. You cannot combine instrument time series; for example, if you have two temperature recorders with different error rates and biases. Also, what if you have one noisy temperature sensor in the first part of your time series and then you switch to a much better sensor in the second half of your time series? All these problems require pre\-analysis massaging of the covariate data, leaving out noisy and gappy covariate data, and making what can feel like arbitrary choices about which covariate time series to include.
To circumvent these potential problems and allow more flexibility in how we incorporate covariate data, one can instead treat the covariates as components of an auto\-regressive process by including them in both the process and observation models. Beginning with the process equation, we can write
\\\[\\begin{equation}
\\begin{gathered}
\\begin{bmatrix}\\mathbf{x}^{(v)} \\\\ \\mathbf{x}^{(c)}\\end{bmatrix}\_t
\= \\begin{bmatrix}\\mathbf{B}^{(v)} \& \\mathbf{C} \\\\ 0 \& \\mathbf{B}^{(c)}\\end{bmatrix}
\\begin{bmatrix}\\mathbf{x}^{(v)} \\\\ \\mathbf{x}^{(c)}\\end{bmatrix}\_{t\-1}
\+ \\begin{bmatrix}\\mathbf{u}^{(v)} \\\\ \\mathbf{u}^{(c)} \\end{bmatrix}
\+ \\mathbf{w}\_t,\\\\
\\mathbf{w}\_t \\sim \\,\\text{MVN}\\begin{pmatrix}0,\\begin{bmatrix}\\mathbf{Q}^{(v)} \& 0 \\\\ 0 \& \\mathbf{Q}^{(c)} \\end{bmatrix} \\end{pmatrix}
\\end{gathered}
\\tag{11\.1}
\\end{equation}\\]
The elements with superscript \\({(v)}\\) are for the \\(k\\) variate states and those with superscript \\({(c)}\\) are for the \\(q\\) covariate states. The dimension of \\(\\mathbf{x}^{(c)}\\) is \\(q \\times 1\\) and \\(q\\) is not necessarily equal to \\(p\\), the number of covariate observation time series in your dataset. Imagine, for example, that you have two temperature sensors and you are combining these data. Then you have two covariate observation time series (\\(p\=2\\)) but only one underlying covariate state time series (\\(q\=1\\)). The matrix \\(\\mathbf{C}\\) is dimension \\(k \\times q\\), and \\(\\mathbf{B}^{(c)}\\) and \\(\\mathbf{Q}^{(c)}\\) are dimension \\(q \\times q\\). The dimension of \\(\\mathbf{x}^{(v)}\\) is \\(k \\times 1\\), and \\(\\mathbf{B}^{(v)}\\) and \\(\\mathbf{Q}^{(v)}\\) are dimension \\(k \\times k\\). The dimension of \\(\\mathbf{x}\\) is always denoted \\(m\\). If your process model includes only variates, then \\(k\=m\\), but now your process model includes \\(k\\) variates and \\(q\\) covariate states so \\(m\=k\+q\\).
Next, we can write the observation equation in an analogous manner, such that
\\\[\\begin{equation}
\\begin{gathered}
\\begin{bmatrix} \\mathbf{y}^{(v)} \\\\ \\mathbf{y}^{(c)} \\end{bmatrix}\_t
\= \\begin{bmatrix}\\mathbf{Z}^{(v)} \& \\mathbf{D} \\\\ 0 \& \\mathbf{Z}^{(c)} \\end{bmatrix}
\\begin{bmatrix}\\mathbf{x}^{(v)} \\\\ \\mathbf{x}^{(c)} \\end{bmatrix}\_t
\+ \\begin{bmatrix} \\mathbf{a}^{(v)} \\\\ \\mathbf{a}^{(c)} \\end{bmatrix}
\+ \\mathbf{v}\_t,\\\\
\\mathbf{v}\_t \\sim \\,\\text{MVN}\\begin{pmatrix}0,\\begin{bmatrix}\\mathbf{R}^{(v)} \& 0 \\\\ 0 \& \\mathbf{R}^{(c)} \\end{bmatrix} \\end{pmatrix}
\\end{gathered}
\\tag{11\.2}
\\end{equation}\\]
The dimension of \\(\\mathbf{y}^{(c)}\\) is \\(p \\times 1\\), where \\(p\\) is the number of covariate observation time series in your dataset. The dimension of \\(\\mathbf{y}^{(v)}\\) is \\(l \\times 1\\), where \\(l\\) is the number of variate observation time series in your dataset. The total dimension of \\(\\mathbf{y}\\) is \\(l\+p\\). The matrix \\(\\mathbf{D}\\) is dimension \\(l \\times q\\), \\(\\mathbf{Z}^{(c)}\\) is dimension \\(p \\times q\\), and \\(\\mathbf{R}^{(c)}\\) is dimension \\(p \\times p\\). The dimension of \\(\\mathbf{Z}^{(v)}\\) is \\(l \\times k\\), and \\(\\mathbf{R}^{(v)}\\) is dimension \\(l \\times l\\).
The \\(\\mathbf{D}\\) matrix would presumably have a number of all\-zero rows in it, as would the \\(\\mathbf{C}\\) matrix. The covariates that affect the states would often be different than the covariates that affect the observations. For example, mean annual temperature might affect population growth rates for many species while having little or no effect on observability, and turbidity might strongly affect observability in many types of aquatic surveys but have little effect on population growth rate.
Our MARSS model with covariates now looks on the surface like a regular MARSS model:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}\_t \= \\mathbf{B}\\mathbf{x}\_{t\-1} \+ \\mathbf{u} \+ \\mathbf{w}\_t, \\text{ where } \\mathbf{w}\_t \\sim \\,\\text{MVN}(0,\\mathbf{Q}) \\\\
\\mathbf{y}\_t \= \\mathbf{Z}\\mathbf{x}\_t \+ \\mathbf{a} \+ \\mathbf{v}\_t, \\text{ where } \\mathbf{v}\_t \\sim \\,\\text{MVN}(0,\\mathbf{R})
\\end{gathered}
\\end{equation}\\]
with the \\(\\mathbf{x}\_t\\), \\(\\mathbf{y}\_t\\) and parameter matrices redefined as in Equations [(11\.1\)](sec-mssmiss-overview.html#eq:mssmiss-marsscovarx) and [(11\.2\)](sec-mssmiss-overview.html#eq:mssmiss-marsscovary):
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}\=\\begin{bmatrix}\\mathbf{x}^{(v)}\\\\ \\mathbf{x}^{(c)}\\end{bmatrix} \\quad \\mathbf{B}\=\\begin{bmatrix}\\mathbf{B}^{(v)} \& \\mathbf{C} \\\\ 0 \& \\mathbf{B}^{(c)}\\end{bmatrix} \\quad \\mathbf{u}\=\\begin{bmatrix}\\mathbf{u}^{(v)}\\\\ \\mathbf{u}^{(c)}\\end{bmatrix} \\quad \\mathbf{Q}\=\\begin{bmatrix}\\mathbf{Q}^{(v)} \& 0 \\\\ 0 \& \\mathbf{Q}^{(c)}\\end{bmatrix} \\\\
\\mathbf{y}\=\\begin{bmatrix}\\mathbf{y}^{(v)}\\\\ \\mathbf{y}^{(c)}\\end{bmatrix} \\quad \\mathbf{Z}\=\\begin{bmatrix}\\mathbf{Z}^{(v)} \& \\mathbf{D} \\\\ 0 \& \\mathbf{Z}^{(c)}\\end{bmatrix} \\quad \\mathbf{a}\=\\begin{bmatrix}\\mathbf{a}^{(v)}\\\\ \\mathbf{a}^{(c)}\\end{bmatrix} \\quad \\mathbf{R}\=\\begin{bmatrix}\\mathbf{R}^{(v)} \& 0 \\\\ 0 \& \\mathbf{R}^{(c)}\\end{bmatrix}
\\end{gathered}
\\tag{11\.3}
\\end{equation}\\]
Note \\(\\mathbf{Q}\\) and \\(\\mathbf{R}\\) are written as block diagonal matrices, but you could allow covariances if that made sense. \\(\\mathbf{u}\\) and \\(\\mathbf{a}\\) are column vectors here. We can fit the model (Equation [(11\.3\)](sec-mssmiss-overview.html#eq:mssmiss-marss-covar)) as usual using the `MARSS()` function.
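To make the block structure in Equation [(11\.3\)](sec-mssmiss-overview.html#eq:mssmiss-marss-covar) concrete, here is a minimal sketch of a model list for the simplest case with one variate state (\\(k\=1\\)) and one covariate state (\\(q\=1\\)), assuming \\(\\mathbf{u}\\) and \\(\\mathbf{a}\\) are zero; the parameter names (`"b"`, `"C1"`, `"q.v"`, etc.) are illustrative placeholders, not values from this chapter.

```
## [ B_v  C  ]       [ Z_v  D  ]
## [ 0   B_c ]  and  [ 0   Z_c ]  as MARSS list matrices (column-wise fill)
B <- matrix(list("b", 0, "C1", "b.c"), 2, 2)
Q <- matrix(list("q.v", 0, 0, "q.c"), 2, 2)  # block-diagonal Q
Z <- matrix(list(1, 0, "D1", 1), 2, 2)
R <- matrix(list("r.v", 0, 0, "r.c"), 2, 2)  # block-diagonal R
model.aug <- list(B = B, U = "zero", Q = Q, Z = Z, A = "zero", R = R)
```

A model list of this kind is what `model.aug` refers to in the code below.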
The log\-likelihood that is returned by MARSS will include the log\-likelihood of the covariates under the covariate state model. If you want only the log\-likelihood of the non\-covariate data, you will need to subtract off the log\-likelihood of the covariate model:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}^{(c)}\_t \= \\mathbf{B}^{(c)}\\mathbf{x}\_{t\-1}^{(c)} \+ \\mathbf{u}^{(c)} \+ \\mathbf{w}\_t, \\text{ where } \\mathbf{w}\_t \\sim \\,\\text{MVN}(0,\\mathbf{Q}^{(c)}) \\\\
\\mathbf{y}^{(c)}\_t \= \\mathbf{Z}^{(c)}\\mathbf{x}\_t^{(c)} \+ \\mathbf{a}^{(c)} \+ \\mathbf{v}\_t, \\text{ where } \\mathbf{v}\_t \\sim \\,\\text{MVN}(0,\\mathbf{R}^{(c)})
\\end{gathered}
\\tag{11\.4}
\\end{equation}\\]
An easy way to get this log\-likelihood for the covariate data only is to use the augmented model (Equation [(11\.2\)](sec-mssmiss-overview.html#eq:mssmiss-marsscovary) with terms defined as in Equation [(11\.3\)](sec-mssmiss-overview.html#eq:mssmiss-marss-covar)) but pass in missing values for the non\-covariate data. The following code shows how to do this.
```
y.aug = rbind(data, covariates)
fit.aug = MARSS(y.aug, model = model.aug)
```
`fit.aug` is the MLE object that can be passed to `MARSSkf()`. You need to make a version of this MLE object with the non\-covariate data filled with NAs so that you can compute the log\-likelihood without the covariates. This needs to be done in the `marss` element since that is what is used by `MARSSkf()`. Below is code to do this.
```
fit.cov = fit.aug
fit.cov$marss$data[1:dim(data)[1], ] = NA
extra.LL = MARSSkf(fit.cov)$logLik
```
Note that when you fit the augmented model, the estimates of \\(\\mathbf{C}\\) and \\(\\mathbf{B}^{(c)}\\) are affected by the non\-covariate data since the model for both the non\-covariate and covariate data are estimated simultaneously and are not independent (since the covariate states affect the non\-covariates states). If you want the covariate model to be unaffected by the non\-covariate data, you can fit the covariate model separately and use the estimates for \\(\\mathbf{B}^{(c)}\\) and \\(\\mathbf{Q}^{(c)}\\) as fixed values in your augmented model.
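A sketch of that two\-step approach, where `model.cov` is a hypothetical model list for the covariate\-only model:

```
## step 1: fit the covariate-only model on its own
fit.c <- MARSS(covariates, model = model.cov)
B.c <- coef(fit.c, type = "matrix")$B
Q.c <- coef(fit.c, type = "matrix")$Q
## step 2: insert these numeric estimates into the B^(c) and Q^(c)
## blocks of the augmented model list, then fit the augmented model
```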
11\.2 Example: Snotel Data
--------------------------
Let’s see an example using the Washington SNOTEL data. The data we will use is the snow water equivalent percent of normal. This represents the snow water equivalent compared to the average value for that site on the same day. We will look at a subset of sites in the Central Cascades in our `snotel` dataset (Figure [11\.1](example-snotel-data.html#fig:mssmiss-plotsnotel)).
```
y <- snotelmeta
# Just use a subset
y = y[which(y$Longitude < -121.4), ]
y = y[which(y$Longitude > -122.5), ]
y = y[which(y$Latitude < 47.5), ]
y = y[which(y$Latitude > 46.5), ]
```
Figure 11\.1: Subset of SNOTEL sites used in this chapter.
For the first analysis, we are just going to look at February Snow Water Equivalent (SWE). Our subset of stations is `y$Station.Id`. There are many missing years among some of our stations (Figure [11\.2](example-snotel-data.html#fig:mssmiss-plotsnotelts)).
```
swe.feb <- snotel
swe.feb <- swe.feb[swe.feb$Station.Id %in% y$Station.Id & swe.feb$Month ==
"Feb", ]
p <- ggplot(swe.feb, aes(x = Date, y = SWE)) + geom_line()
p + facet_wrap(~Station)
```
Figure 11\.2: Snow water equivalent time series from each SNOTEL station.
### 11\.2\.1 Estimate missing Feb SWE using AR(1\) with spatial correlation
Imagine that for our study we need an estimate of SWE for all sites. We will use the information from the sites with full data to estimate the missing SWE for other sites, and we will use a MARSS model so that we can make use of all the available data.
\\\[\\begin{equation}
\\begin{gathered}
\\begin{bmatrix}
x\_1 \\\\ x\_2 \\\\ \\dots \\\\ x\_{15}
\\end{bmatrix}\_t \=
\\begin{bmatrix}
b\&0\&\\dots\&0 \\\\
0\&b\&\\dots\&0 \\\\
\\dots\&\\dots\&\\dots\&\\dots \\\\
0\&0\&\\dots\&b
\\end{bmatrix}
\\begin{bmatrix}
x\_1 \\\\ x\_2 \\\\ \\dots \\\\ x\_{15}
\\end{bmatrix}\_{t\-1} \+
\\begin{bmatrix}
w\_1 \\\\ w\_2 \\\\ \\dots \\\\ w\_{15}
\\end{bmatrix}\_{t} \\\\
\\begin{bmatrix}
y\_1 \\\\ y\_2 \\\\ \\dots \\\\ y\_{15}
\\end{bmatrix}\_t \=
\\begin{bmatrix}
x\_1 \\\\ x\_2 \\\\ \\dots \\\\ x\_{15}
\\end{bmatrix}\_t \+
\\begin{bmatrix}
a\_1 \\\\ a\_2 \\\\ \\dots \\\\ a\_{15}
\\end{bmatrix}\_{t} \+
\\begin{bmatrix}
v\_1 \\\\ v\_2 \\\\ \\dots \\\\ v\_{15}
\\end{bmatrix}\_t
\\end{gathered}
\\tag{11\.5}
\\end{equation}\\]
We will use an unconstrained variance\-covariance structure for \\(\\mathbf{w}\\) and assume that the \\(\\mathbf{v}\\) are independent and identically distributed with a very low variance (SNOTEL instrument variability). The \\(a\_i\\) determine the level of the \\(x\_i\\).
We need our data to be in rows. We will use `reshape2::acast()`.
```
dat.feb <- reshape2::acast(swe.feb, Station ~ Year, value.var = "SWE")
```
We set up the model for MARSS so that it is the same as [(11\.5\)](example-snotel-data.html#eq:mssmiss-ar1). We will fix the measurement error to be small; we could use 0 but the fitting is more stable if we use a small variance instead. When estimating \\(\\mathbf{B}\\), setting the initial value to be at \\(t\=1\\) instead of \\(t\=0\\) works better.
```
ns <- length(unique(swe.feb$Station))
B <- "diagonal and equal"
Q <- "unconstrained"
R <- diag(0.01, ns)
U <- "zero"
A <- "unequal"
x0 <- "unequal"
mod.list.ar1 = list(B = B, Q = Q, R = R, U = U, x0 = x0, A = A,
tinitx = 1)
```
Now we can fit a MARSS model and get estimates of the missing SWEs. Convergence is slow. We set \\(\\mathbf{a}\\) equal to the mean of the time series to speed convergence.
```
library(MARSS)
m <- apply(dat.feb, 1, mean, na.rm = TRUE)
fit.ar1 <- MARSS(dat.feb, model = mod.list.ar1, control = list(maxit = 5000),
inits = list(A = matrix(m, ns, 1)))
```
The \\(b\\) estimate is `0.4494841`.
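That estimate can be pulled from the fitted object with `coef()`; because \\(\\mathbf{B}\\) is `"diagonal and equal"`, any diagonal element gives \\(b\\):

```
## all diagonal elements of B are the same estimated b
coef(fit.ar1, type = "matrix")$B[1, 1]
```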
Let’s plot the estimated SWEs for the missing years (Figure [11\.3](example-snotel-data.html#fig:mssmiss-snotelplotfits-ar1)). These estimates use all the information about the correlation with other sites as well as the correlation with the prior and subsequent years. We will use the `fitted()` function to get the estimates and the 95% prediction intervals. The prediction interval is for the range of SWE values we might observe for that site. Notice that for some sites the intervals are narrow in early years because these sites are highly correlated with sites for which there are data. In other sites, the uncertainty is high in early years because the sites with data in those years are not highly correlated. There are no intervals for sites with data. We have data for those sites, so we are not uncertain about the observed SWE for those.
```
fit <- fit.ar1
d <- fitted(fit, interval = "prediction", type = "ytT")
d$Year <- d$t + 1980
d$Station <- d$.rownames
p <- ggplot(data = d) + geom_line(aes(Year, .fitted)) + geom_point(aes(Year,
y)) + geom_ribbon(aes(x = Year, ymin = .lwr, ymax = .upr),
linetype = 2, alpha = 0.2, fill = "blue") + facet_wrap(~Station) +
xlab("") + ylab("SWE (demeaned)")
p
```
Figure 11\.3: Estimated SWEs for the missing sites with prediction intervals.
If we were using these SWE as covariates in a site\-specific model, we could then use the estimates as our covariates; however, this would not incorporate parameter uncertainty. Alternatively, we could use Equation [(11\.1\)](sec-mssmiss-overview.html#eq:mssmiss-marsscovarx) and set the parameters for the covariate process to those estimated for our covariate\-only model. This approach will incorporate the uncertainty in the SWE estimates in the early years for the sites with no data.
Note, we should do some cross\-validation (fitting with data left out) to ensure that the estimated SWEs are well\-matched to actual measurements. It would probably be best to do ‘leave\-three\-out’ instead of ‘leave\-one\-out’ since the estimates for time \\(t\\) use information from \\(t\-1\\) and \\(t\+1\\) (if present).
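A minimal sketch of what one leave\-three\-out replicate could look like; the station name and held\-out years here are hypothetical choices:

```
## hold out three consecutive years at one station and re-fit
dat.cv <- dat.feb
drop.yrs <- 10:12                    # hypothetical held-out columns (years)
dat.cv["Paradise", drop.yrs] <- NA   # station assumed present in rownames
fit.cv <- MARSS(dat.cv, model = mod.list.ar1, control = list(maxit = 5000),
    inits = list(A = matrix(m, ns, 1)))
## compare the predictions at the held-out points to the actual data
d.cv <- fitted(fit.cv, type = "ytT", interval = "prediction")
```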
#### 11\.2\.1\.1 Diagnostics
The model residuals have a tendency for negative autocorrelation at lag\-1 (Figure [11\.4](example-snotel-data.html#fig:mssmiss-modelresids-ar1)).
```
fit <- fit.ar1
par(mfrow = c(4, 4), mar = c(2, 2, 1, 1))
apply(MARSSresiduals(fit, type = "tt1")$model.residuals[, 1:30],
1, acf, na.action = na.pass)
```
Figure 11\.4: Model residuals for the AR(1\) model.
### 11\.2\.2 Estimate missing Feb SWE using only correlation
Another approach is to treat the February data as temporally uncorrelated. The two longest time series (Paradise and Olallie Meadows) show minimal autocorrelation so we might decide to just use the correlation across stations for our estimates. In this case, the state of the missing SWE values at time \\(t\\) is the expected value conditioned on all the stations with data at time \\(t\\) given the estimated variance\-covariance matrix \\(\\mathbf{Q}\\).
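A quick way to check that claim (a sketch; the station names are assumed to match the rownames of `dat.feb`):

```
## ACFs of the two longest February SWE series
par(mfrow = c(1, 2))
acf(dat.feb["Paradise", ], na.action = na.pass, main = "Paradise")
acf(dat.feb["Olallie Meadows", ], na.action = na.pass, main = "Olallie Meadows")
```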
We could set this model up as
\\\[\\begin{equation}
\\begin{bmatrix}
y\_1 \\\\ y\_2 \\\\ \\dots \\\\ y\_{15}
\\end{bmatrix}\_t \=
\\begin{bmatrix}
a\_1 \\\\ a\_2 \\\\ \\dots \\\\ a\_{15}
\\end{bmatrix}\_{t} \+
\\begin{bmatrix}
v\_1 \\\\ v\_2 \\\\ \\dots \\\\ v\_{15}
\\end{bmatrix}\_t, \\,\\,\\,
\\begin{bmatrix}
\\sigma\_1\&\\zeta\_{1,2}\&\\dots\&\\zeta\_{1,15} \\\\
\\zeta\_{2,1}\&\\sigma\_2\&\\dots\&\\zeta\_{2,15} \\\\
\\dots\&\\dots\&\\dots\&\\dots \\\\
\\zeta\_{15,1}\&\\zeta\_{15,2}\&\\dots\&\\sigma\_{15}
\\end{bmatrix}
\\tag{11\.6}
\\end{equation}\\]
However, the EM algorithm used by `MARSS()` runs into numerical issues with this form. Instead we will set the model up as follows; allowing a hidden state observed with small error makes the estimation more stable.
\\\[\\begin{equation}
\\begin{gathered}
\\begin{bmatrix}
x\_1 \\\\ x\_2 \\\\ \\dots \\\\ x\_{15}
\\end{bmatrix}\_t \=
\\begin{bmatrix}
w\_1 \\\\ w\_2 \\\\ \\dots \\\\ w\_{15}
\\end{bmatrix}\_{t}, \\,\\,\\,
\\begin{bmatrix}
w\_1 \\\\ w\_2 \\\\ \\dots \\\\ w\_{15}
\\end{bmatrix}\_{t} \\sim
\\begin{bmatrix}
\\sigma\_1\&\\zeta\_{1,2}\&\\dots\&\\zeta\_{1,15} \\\\
\\zeta\_{2,1}\&\\sigma\_2\&\\dots\&\\zeta\_{2,15} \\\\
\\dots\&\\dots\&\\dots\&\\dots \\\\
\\zeta\_{15,1}\&\\zeta\_{15,2}\&\\dots\&\\sigma\_{15}
\\end{bmatrix} \\\\
\\begin{bmatrix}
y\_1 \\\\ y\_2 \\\\ \\dots \\\\ y\_{15}
\\end{bmatrix}\_t \=
\\begin{bmatrix}
x\_1 \\\\ x\_2 \\\\ \\dots \\\\ x\_{15}
\\end{bmatrix}\_t \+
\\begin{bmatrix}
a\_1 \\\\ a\_2 \\\\ \\dots \\\\ a\_{15}
\\end{bmatrix}\_{t} \+
\\begin{bmatrix}
v\_1 \\\\ v\_2 \\\\ \\dots \\\\ v\_{15}
\\end{bmatrix}\_t, \\,\\,\\, \\begin{bmatrix}
0\.01\&0\&\\dots\&0 \\\\
0\&0\.01\&\\dots\&0 \\\\
\\dots\&\\dots\&\\dots\&\\dots \\\\
0\&0\&\\dots\&0\.01
\\end{bmatrix}
\\end{gathered}
\\tag{11\.7}
\\end{equation}\\]
Again \\(\\mathbf{a}\\) is the mean level in the time series. Note that the expected value of \\(\\mathbf{x}\\) is zero if there are no data, so \\(E(\\mathbf{x}\_0\)\=0\\).
```
ns <- length(unique(swe.feb$Station))
B <- "zero"
Q <- "unconstrained"
R <- diag(0.01, ns)
U <- "zero"
A <- "unequal"
x0 <- "zero"
mod.list.corr = list(B = B, Q = Q, R = R, U = U, x0 = x0, A = A,
tinitx = 0)
```
Now we can fit a MARSS model and get estimates of the missing SWEs. Convergence is slow. We set \\(\\mathbf{a}\\) equal to the mean of the time series to speed convergence.
```
m <- apply(dat.feb, 1, mean, na.rm = TRUE)
fit.corr <- MARSS(dat.feb, model = mod.list.corr, control = list(maxit = 5000),
inits = list(A = matrix(m, ns, 1)))
```
The estimated SWEs for the missing years use the information about the correlation with other sites only.
```
fit <- fit.corr
d <- fitted(fit, type = "ytT", interval = "prediction")
d$Year <- d$t + 1980
d$Station <- d$.rownames
p <- ggplot(data = d) + geom_line(aes(Year, .fitted)) + geom_point(aes(Year,
y)) + geom_ribbon(aes(x = Year, ymin = .lwr, ymax = .upr),
linetype = 2, alpha = 0.2, fill = "blue") + facet_wrap(~Station) +
xlab("") + ylab("SWE (demeaned)")
p
```
Figure 11\.5: Estimated SWEs from the expected value of the states \\(\\hat{x}\\) conditioned on all the data for the model with only correlation across stations at time \\(t\\).
#### 11\.2\.2\.1 Diagnostics
The model residuals have no tendency towards negative autocorrelation now that we removed the autoregressive component from the process (\\(x\\)) model.
```
fit <- fit.corr
par(mfrow = c(4, 4), mar = c(2, 2, 1, 1))
apply(MARSSresiduals(fit, type = "tt1")$model.residuals, 1, acf,
na.action = na.pass)
mtext("Model Residuals ACF", outer = TRUE, side = 3)
```
### 11\.2\.3 Estimate missing Feb SWE using DFA
Another approach we might take is to model SWE using Dynamic Factor Analysis. Our model might take the following form with two factors, modeled as AR(1\) processes. \\(\\mathbf{a}\\) is the mean level of the time series.
\\\[\\begin{equation}
\\begin{gathered}
\\begin{bmatrix}
x\_1 \\\\ x\_2
\\end{bmatrix}\_t \=
\\begin{bmatrix}
b\_1\&0\\\\0\&b\_2
\\end{bmatrix}
\\begin{bmatrix}
x\_1 \\\\ x\_2
\\end{bmatrix}\_{t\-1} \+ \\begin{bmatrix}
w\_1 \\\\ w\_2
\\end{bmatrix}\_{t} \\\\
\\begin{bmatrix}
y\_1 \\\\ y\_2 \\\\ \\dots \\\\ y\_{15}
\\end{bmatrix}\_t \=
\\begin{bmatrix}
z\_{1,1}\&0\\\\z\_{2,1}\&z\_{2,2}\\\\ \\dots\\\\z\_{15,1}\&z\_{15,2}
\\end{bmatrix}\\begin{bmatrix}
x\_1 \\\\ x\_2
\\end{bmatrix}\_t \+
\\begin{bmatrix}
a\_1 \\\\ a\_2 \\\\ \\dots \\\\ a\_{15}
\\end{bmatrix} \+
\\begin{bmatrix}
v\_1 \\\\ v\_2 \\\\ \\dots \\\\ v\_{15}
\\end{bmatrix}\_t
\\end{gathered}
\\end{equation}\\]
The model is set up as follows:
```
ns <- dim(dat.feb)[1]
B <- matrix(list(0), 2, 2)
B[1, 1] <- "b1"
B[2, 2] <- "b2"
Q <- diag(1, 2)
R <- "diagonal and unequal"
U <- "zero"
x0 <- "zero"
Z <- matrix(list(0), ns, 2)
Z[1:(ns * 2)] <- c(paste0("z1", 1:ns), paste0("z2", 1:ns))
Z[1, 2] <- 0
A <- "unequal"
mod.list.dfa = list(B = B, Z = Z, Q = Q, R = R, U = U, A = A,
x0 = x0)
```
Now we can fit a MARSS model and get estimates of the missing SWEs. We pass in the initial value for \\(\\mathbf{a}\\) as the mean level so it fits easier.
```
library(MARSS)
m <- apply(dat.feb, 1, mean, na.rm = TRUE)
fit.dfa <- MARSS(dat.feb, model = mod.list.dfa, control = list(maxit = 1000),
inits = list(A = matrix(m, ns, 1)))
```
### 11\.2\.4 Diagnostics
The model residuals are uncorrelated.
```
fit <- fit.dfa
par(mfrow = c(4, 4), mar = c(2, 2, 1, 1))
apply(MARSSresiduals(fit, type = "tt1")$model.residuals, 1, function(x) {
acf(x, na.action = na.pass)
})
```
### 11\.2\.5 Plot the fitted or mean Feb SWE using DFA
The plots showed the estimate of the missing Feb SWE values, which is the expected value of \\(\\mathbf{y}\\) conditioned on all the data. For the non\-missing SWE, this expected value is just the observation. Many times we want the model fit for the covariate. If the measurements have observation error, the fitted value is the estimate without this observation error.
For the estimated states conditioned on all the data we want `tsSmooth()`. We will not show the prediction intervals which would be for new data. We will just show the confidence intervals on the fitted estimate for the missing values. The confidence intervals are small so they are a bit hard to see.
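A sketch of that call, using the DFA fit from above; see `?tsSmooth` in **MARSS** for the exact columns returned:

```
## smoothed estimate of y (the SWE) conditioned on all the data, with CIs
d <- tsSmooth(fit.dfa, type = "ytT", interval = "confidence")
head(d)
```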
11\.3 Modeling Seasonal SWE
---------------------------
When we look at all months, we see that SWE is highly seasonal. Note October and November are missing for all years.
```
swe.yr <- snotel
swe.yr <- swe.yr[swe.yr$Station.Id %in% y$Station.Id, ]
swe.yr$Station <- droplevels(swe.yr$Station)
```
Set up the data matrix of monthly SNOTEL data:
```
dat.yr <- snotel
dat.yr <- dat.yr[dat.yr$Station.Id %in% y$Station.Id, ]
dat.yr$Station <- droplevels(dat.yr$Station)
dat.yr$Month <- factor(dat.yr$Month, level = month.abb)
dat.yr <- reshape2::acast(dat.yr, Station ~ Year + Month, value.var = "SWE")
```
We will model the seasonal differences using a periodic model. The covariates are
```
period <- 12
TT <- dim(dat.yr)[2]
cos.t <- cos(2 * pi * seq(TT)/period)
sin.t <- sin(2 * pi * seq(TT)/period)
c.seas <- rbind(cos.t, sin.t)
```
### 11\.3\.1 Modeling season across sites
We will create a state for the seasonal cycle, and each station will have a scaled effect of that seasonal cycle. The observations will have the seasonal effect plus a mean, and the residuals (observation \- season \- mean) will be allowed to correlate across stations.
```
ns <- dim(dat.yr)[1]
B <- "zero"
Q <- matrix(1)
R <- "unconstrained"
U <- "zero"
x0 <- "zero"
Z <- matrix(paste0("z", 1:ns), ns, 1)
A <- "unequal"
mod.list.dfa = list(B = B, Z = Z, Q = Q, R = R, U = U, A = A,
x0 = x0)
C <- matrix(c("c1", "c2"), 1, 2)
c <- c.seas
mod.list.seas <- list(B = B, U = U, Q = Q, A = A, R = R, Z = Z,
C = C, c = c, x0 = x0, tinitx = 0)
```
Now we can fit the model:
```
m <- apply(dat.yr, 1, mean, na.rm = TRUE)
fit.seas <- MARSS(dat.yr, model = mod.list.seas, control = list(maxit = 500),
inits = list(A = matrix(m, ns, 1)))
```
**The seasonal patterns**
A plot of the seasonal estimate plus prediction intervals for each station shows \\(z\_i x\_t \+ a\_i\\). The prediction interval shows our estimate of the range of the data we would see around the seasonal estimate.
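A sketch of how that seasonal estimate can be computed from the fitted object (the matrix algebra follows the model definition above; `TT.s` is an illustrative name):

```
## seasonal estimate for each station: z_i * x_t + a_i
Z.est <- coef(fit.seas, type = "matrix")$Z
A.est <- coef(fit.seas, type = "matrix")$A
TT.s <- dim(fit.seas$states)[2]
seas.fit <- Z.est %*% fit.seas$states + A.est %*% matrix(1, 1, TT.s)
```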
**Estimates for the missing years**
The estimated mean SWE at each station is \\(E(y\_{t,i}\|y\_{1:T})\\). This is the estimate of \\(y\_{t,i}\\) conditioned on all the data and includes the seasonal component plus the information from the data from other stations. If \\(y\_{t,i}\\) is observed, \\(E(y\_{t,i}\|y\_{1:T}) \= y\_{t,i}\\), i.e. just the observed value. But if \\(y\_{t,i}\\) is unobserved, the stations with data at time \\(t\\) help inform \\(y\_{t,i}\\), the value of the station without data at time \\(t\\). Note this is not the case when we computed the fitted value for \\(y\_{t,i}\\). In that case, the data inform \\(\\mathbf{R}\\), but we do not treat the observed data at time \\(t\\) as ‘observed’ and influencing the missing \\(y\_{t,i}\\) through \\(\\mathbf{R}\\).
Only years up to 1990 are shown, but the model is fit to all years. The stations with no data before 1990 are being estimated based on the information in the later years when they do have data. We did not constrain the SWE to be positive, so negative estimates are possible and occur in the months in which we have no SWE data (because there is no snow).
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/chap-jags.html |
Chapter 12 JAGS for Bayesian time series analysis
=================================================
In this lab, we will illustrate how to use JAGS to fit time series models with Bayesian methods. The purpose of this chapter is to teach you some basic JAGS models. To go beyond these basics, study the wide variety of software tools for doing time series analysis with Bayesian methods, e.g., the packages listed on the CRAN [TimeSeries](http://cran.r-project.org/web/views/TimeSeries.html) task view.
A script with all the R code in the chapter can be downloaded [here](./Rcode/intro-to-jags.R). The Rmd for this chapter can be downloaded [here](./Rmds/intro-to-jags.Rmd).
### Data and packages
For this lab, we will use a dataset on air quality in New York. For the majority of our models, we will treat wind speed as the response variable.
```
data(airquality, package = "datasets")
Wind <- airquality$Wind # wind speed
Temp <- airquality$Temp # air temperature
N <- dim(airquality)[1] # number of data points
```
To run this code, you will need to install JAGS for your operating platform using the instructions [here](http://sourceforge.net/projects/mcmc-jags/files/). Click on JAGS, then the most recent folder, then the platform of your machine. You will also need the **coda**, **rjags** and **R2jags** packages.
```
library(coda)
library(rjags)
library(R2jags)
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-jags-univariate.html |
12\.2 Univariate response models
----------------------------------
### 12\.2\.1 Linear regression with no covariates
We will start with a linear regression with only an intercept. We will write the model in the form of Equation [(12\.1\)](sec-jags-overview.html#eq:jags-uniss). Our model is
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= u \\\\
y\_t \= x\_t \+ v\_t, v\_t \\sim \\,\\text{N}(0, r)
\\end{gathered}
\\tag{12\.2}
\\end{equation}\\]
An equivalent way to think about this model is
\\\[\\begin{equation}
Y\_t \\sim \\,\\text{N}(E\[Y\_t], r)
\\end{equation}\\]
\\(E\[Y\_{t}] \= x\_t\\) where \\(x\_t \= u\\).
In this linear regression model, we will treat the residual error as independent and identically distributed Gaussian observation error.
To run the JAGS model, we will need to start by writing the model in JAGS notation. We can construct the model in Equation [(12\.2\)](sec-jags-univariate.html#eq:jags-lr1) as
```
# LINEAR REGRESSION intercept only.
model.loc <- "lm_intercept.txt" # name of the txt file
jagsscript <- cat("
model {
# priors on parameters
u ~ dnorm(0, 0.01);
inv.r ~ dgamma(0.001,0.001); # This is inverse gamma
r <- 1/inv.r; # derived value
# likelihood
for(i in 1:N) {
X[i] <- u
EY[i] <- X[i]; # derived value
Y[i] ~ dnorm(EY[i], inv.r);
}
}
",
file = model.loc)
```
The JAGS code has three parts: our parameter priors, our likelihood (the data model), and derived values.
**Parameter priors** There are two parameters in the model (\\(u\\), the mean, and \\(r\\), the variance of the observation error). We need to set a prior on both of these. We will set a vague Gaussian prior with variance 100 on \\(u\\). In JAGS, instead of specifying the normal distribution with the variance, \\(N(0, 100\)\\), you specify it with the precision (1/variance), so our prior on \\(u\\) is `dnorm(0, 0.01)`. For \\(r\\), we need to set a prior on the precision \\(1/r\\), which we call `inv.r` in the code. The precision receives a gamma prior, which is equivalent to the variance receiving an inverse gamma prior (fairly common for standard Bayesian regression models).
**Likelihood** Our data distribution is \\(Y\_t \\sim \\,\\text{N}(E\[Y\_t], r)\\). We use the `dnorm()` distribution with the precision (\\(1/r\\)) instead of \\(r\\). So our data model is `Y[t] ~ dnorm(EY[t], inv.r)`. JAGS is not vectorized, so we need to use for loops (instead of matrix multiplication) to specify the distribution for each `Y[t]`. For this model, we didn’t actually need `X[t]`, but we use it because we are building up to a state\-space model which has both \\(x\_t\\) and \\(y\_t\\).
**Derived values** Derived values are things we want output so we can track them. In this example, our derived values are a bit useless but in more complex models they will be quite handy. Also they can make your code easier to understand.
To run the model, we need to create several new objects, representing (1\) a list of data that we will pass to JAGS `jags.data`, (2\) a vector of parameters that we want to monitor and have returned back to R `jags.params`, and (3\) the name of our text file that contains the JAGS model we wrote above. With those three things, we can call the `jags()` function.
```
jags.data <- list(Y = Wind, N = N)
jags.params <- c("r", "u") # parameters to be monitored
mod_lm_intercept <- R2jags::jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
The function from the **R2jags** package that we use to run the model is `jags()`. There is a parallel version of the function called `jags.parallel()` which is useful for larger, more complex models. The details of both can be found with `?jags` or `?jags.parallel`.
Notice that the `jags()` function contains a number of other important arguments. In general, larger is better for all arguments: we want to run multiple MCMC chains (maybe 3 or more), and have a burn\-in of at least 5000\. The total number of samples after the burn\-in period is `n.iter - n.burnin`, which in this case is 5000 samples per chain. Because we are doing this with 3 MCMC chains, and the thinning rate equals 1 (meaning we are saving every sample), we will retain a total of 15,000 posterior samples for each parameter.
The saved object storing our model diagnostics can be accessed directly, and includes some useful summary output.
```
mod_lm_intercept
```
```
Inference for Bugs model at "lm_intercept.txt", fit using jags,
3 chains, each with 10000 iterations (first 5000 discarded)
n.sims = 15000 iterations saved
mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
r 12.563 1.469 10.009 11.525 12.460 13.484 15.752 1.001 15000
u 9.950 0.287 9.378 9.757 9.950 10.145 10.502 1.001 15000
deviance 820.566 2.013 818.591 819.137 819.943 821.342 826.073 1.001 15000
For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
DIC info (using the rule, pD = var(deviance)/2)
pD = 2.0 and DIC = 822.6
DIC is an estimate of expected predictive error (lower deviance is better).
```
The last two columns in the summary contain `Rhat` (which we want to be close to 1\.0\) and `n.eff` (the effective sample size of each set of posterior draws). To examine the output more closely, we can pull all of the results directly into R,
```
R2jags::attach.jags(mod_lm_intercept)
```
Attaching the **R2jags** object loads the posteriors for the parameters and we can call them directly, e.g. `u`. If we don’t want to attach them to our workspace, we can find the posteriors within the model object.
```
post.params <- mod_lm_intercept$BUGSoutput$sims.list
```
We make a histogram of the posterior distributions of the parameters `u` and `r` with the following code,
```
# Now we can make plots of posterior values
par(mfrow = c(2, 1))
hist(post.params$u, 40, col = "grey", xlab = "u", main = "")
hist(post.params$r, 40, col = "grey", xlab = "r", main = "")
```
Figure 12\.1: Plot of the posteriors for the linear regression model.
We can run some useful diagnostics from the **coda** package on this model output. We have written a small function to ease the creation of an MCMC list (an argument required for many of the diagnostics). The function is
```
createMcmcList <- function(jagsmodel) {
McmcArray <- as.array(jagsmodel$BUGSoutput$sims.array)
McmcList <- vector("list", length = dim(McmcArray)[2])
for (i in 1:length(McmcList)) McmcList[[i]] <- as.mcmc(McmcArray[,
i, ])
McmcList <- mcmc.list(McmcList)
return(McmcList)
}
```
Creating the MCMC list preserves the random samples generated from each chain and allows you to extract the samples for a given parameter (such as \\(u\\)) from any chain you want; a short sketch of extracting \\(u\\) from the first chain follows the summary output below. Because `createMcmcList()` returns a list of **mcmc** objects, we can summarize and plot these directly. Figure [12\.2](sec-jags-univariate.html#fig:jags-plot-myList) shows the plot from `plot(myList[[1]])`.
```
myList <- createMcmcList(mod_lm_intercept)
summary(myList[[1]])
```
```
Iterations = 1:5000
Thinning interval = 1
Number of chains = 1
Sample size per chain = 5000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
deviance 820.576 2.0226 0.028604 0.029635
r 12.561 1.4679 0.020760 0.020760
u 9.947 0.2877 0.004069 0.004069
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
deviance 818.591 819.133 819.961 821.37 826.09
r 9.982 11.536 12.452 13.49 15.68
u 9.368 9.756 9.945 10.14 10.50
```
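To extract the samples of \\(u\\) from the first chain specifically, the **mcmc** object can be indexed by column name (a small sketch):
```
# posterior draws of u from chain 1
u.chain1 <- myList[[1]][, "u"]
mean(u.chain1)
```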
```
plot(myList[[1]])
```
Figure 12\.2: Plot of an object output from `createMcmcList()`.
For more quantitative diagnostics of MCMC convergence, we can rely on the **coda** package in R. There are several useful statistics available, including the Gelman\-Rubin diagnostic (for one or several chains), autocorrelation diagnostics (similar to the ACF you calculated above), the Geweke diagnostic, and the Heidelberger\-Welch test of stationarity.
```
library(coda)
gelmanDiags <- coda::gelman.diag(createMcmcList(mod_lm_intercept),
multivariate = FALSE)
autocorDiags <- coda::autocorr.diag(createMcmcList(mod_lm_intercept))
gewekeDiags <- coda::geweke.diag(createMcmcList(mod_lm_intercept))
heidelDiags <- coda::heidel.diag(createMcmcList(mod_lm_intercept))
```
### 12\.2\.2 Linear regression with covariates
We can introduce `Temp` as the covariate explaining our response variable `Wind`. Our new equation is
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= u \+ C\\,c\_t\\\\
y\_t \= x\_t \+ v\_t, v\_t \\sim \\,\\text{N}(0, r)
\\end{gathered}
\\tag{12\.3}
\\end{equation}\\]
To create JAGS code for this model, we (1\) add a prior for our new parameter `C`, (2\) update the `X[i]` equation to include the new covariate, and (3\) include the new covariate in our named data list.
```
# 1. LINEAR REGRESSION with covariates
model.loc <- ("lm_covariate.txt")
jagsscript <- cat("
model {
# priors on parameters
u ~ dnorm(0, 0.01);
C ~ dnorm(0,0.01);
inv.r ~ dgamma(0.001,0.001);
r <- 1/inv.r;
# likelihood
for(i in 1:N) {
X[i] <- u + C*c[i];
EY[i] <- X[i]
Y[i] ~ dnorm(EY[i], inv.r);
}
}
",
file = model.loc)
jags.data <- list(Y = Wind, N = N, c = Temp)
jags.params <- c("r", "EY", "u", "C")
mod_lm <- R2jags::jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
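To gauge the effect of temperature, one might summarize the posterior of `C` (a sketch using the fitted object above):
```
# posterior quantiles for the covariate effect
post <- mod_lm$BUGSoutput$sims.list
quantile(post$C, probs = c(0.025, 0.5, 0.975))
```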
We can show the posterior fits (the model fits) to the data. Here is a simple function whose arguments are one of our fitted models and the raw data. The function is:
```
plotModelOutput <- function(jagsmodel, Y) {
# extract the posterior draws of the model fits (EY)
EY <- jagsmodel$BUGSoutput$sims.list$EY
x <- seq(1, length(Y))
summaryPredictions <- cbind(apply(EY, 2, quantile, 0.025),
apply(EY, 2, mean), apply(EY, 2, quantile, 0.975))
plot(Y, col = "white", ylim = c(min(c(Y, summaryPredictions)),
max(c(Y, summaryPredictions))), xlab = "", ylab = "95% CIs of predictions and data",
main = paste("JAGS results:", jagsmodel$model.file))
polygon(c(x, rev(x)), c(summaryPredictions[, 1], rev(summaryPredictions[,
3])), col = "grey70", border = NA)
lines(summaryPredictions[, 2])
points(Y)
}
```
We can use the function to plot the predicted posterior mean with 95% CIs, as well as the raw data. Note that the shading is for the CIs on the expected value of \\(y\_t\\), so it will look narrow relative to the data. For example, try
```
plotModelOutput(mod_lm, Wind)
```
Figure 12\.3: Predicted posterior mean with 95% CIs
### 12\.2\.3 Random walk with drift
The previous models were observation error only models. Switching gears, we can create process error models. We will start with a random walk model. In this model, the assumption is that the underlying state \\(x\_t\\) is measured perfectly. All stochasticity originates from process variation: variation from \\(x\_t\\) to \\(x\_{t\+1}\\).
For this simple model, we will assume that wind behaves as a random walk. We will call this process \\(x\\) to prepare for the state\-space model to come. We have no \\(y\_t\\) part of the equation in this model.
\\\[\\begin{equation}
x\_t \= x\_{t\-1} \+ u \+ w\_t, \\text{ where }w\_t \\sim \\,\\text{N}(0,q)
\\tag{12\.4}
\\end{equation}\\]
Now \\(x\_t\\) is stochastic and \\(E\[X\_t] \= x\_{t\-1} \+ u\\) and \\(X\_t \\sim \\,\\text{N}(E\[X\_t],q)\\).
We are going to need to put a prior on \\(x\_0\\), which appears in \\(E\[X\_1]\\). We could start with \\(t\=2\\) and skip this but we will start at \\(t\=1\\) since we will need to do that for later problems. The question is what prior should we put on \\(x\_0\\)? This is not a stationary process. We will just put a vague prior on \\(x\_0\\).
The JAGS random walk model is:
```
# RANDOM WALK with drift
model.loc <- ("rw_intercept.txt")
jagsscript <- cat("
model {
# priors on parameters
u ~ dnorm(0, 0.01);
inv.q ~ dgamma(0.001,0.001);
q <- 1/inv.q;
X0 ~ dnorm(0, 0.001);
# likelihood
X[1] ~ dnorm(X0 + u, inv.q);
for(i in 2:N) {
X[i] ~ dnorm(X[i-1] + u, inv.q);
}
}
",
file = model.loc)
```
To fit this model, we need to change `jags.data` to pass in `X = Wind` instead of `Y = Wind`. Obviously we could have written the JAGS code with `Y` in place of `X` and kept our `jags.data` code the same as before, but we are working up to a state\-space model where we have a hidden random walk called `X` and an observation of that called `Y`.
```
jags.data <- list(X = Wind, N = N)
jags.params <- c("q", "u")
mod_rw_intercept <- R2jags::jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
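The posterior summaries for the drift and process variance can then be read off the summary table (row names match the monitored parameters):
```
# posterior summary of the monitored parameters
mod_rw_intercept$BUGSoutput$summary[c("u", "q"), ]
```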
### 12\.2\.4 Autoregressive AR(1\) time series models
A variation of the random walk model is the autoregressive time series model of order 1, AR(1\). This model introduces a coefficient, which we will call \\(b\\). The parameter \\(b\\) controls the degree to which the random walk reverts to the mean. When \\(b \= 1\\), the model is identical to the random walk, but at smaller values, the model will revert to the mean (which in this case is zero). Also, \\(b\\) can take on negative values.
\\\[\\begin{equation}
x\_t \= b \\, x\_{t\-1} \+ u \+ w\_t, \\text{ where }w\_t \\sim \\,\\text{N}(0,q)
\\tag{12\.5}
\\end{equation}\\]
Now \\(E\[X\_t] \= b \\, x\_{t\-1} \+ u\\).
Once again we need to put a prior on \\(x\_0\\), which appears in \\(E\[X\_1]\\). An AR(1\) with \\(\|b\|\<1\\) is a stationary process and the variance of the stationary distribution of \\(x\_t\\) is \\(q/(1\-b^2\)\\). If you think that \\(x\_0\\) has the stationary distribution (does your data look stationary?) then you can use the variance of the stationary distribution of \\(x\_t\\) for your prior. We specify priors with the precision (1 over the variance) instead of the variance. Thus the precision of the stationary distribution of \\(x\_0\\) is \\((1/q)(1\-b^2\)\\). In the code, `inv.q` is \\(1/q\\) and the precision is `inv.q * (1-b*b)`.
```
# AR(1) MODEL WITH AN ESTIMATED AR COEFFICIENT
model.loc <- ("ar1_intercept.txt")
jagsscript <- cat("
model {
# priors on parameters
u ~ dnorm(0, 0.01);
inv.q ~ dgamma(0.001,0.001);
q <- 1/inv.q;
b ~ dunif(-1,1);
X0 ~ dnorm(0, inv.q * (1 - b * b));
# likelihood
X[1] ~ dnorm(b * X0 + u, inv.q);
for(t in 2:N) {
X[t] ~ dnorm(b * X[t-1] + u, inv.q);
}
}
",
file = model.loc)
jags.data <- list(X = Wind, N = N)
jags.params <- c("q", "u", "b")
mod_ar1_intercept <- R2jags::jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
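A quick look at the posterior of the mean\-reversion parameter \\(b\\) (values near 1 suggest the series is close to a random walk):
```
# histogram of the posterior draws of b
post.params <- mod_ar1_intercept$BUGSoutput$sims.list
hist(post.params$b, 40, col = "grey", xlab = "b", main = "")
```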
### 12\.2\.5 Regression with AR(1\) errors
The AR(1\) model in the previous section suggests a way that we could include correlated errors in our linear regression. We could use the \\(x\_t\\) AR(1\) process as our errors for \\(y\_t\\). Here is an example of modifying the intercept only linear regression model. We will set \\(u\\) to 0 so that our AR(1\) errors have a mean of 0\.
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= b \\, x\_{t\-1} \+ w\_t, \\text{ where }w\_t \\sim \\,\\text{N}(0,q) \\\\
y\_t \= a \+ x\_t
\\end{gathered}
\\tag{12\.6}
\\end{equation}\\]
The problem with this is that we need a distribution for \\(y\_t\\). We cannot use `Y[t] <- a + X[t]` in our JAGS code (\\(Y\_t\\) is a random variable with a distribution; you cannot assign it a value). We need to re\-write this as \\(Y\_t \\sim N(a \+ b \\, x\_{t\-1}, q)\\).
\\\[\\begin{equation}
\\begin{gathered}
Y\_t \\sim N(a \+ b \\, x\_{t\-1}, q) \\\\
x\_t \= y\_t \- a
\\end{gathered}
\\tag{12\.7}
\\end{equation}\\]
We will create the variable `EY` so we can keep track of the expected value of \\(Y\_t\\), conditioned on the data up to time \\(t\-1\\).
```
# LINEAR REGRESSION with autocorrelated errors no
# covariates, intercept only.
model.loc <- ("lm_intercept_ar1b.txt")
jagsscript <- cat("
model {
# priors on parameters
a ~ dnorm(0, 0.01);
inv.q ~ dgamma(0.001,0.001);
q <- 1/inv.q;
b ~ dunif(-1,1);
X0 ~ dnorm(0, inv.q * (1 - b * b));
# likelihood
EY[1] <- a + b * X0;
Y[1] ~ dnorm(EY[1], inv.q);
X[1] <- Y[1] - a;
for(t in 2:N) {
EY[t] <- a + b * X[t-1];
Y[t] ~ dnorm(EY[t], inv.q);
X[t] <- Y[t]-a;
}
}
",
file = model.loc)
jags.data <- list(Y = Wind, N = N)
jags.params <- c("q", "EY", "a", "b")
mod_ar1_intercept <- R2jags::jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
### 12\.2\.6 Univariate state space model
Now we will combine the process and observation models to create a univariate state\-space model. This is the classic stochastic level model.
\\\[\\begin{equation}
\\begin{gathered}
x\_t \= x\_{t\-1} \+ u \+ w\_t, \\, w\_t \\sim N(0,q)\\\\
y\_t \= x\_t \+ v\_t, \\, v\_t \\sim \\,\\text{N}(0, r)
\\end{gathered}
\\tag{12\.8}
\\end{equation}\\]
Because \\(x\\) is a random walk, not a stationary AR(1\), we will place a vague, weakly informative prior on \\(x\_0\\): \\(x\_0 \\sim \\,\\text{N}(y\_1, 1000\)\\). We had to pass in `Y1` as data because JAGS would complain if we used `Y[1]` in our prior (because we have `X0` in our model for \\(Y\[1]\\)). `EY` is added so that we can track the model fits for \\(y\\). In this case it is just `X`, but in more complex models it will involve more parameters.
```
model.loc <- ("ss_model.txt")
jagsscript <- cat("
model {
# priors on parameters
u ~ dnorm(0, 0.01);
inv.q ~ dgamma(0.001,0.001);
q <- 1/inv.q;
inv.r ~ dgamma(0.001,0.001);
r <- 1/inv.r;
X0 ~ dnorm(Y1, 0.001);
# likelihood
X[1] ~ dnorm(X0 + u, inv.q);
EY[1] <- X[1];
Y[1] ~ dnorm(EY[1], inv.r);
for(t in 2:N) {
X[t] ~ dnorm(X[t-1] + u, inv.q);
EY[t] <- X[t];
Y[t] ~ dnorm(EY[t], inv.r);
}
}
",
file = model.loc)
```
We fit as usual with the addition of `Y1` in `jags.data`.
```
jags.data <- list(Y = Wind, N = N, Y1 = Wind[1])
jags.params <- c("q", "r", "EY", "u")
mod_ss <- jags(jags.data, parameters.to.save = jags.params, model.file = model.loc,
n.chains = 3, n.burnin = 5000, n.thin = 1, n.iter = 10000,
DIC = TRUE)
```
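Because `EY` is again among the monitored parameters, the plotting helper defined earlier can be reused to show the state\-space fits against the data:
```
plotModelOutput(mod_ss, Wind)
```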
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-jags-marss.html |
12\.3 Multivariate state\-space models
--------------------------------------
In the multivariate state\-space model, our observations and hidden states can be multivariate along with all the parameters:
\\\[\\begin{equation}
\\begin{gathered}
\\mathbf{x}\_t \= \\mathbf{B} \\mathbf{x}\_{t\-1}\+\\mathbf{u}\+\\mathbf{w}\_t \\text{ where } \\mathbf{w}\_t \\sim \\,\\text{N}(0,\\mathbf{Q}) \\\\
\\mathbf{y}\_t \= \\mathbf{Z}\\mathbf{x}\_t\+\\mathbf{a}\+\\mathbf{v}\_t \\text{ where } \\mathbf{v}\_t \\sim \\,\\text{N}(0,\\mathbf{R}) \\\\
\\mathbf{x}\_0 \= \\boldsymbol{\\mu}
\\end{gathered}
\\tag{12\.9}
\\end{equation}\\]
### 12\.3\.1 One hidden state
Let’s start with a very simple MARSS model with JAGS: two observation time\-series and one hidden state. Our \\(\\mathbf{x}\_t\\) model is \\(x\_t \= x\_{t\-1} \+ u \+ w\_t\\) and our \\(\\mathbf{y}\_t\\) model is
\\\[\\begin{equation}
\\begin{bmatrix}
y\_{1} \\\\
y\_{2}\\end{bmatrix}\_t \=
\\begin{bmatrix}
1\\\\
1\\end{bmatrix} x\_t \+
\\begin{bmatrix}
0 \\\\
a\_2\\end{bmatrix} \+
\\begin{bmatrix}
v\_{1} \\\\
v\_{2}\\end{bmatrix}\_t, \\,
\\begin{bmatrix}
v\_{1} \\\\
v\_{2}\\end{bmatrix}\_t \\sim
\\,\\text{MVN}\\left(0, \\begin{bmatrix}
r\_1\&0 \\\\
0\&r\_2\\end{bmatrix}\\right)
\\tag{12\.10}
\\end{equation}\\]
We need to put a prior on our \\(x\_0\\) (initial \\(x\\)). Since \\(b\=1\\), we have a random walk rather than a stationary process, and we will put a vague prior on \\(x\_0\\). We need to deal with the \\(\\mathbf{a}\\) so that our code doesn’t run in circles by trying to match \\(x\\) up with different \\(y\_t\\) time series. We force \\(x\_t\\) to track the mean of \\(y\_{1,t}\\) and then use \\(a\_2\\) to scale the other \\(y\_t\\) relative to that. The problem is that a random walk is very flexible, and if we tried to estimate \\(a\_1\\) as well, there would be infinitely many solutions.
To keep our JAGS code organized, let’s separate the \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\) parts of the code.
```
jagsscript <- cat("
model {
# process model priors
u ~ dnorm(0, 0.01); # one u
inv.q~dgamma(0.001,0.001);
q <- 1/inv.q; # one q
X0 ~ dnorm(Y1,0.001); # initial state
# process model likelihood
EX[1] <- X0 + u;
X[1] ~ dnorm(EX[1], inv.q);
for(t in 2:N) {
EX[t] <- X[t-1] + u;
X[t] ~ dnorm(EX[t], inv.q);
}
# observation model priors
for(i in 1:n) { # r's differ by site
inv.r[i]~dgamma(0.001,0.001);
r[i] <- 1/inv.r[i];
}
a[1] <- 0; # first a is 0, rest estimated
for(i in 2:n) {
a[i]~dnorm(0,0.001);
}
# observation model likelihood
for(t in 1:N) {
for(i in 1:n) {
EY[i,t] <- X[t]+a[i]
Y[i,t] ~ dnorm(EY[i,t], inv.r[i]);
}
}
}
",
file = "marss-jags1.txt")
```
To fit the model, we write the data list, parameter list, and pass the model to the `jags()` function.
```
data(harborSealWA, package = "MARSS")
dat <- t(harborSealWA[, 2:3])
jags.data <- list(Y = dat, n = nrow(dat), N = ncol(dat), Y1 = dat[1,
1])
jags.params <- c("EY", "u", "q", "r")
model.loc <- "marss-jags1.txt" # name of the txt file
mod_marss1 <- R2jags::jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
We can make a plot of our estimated parameters:
```
post.params <- mod_marss1$BUGSoutput$sims.list
par(mfrow = c(2, 2))
hist(log(post.params$q), main = "log(q)", xlab = "")
hist(post.params$u, main = "u", xlab = "")
hist(log(post.params$r[, 1]), main = "log(r_1)", xlab = "")
hist(log(post.params$r[, 2]), main = "log(r_2)", xlab = "")
```
We can make a plot of the model fitted \\(y\_t\\) with 50% credible intervals and the data. Note that the credible intervals are for the expected value of \\(y\_{i,t}\\), so they will be narrower than the spread of the data.
```
make.ey.plot <- function(mod, dat) {
library(ggplot2)
EY <- mod$BUGSoutput$sims.list$EY
n <- nrow(dat)
N <- ncol(dat)
df <- c()
for (i in 1:n) {
tmp <- data.frame(n = paste0("Y", i), x = 1:N, ey = apply(EY[,
i, , drop = FALSE], 3, median), ey.low = apply(EY[,
i, , drop = FALSE], 3, quantile, probs = 0.25), ey.up = apply(EY[,
i, , drop = FALSE], 3, quantile, probs = 0.75), y = dat[i,
])
df <- rbind(df, tmp)
}
ggplot(df, aes(x = x, y = ey)) + geom_line() + geom_ribbon(aes(ymin = ey.low,
ymax = ey.up), alpha = 0.25) + geom_point(data = df,
aes(x = x, y = y)) + facet_wrap(~n) + theme_bw()
}
```
```
make.ey.plot(mod_marss1, dat)
```
### 12\.3\.2 \\(m\\) hidden states
Let’s add multiple hidden states. We’ll say that each \\(y\_t\\) observes its own \\(x\_t\\), and the \\(x\_t\\) share the same \\(q\\) but not the same \\(u\\). Our \\(\\mathbf{x}\_t\\) model is \\\[\\begin{equation}
\\begin{bmatrix}
x\_{1} \\\\
x\_{2}\\end{bmatrix}\_t \=
\\begin{bmatrix}
1\&0\\\\
0\&1\\end{bmatrix}
\\begin{bmatrix}
x\_{1} \\\\
x\_{2}\\end{bmatrix}\_{t\-1} \+
\\begin{bmatrix}
u\_1 \\\\
u\_2\\end{bmatrix} \+
\\begin{bmatrix}
w\_{1} \\\\
w\_{2}\\end{bmatrix}\_t, \\,
\\begin{bmatrix}
w\_{1} \\\\
w\_{2}\\end{bmatrix}\_t \\sim
\\,\\text{MVN}\\left(0, \\begin{bmatrix}
q\&0 \\\\
0\&q\\end{bmatrix}\\right)
\\tag{12\.11}
\\end{equation}\\]
Here is the JAGS model. Note that \\(a\_i\\) is 0 for all \\(i\\) because each \\(y\_t\\) is associated with its own \\(x\_t\\).
```
jagsscript <- cat("
model {
# process model priors
inv.q~dgamma(0.001,0.001);
q <- 1/inv.q; # one q
for(i in 1:n) {
u[i] ~ dnorm(0, 0.01);
X0[i] ~ dnorm(Y1[i],0.001); # initial states
}
# process model likelihood
for(i in 1:n) {
EX[i,1] <- X0[i] + u[i];
X[i,1] ~ dnorm(EX[i,1], inv.q);
}
for(t in 2:N) {
for(i in 1:n) {
EX[i,t] <- X[i,t-1] + u[i];
X[i,t] ~ dnorm(EX[i,t], inv.q);
}
}
# observation model priors
for(i in 1:n) { # The r's are different by site
inv.r[i]~dgamma(0.001,0.001);
r[i] <- 1/inv.r[i];
}
# observation model likelihood
for(t in 1:N) {
for(i in 1:n) {
EY[i,t] <- X[i,t]
Y[i,t] ~ dnorm(EY[i,t], inv.r[i]);
}
}
}
",
file = "marss-jags2.txt")
```
Our code to fit the model changes a little.
```
data(harborSealWA, package = "MARSS")
dat <- t(harborSealWA[, 2:3])
jags.data <- list(Y = dat, n = nrow(dat), N = ncol(dat), Y1 = dat[,
1])
jags.params <- c("EY", "u", "q", "r")
model.loc <- "marss-jags2.txt" # name of the txt file
mod_marss1 <- R2jags::jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
```
make.ey.plot(mod_marss1, dat)
```
| Time Series Analysis and Forecasting |
atsa-es.github.io | https://atsa-es.github.io/atsa-labs/sec-jags-non-gaussian.html |
12\.4 Non\-Gaussian observation errors
--------------------------------------
### 12\.4\.1 Poisson observation errors
So far we have used the following observation model \\(y\_t \\sim \\,\\text{N}(x\_t, r)\\).
We can change this to a Poisson observation error model:
\\(Y\_t \\sim \\text{Pois}(\\lambda\_t)\\) where \\(E\[Y\_t] \= \\lambda\_t\\). \\(\\text{log}(\\lambda\_t) \= x\_t\\) where \\(x\_t\\) is our process model.
All we need to change to allow Poisson errors is to change the `Y[t]` part to
```
log(EY[t]) <- X[t]
Y[t] ~ dpois(EY[t])
```
We also need to ensure that our data are integers, and we remove the `r` part from our model code since the Poisson does not have a separate variance parameter.
Our univariate state\-space code with Poisson observation errors is the following:
```
# SS MODEL with Poisson errors
model.loc <- ("ss_model_pois.txt")
jagsscript <- cat("
model {
# priors on parameters
u ~ dnorm(0, 0.01);
inv.q ~ dgamma(0.001,0.001);
q <- 1/inv.q;
X0 ~ dnorm(0, 0.001);
# likelihood
X[1] ~ dnorm(X0 + u, inv.q);
log(EY[1]) <- X[1]
Y[1] ~ dpois(EY[1])
for(t in 2:N) {
X[t] ~ dnorm(X[t-1] + u, inv.q);
log(EY[t]) <- X[t]
Y[t] ~ dpois(EY[t]);
}
}
",
file = model.loc)
```
We will fit this to the wild dogs data in the **MARSS** package.
```
data(wilddogs, package = "MARSS")
jags.data <- list(Y = wilddogs[, 2], N = nrow(wilddogs))
jags.params <- c("q", "EY", "u")
mod_ss <- jags(jags.data, parameters.to.save = jags.params, model.file = model.loc,
n.chains = 3, n.burnin = 5000, n.thin = 1, n.iter = 10000,
DIC = TRUE)
```
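A sketch of plotting the posterior median of the expected counts over the data (in `wilddogs`, column 1 is the year and column 2 the count):
```
# posterior median of the expected counts over the observed counts
EY <- mod_ss$BUGSoutput$sims.list$EY
plot(wilddogs[, 1], wilddogs[, 2], pch = 16, xlab = "Year", ylab = "Count")
lines(wilddogs[, 1], apply(EY, 2, median))
```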
When we use this univariate state\-space model with population data, like the wild dogs, we would log the data\\(^\\dagger\\), and our \\(y\_t\\) in our JAGS code is really \\(log(y\_t)\\). In that case, \\(E\[log(Y\_t)] \= f(x\_t)\\). So there is a log\-link that we are not really explicit about when we pass in the log of our data. In the Poisson model, that log relationship is explicit, i.e., we specify \\(log(E\[Y\_t]) \= x\_t\\) and we pass in the raw count data, not the log of the data.
\\(\\dagger\\) Why would we typically log population data in this case? Because we would typically think of population processes as multiplicative. Population size at time \\(t\\) is growth **times** population size at time \\(t\-1\\). By logging the data, we convert to an additive process. Log population size at time \\(t\\) is log growth **plus** log population size at time \\(t\-1\\).
### 12\.4\.2 Negative binomial observation errors
In the Poisson distribution, the mean and variance are the same. Using the negative binomial distribution, we can relax that assumption and allow the mean and variance to be different. The negative binomial distribution has two parameters, \\(r\\) and \\(p\\). \\(r\\) is the dispersion parameter. As \\(r \\rightarrow \\infty\\), the distribution becomes the Poisson distribution, and when \\(r\\) is small, the distribution is overdispersed (higher variance) relative to the Poisson. In practice, \\(r \> 30\\) is going to be very close to the Poisson. \\(p\\) is the success parameter, \\(p \= r/(r\+E\[Y\_t])\\). As for the Poisson, \\(log E\[Y\_{t}] \= x\_t\\) (for the univariate state\-space model in this example with one state, \\(z\=1\\) and \\(a\=0\\)).
To allow negative binomial errors we change the `Y[t]` part to
```
log(EY[t]) <- X[t]
p[t] <- r/(r + EY[t])
Y[t] ~ dnegbin(p[t], r)
```
Now that we have \\(r\\) again in the model, we will need to put a prior on it. \\(r\\) is positive and 50 is close to infinity. The following is a sufficiently vague prior.
```
r ~ dunif(0,50)
```
Our univariate state\-space code with negative binomial observation errors is the following:
```
# SS MODEL with negative binomial errors
model.loc <- ("ss_model_negbin.txt")
jagsscript <- cat("
model {
# priors on parameters
u ~ dnorm(0, 0.01);
inv.q ~ dgamma(0.001,0.001);
q <- 1/inv.q;
r ~ dunif(0,50);
X0 ~ dnorm(0, 0.001);
# likelihood
X[1] ~ dnorm(X0 + u, inv.q);
log(EY[1]) <- X[1]
p[1] <- r/(r + EY[1])
Y[1] ~ dnegbin(p[1], r)
for(t in 2:N) {
X[t] ~ dnorm(X[t-1] + u, inv.q);
log(EY[t]) <- X[t]
p[t] <- r/(r + EY[t])
Y[t] ~ dnegbin(p[t], r)
}
}
",
file = model.loc)
```
We will fit this to the wild dogs data in the **MARSS** package.
```
data(wilddogs, package = "MARSS")
jags.data <- list(Y = wilddogs[, 2], N = nrow(wilddogs))
jags.params <- c("q", "EY", "u", "r")
mod_ss <- jags(jags.data, parameters.to.save = jags.params, model.file = model.loc,
n.chains = 3, n.burnin = 5000, n.thin = 1, n.iter = 10000,
DIC = TRUE)
```
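Because both models were run with `DIC = TRUE`, a rough model comparison comes along for free. A sketch, assuming you saved the earlier Poisson fit under a different name (the hypothetical `mod_ss_pois`) before fitting the negative binomial model:
```
# Hypothetical comparison; mod_ss_pois is the saved Poisson fit
c(poisson = mod_ss_pois$BUGSoutput$DIC,
  negbin = mod_ss$BUGSoutput$DIC)
# the model with the lower DIC is preferred by this criterion
```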
12\.5 Forecasting with JAGS models
----------------------------------
There are a number of different approaches to using Bayesian time series models to perform forecasting. One approach might be to fit a model, and use those posterior distributions to forecast as a secondary step (say within R). A more streamlined approach is to do this within the JAGS code itself. We can take advantage of the fact that JAGS allows you to include NAs in the response variable (but never in the predictors). Let’s use the same Wind dataset, and the univariate state\-space model described above to forecast three time steps into the future. We can do this by including 3 more NAs in the dataset, and incrementing the variable `N` by 3\.
```
jags.data <- list(Y = c(Wind, NA, NA, NA), N = (N + 3), Y1 = Wind[1])
jags.params <- c("q", "r", "EY", "u")
model.loc <- ("ss_model.txt")
mod_ss_forecast <- jags(jags.data, parameters.to.save = jags.params,
model.file = model.loc, n.chains = 3, n.burnin = 5000, n.thin = 1,
n.iter = 10000, DIC = TRUE)
```
We can inspect the fitted model object, and see that `EY` contains the 3 new predictions for the forecasts from this model.
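For example, a sketch (assuming, as above, a fit from the **R2jags** package) that pulls out the three forecasted `EY` values with 95% credible intervals:
```
# Posterior summaries for the three forecasted time steps
# (N is the original series length, as defined above)
EY.sims <- mod_ss_forecast$BUGSoutput$sims.list$EY
fc <- (N + 1):(N + 3)
apply(EY.sims[, fc], 2, quantile, probs = c(0.025, 0.5, 0.975))
```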
12\.6 Problems
--------------
1. Fit the intercept only model from section [12\.2\.1](sec-jags-univariate.html#sec-jags-lr-no-covariates). Set the burn\-in to 3, and when the model completes, plot the time series of the parameter `u` for the first MCMC chain.
1. Based on your visual inspection, has the MCMC chain converged?
2. What is the ACF of the first MCMC chain?
2. Increase the MCMC burn\-in for the model in question 1 to a value that you think is reasonable. After the model has converged, calculate the Gelman\-Rubin diagnostic for the fitted model object.
3. Compare the results of the `plotModelOutput()` function for the intercept only model from section [12\.2\.1](sec-jags-univariate.html#sec-jags-lr-no-covariates). You will need to add “predY” to your JAGS model and to the list of parameters to monitor, and re\-run the model.
4. Plot the posterior distribution of \\(b\\) for the AR(1\) model in section [12\.2\.4](sec-jags-univariate.html#sec-jags-ar1). Can this parameter be well estimated for this dataset?
5. Plot the posteriors for the process and observation variances (not standard deviation) for the univariate state\-space model in section [12\.2\.6](sec-jags-univariate.html#sec-jags-uss). Which is larger for this dataset?
6. Add the effect of temperature to the AR(1\) model in section [12\.2\.4](sec-jags-univariate.html#sec-jags-ar1). Plot the posterior for `C` and compare to the posterior for `C` from the model in section [12\.2\.2](sec-jags-univariate.html#sec-jags-covariates).
7. Plot the fitted values from the model in section [12\.5](sec-jags-forecast.html#sec-jags-forecast), including the forecasts, with the 95% credible intervals for each data point.
8. The following is a dataset from the Upper Skagit River (Puget Sound, 1952\-2005\) on salmon spawners and recruits:
```
Spawners <- c(2662, 1806, 1707, 1339, 1686, 2220, 3121, 5028,
9263, 4567, 1850, 3353, 2836, 3961, 4624, 3262, 3898, 3039,
5966, 5931, 7346, 4911, 3116, 3185, 5590, 2485, 2987, 3829,
4921, 2348, 1932, 3151, 2306, 1686, 4584, 2635, 2339, 1454,
3705, 1510, 1331, 942, 884, 666, 1521, 409, 2388, 1043, 3262,
2606, 4866, 1161, 3070, 3320)
Recruits <- c(12741, 15618, 23675, 37710, 62260, 32725, 8659,
28101, 17054, 29885, 33047, 20059, 35192, 11006, 48154, 35829,
46231, 32405, 20782, 21340, 58392, 21553, 27528, 28246, 35163,
15419, 16276, 32946, 11075, 16909, 22359, 8022, 16445, 2912,
17642, 2929, 7554, 3047, 3488, 577, 4511, 1478, 3283, 1633,
8536, 7019, 3947, 2789, 4606, 3545, 4421, 1289, 6416, 3647)
logRS <- log(Recruits/Spawners)
```
1. Fit the following Ricker model to these data using the following linear form of this model with normally distributed errors:
\\\[\\begin{equation\*}
log(R\_t/S\_t) \= a \+ b \\times S\_t \+ e\_t,\\text{ where } e\_t \\sim \\,\\text{N}(0,\\sigma^2\)
\\end{equation\*}\\]
You will recognize that this form is exactly the same as linear regression, with independent errors (very similar to the intercept only model of Wind we fit in section [12\.2\.1](sec-jags-univariate.html#sec-jags-lr-no-covariates)).
2. Within the constraints of the Ricker model, think about other ways you might want to treat the errors. The basic model described above has independent errors that are not correlated in time. Approaches to analyzing this dataset might involve
* modeling the errors as independent (as described above)
* modeling the errors as autocorrelated
* fitting a state\-space model, with independent or correlated process errors
Fit each of these models, and compare their performance (either using their predictive ability, or forecasting ability).
Chapter 13 Stan for Bayesian time series analysis
=================================================
For this lab, we will use [Stan](http://mc-stan.org/documentation/) for fitting models. These examples are primarily drawn from the Stan manual and previous code from this class.
A script with all the R code in the chapter can be downloaded [here](./Rcode/fitting-models-with-stan.R). The Rmd for this chapter can be downloaded [here](./Rmds/fitting-models-with-stan.Rmd).
### Data and packages
You will need the **atsar** and **bayesdfa** packages we have written for fitting state\-space time series models with Stan. Install using the **devtools** package.
```
library(devtools)
# Windows users will likely need to set this
# Sys.setenv('R_REMOTES_NO_ERRORS_FROM_WARNINGS' = 'true')
devtools::install_github("nwfsc-timeseries/atsar")
devtools::install_github("nwfsc-timeseries/tvvarss")
devtools::install_github("fate-ewi/bayesdfa")
```
In addition, you will need the **rstan**, **datasets**, **parallel** and **loo** packages. After installing, if needed, load the packages:
```
library(atsar)
library(rstan)
library(loo)
```
Once you have Stan and **rstan** installed, optimize Stan on your machine:
```
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
```
For this lab, we will use a data set on air quality in New York from the **datasets** package. Load the data and create a couple new variables for future use.
```
data(airquality, package = "datasets")
Wind <- airquality$Wind # wind speed
Temp <- airquality$Temp # air temperature
```
13\.1 Linear regression
-----------------------
We’ll start with the simplest time series model possible: linear regression with only an intercept, so that the predicted values of all observations are the same. There are several ways we can write this equation. First, the predicted values can be written as \\(E\[Y\_{t}] \= \\beta x\\), where \\(x\=1\\). Assuming that the residuals are normally distributed, the model linking our predictions to observed data is written as
\\\[y\_t \= \\beta x \+ e\_{t}, e\_{t} \\sim N(0,\\sigma), x\=1\\]
An equivalent way to think about this model is that instead of treating the residuals as normally distributed with mean zero, we can think of the data \(y\_t\) as being drawn from a normal distribution with mean equal to the intercept and the same residual standard deviation:
\\\[Y\_t \\sim N(E\[Y\_{t}],\\sigma)\\]
Remember that in linear regression models, the residual error is interpreted as independent and identically distributed observation error.
To run this model using our package, we’ll need to specify the response and predictor variables. The covariate matrix with an intercept only is a matrix of 1s. To double check, you could always look at
```
x <- model.matrix(lm(Temp ~ 1))
```
Fitting the model using our function is done with this code,
```
lm_intercept <- atsar::fit_stan(y = as.numeric(Temp), x = rep(1,
length(Temp)), model_name = "regression")
```
Coarse summaries of `stanfit` objects can be examined by typing one of the following
```
lm_intercept
# this is huge
summary(lm_intercept)
```
But to get more detailed output for each parameter, you have to use the `extract()` function,
```
pars <- rstan::extract(lm_intercept)
names(pars)
```
```
[1] "beta" "sigma" "pred" "log_lik" "lp__"
```
`extract()` will return the draws from the posterior for your parameters and any derived variables specified in your stan code. In this case, our model is
\\\[y\_t \= \\beta \\times 1 \+ e\_t, e\_t \\sim N(0,\\sigma)\\]
so our estimated parameters are \(\beta\) and \(\sigma\). Our Stan code also computed two derived quantities: the predicted values, \(\hat{y}\_t \= \beta \times 1\), and the pointwise log\-likelihood. `lp__` is the log posterior, which is automatically returned.
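Because the Stan code returns the pointwise `log_lik` and we loaded the **loo** package earlier, an approximate leave\-one\-out estimate of predictive ability is two lines away. A sketch:
```
# Sketch: PSIS-LOO from the pointwise log-likelihood draws
ll <- loo::extract_log_lik(lm_intercept, parameter_name = "log_lik")
loo::loo(ll)
```
This becomes useful later when comparing competing models fit to the same data.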
We can then make basic plots or summaries of each of these parameters,
```
hist(pars$beta, 40, col = "grey", xlab = "Intercept", main = "")
```
```
quantile(pars$beta, c(0.025, 0.5, 0.975))
```
```
2.5% 50% 97.5%
4.620617 9.016637 13.326209
```
One of the other useful things we can do is look at the predicted values of our model (\(\hat{y}\_t\=\beta \times 1\)) and overlay the data. The predicted values are in `pars$pred`.
```
plot(apply(pars$pred, 2, mean), main = "Predicted values", lwd = 2,
ylab = "Temp", ylim = c(min(pars$pred), max(pars$pred)),
type = "l")
lines(apply(pars$pred, 2, quantile, 0.025))
lines(apply(pars$pred, 2, quantile, 0.975))
points(Temp, col = "red")
```
Figure 13\.1: Data and predicted values for the linear regression model.
### 13\.1\.1 Burn\-in and thinning
To illustrate the effects of the burn\-in/warmup period and thinning, we can re\-run the above model, but for just 1 MCMC chain (the default is 3\).
```
lm_intercept <- atsar::fit_stan(y = Temp, x = rep(1, length(Temp)),
model_name = "regression", mcmc_list = list(n_mcmc = 1000,
n_burn = 1, n_chain = 1, n_thin = 1))
```
Here is a plot of the time series of `beta` with one chain and no burn\-in. Based on visual inspection, when does the chain converge?
```
pars <- rstan::extract(lm_intercept)
plot(pars$beta)
```
Figure 13\.2: A time series of our posterior draws using one chain and no burn\-in.
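Beyond eyeballing the trace, one option is to look at the autocorrelation of the draws, which indicates how much thinning would be needed to decorrelate them. A sketch:
```
# Autocorrelation of the posterior draws for beta
acf(pars$beta)
# rstan also reports an effective sample size (n_eff), e.g. via
# summary(lm_intercept)$summary
```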
13\.2 Linear regression with correlated errors
----------------------------------------------
In our first model, the errors were independent in time. We’re going to modify this to model autocorrelated errors. Autocorrelated errors are widely used in ecology and other fields – for a greater discussion, see Morris and Doak (2002\) Quantitative Conservation Biology. To make the errors autocorrelated, we start by defining the error in the first time step, \\({e}\_{1} \= y\_{1} \- \\beta\\). The expectation of \\({Y\_t}\\) in each time step is then written as
\\\[E\[{Y\_t}] \= \\beta \+ \\phi e\_{t\-1}\\]
In addition to affecting the expectation, the correlation parameter \\(\\phi\\) also affects the variance of the errors, so that
\[\sigma^2 \= \psi^2 \left( 1 \- \phi^2 \right)\]
Like in our first model, we assume that the data follows a normal likelihood (or equivalently that the residuals are normally distributed), \\(y\_t \= E\[Y\_t] \+ e\_t\\), or \\(Y\_t \\sim N(E\[{Y\_t}], \\sigma)\\). Thus, it is possible to express the subsequent deviations as \\({e}\_{t} \= {y}\_{t} \- E\[{Y\_t}]\\), or equivalently as \\({e}\_{t} \= {y}\_{t} \- \\beta \-\\phi {e}\_{t\-1}\\).
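A small base R simulation (with hypothetical parameter values) that checks this variance relationship: rearranged, innovations with standard deviation \(\sigma\) produce stationary AR(1\) errors with standard deviation \(\psi \= \sigma/\sqrt{1\-\phi^2}\).
```
# Simulate AR(1) errors and compare the empirical variance to psi^2
set.seed(1)
phi <- 0.7; sigma <- 1  # hypothetical values
e <- as.numeric(arima.sim(list(ar = phi), n = 1e5, sd = sigma))
var(e)                  # approximately ...
sigma^2 / (1 - phi^2)   # ... psi^2
```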
We can fit this regression with autocorrelated errors by changing the model name to ‘regression\_cor’
```
lm_intercept_cor <- atsar::fit_stan(y = Temp, x = rep(1, length(Temp)),
model_name = "regression_cor", mcmc_list = list(n_mcmc = 1000,
n_burn = 1, n_chain = 1, n_thin = 1))
```
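To look at the posterior of the autocorrelation parameter, we can extract the draws as before. In this sketch the parameter name `phi` is an assumption, so check `names()` against what the `regression_cor` model actually returns:
```
pars_cor <- rstan::extract(lm_intercept_cor)
names(pars_cor)  # confirm the parameter names this model returns
hist(pars_cor$phi, 40, col = "grey", xlab = "phi", main = "")  # 'phi' assumed
```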
13\.3 Random walk model
-----------------------
All of the previous three models can be interpreted as observation error models. Switching gears, we can alternatively model error in the state of nature, creating process error models. A simple process error model that many of you may have seen before is the random walk model. In this model, the assumption is that the true state of nature (or latent states) are measured perfectly. Thus, all uncertainty is originating from process variation (for ecological problems, this is often interpreted as environmental variation). For this simple model, we’ll assume that our process of interest (in this case, daily wind speed) exhibits no daily trend, but behaves as a random walk.
\\\[y\_t \= y\_{t\-1} \+ e\_{t}\\]
And the \\({e}\_{t} \\sim N(0, \\sigma)\\). Remember back to the autocorrelated model (or MA(1\) models) that we assumed that the errors \\(e\_t\\) followed a random walk. In contrast, this model assumes that the errors are independent, but that the state of nature follows a random walk. Note also that this model as written doesn’t include a drift term (this can be turned on / off using the `est_drift` argument).
We can fit the random walk model using argument `model_name = 'rw'` passed to the `fit_stan()` function.
```
rw <- atsar::fit_stan(y = Temp, est_drift = FALSE, model_name = "rw")
```
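To see what this model assumes about the data, here is a two\-line base R simulation of a driftless random walk:
```
# y_t = y_{t-1} + e_t with e_t ~ N(0, 1)
set.seed(2)
y <- cumsum(rnorm(100))
plot(y, type = "l", ylab = "simulated random walk")
```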