domain | url | text | topic |
---|---|---|---|
coolbutuseless.github.io | https://coolbutuseless.github.io/book/rbytecodebook/17-exploration.html | | R Programming |
coolbutuseless.github.io | https://coolbutuseless.github.io/book/rbytecodebook/20-op-reference-auto.html | | R Programming |
coolbutuseless.github.io | https://coolbutuseless.github.io/book/rbytecodebook/a10-functions-special-builtin.html | | R Programming |
coolbutuseless.github.io | https://coolbutuseless.github.io/book/rbytecodebook/a25-step-by-step.html | | R Programming |
coolbutuseless.github.io | https://coolbutuseless.github.io/book/rbytecodebook/a30-glossary.html | | R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/020_install.html |
Chapter 2 Installation
======================
Before developing with Rcpp, you need to install a C\+\+ compiler.
2\.1 Install C\+\+ compiler
---------------------------
### 2\.1\.1 Windows
Install [Rtools](https://cran.r-project.org/bin/windows/Rtools/index.html).
### 2\.1\.2 Mac
Install Xcode command line tools. Execute the command `xcode-select --install` on Terminal.
### 2\.1\.3 Linux
Install gcc and related packages.
In Ubuntu Linux, execute the command `sudo apt-get install r-base-dev` on Terminal.
2\.2 Using other compilers installed by yourself
------------------------------------------------
If you have installed a compiler (g\+\+, clang\+\+) other than the ones above, create the following file under your home directory and set the environment variables in it.
**Linux, Mac**
* `.R/Makevars`
**Windows**
* `.R/Makevars.win`
**Example environment variable settings**
```
CC=/opt/local/bin/gcc-mp-4.7
CXX=/opt/local/bin/g++-mp-4.7
CPLUS_INCLUDE_PATH=/opt/local/include:$CPLUS_INCLUDE_PATH
LD_LIBRARY_PATH=/opt/local/lib:$LD_LIBRARY_PATH
CXXFLAGS= -g0 -O2 -Wall
MAKE=make -j4
```
2\.3 Install Rcpp
-----------------
You can install Rcpp by executing the following code.
```
install.packages("Rcpp")
```
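One quick way to confirm that Rcpp and the C\+\+ toolchain work together (this check is not part of the original text) is to compile a trivial expression with `Rcpp::evalCpp()`:
```
library(Rcpp)
# Compiles a one-line C++ program on the fly; if the compiler is set up
# correctly, this should return 4.
evalCpp("2 + 2")
```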
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/030_basic_usage.html |
Chapter 3 Basic usage
=====================
You can use your Rcpp function in 3 steps.
1. Writing Rcpp source code
2. Compiling the code
3. Executing the function
3\.1 Writing your Rcpp code
---------------------------
The code below defines a function `rcpp_sum()` that calculates the sum of a vector. Save it as a file named “sum.cpp”.
**sum.cpp**
```
//sum.cpp
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
double rcpp_sum(NumericVector v){
    double sum = 0;
    for(int i = 0; i < v.length(); ++i){
        sum += v[i];
    }
    return(sum);
}
```
### 3\.1\.1 Format for defining a function in Rcpp.
The following code shows the basic format for defining an Rcpp function.
```
#include<Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
RETURN_TYPE FUNCTION_NAME(ARGUMENT_TYPE ARGUMENT){
//do something
return RETURN_VALUE;
}
```
* `#include <Rcpp.h>` : This line lets you use the classes and functions defined by the Rcpp package.
* `// [[Rcpp::export]]` : The function defined just below this attribute will be accessible from R. You need to attach it to every function you want to call from R.
* `using namespace Rcpp;` : This line is optional, but if you omit it you have to add the prefix `Rcpp::` when specifying classes and functions defined by Rcpp (for example, `Rcpp::NumericVector`).
* `RETURN_TYPE FUNCTION_NAME(ARGUMENT_TYPE ARGUMENT){}` : You need to specify the data types of the return value and the arguments.
* `return RETURN_VALUE;` : The `return` statement is mandatory if your function returns a value. However, if your function does not return a value (i.e. `RETURN_TYPE` is `void`), the return statement can be omitted (see the sketch below).
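As a minimal sketch of the `void` case (this example is not in the original text; the function name `rcpp_hello` is only for illustration), the following compiles with `sourceCpp()` and prints a message without returning anything to R:
```
#include <Rcpp.h>
using namespace Rcpp;

// A void function: it prints a message, so the return statement is omitted.
// [[Rcpp::export]]
void rcpp_hello(){
    Rcout << "Hello from Rcpp\n";
}
```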
3\.2 Compiling the code
-----------------------
The function `Rcpp::sourceCpp()` will compile your source code and load the defined function into R.
```
library(Rcpp)
sourceCpp('sum.cpp')
```
3\.3 Executing the function
---------------------------
You can use your Rcpp functions just like ordinary R functions.
```
> rcpp_sum(1:10)
[1] 55
> sum(1:10)
[1] 55
```
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/040_function.html |
Chapter 4 Embedding Rcpp code in your R code
============================================
You can also write Rcpp code within your R code in three ways, using `sourceCpp()`, `cppFunction()`, and `evalCpp()` respectively.
4\.1 sourceCpp()
----------------
Save the Rcpp code as a string object in R and compile it with `sourceCpp()`.
```
src <-
"#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
double rcpp_sum(NumericVector v){
    double sum = 0;
    for(int i = 0; i < v.length(); ++i){
        sum += v[i];
    }
    return(sum);
}"

sourceCpp(code = src)
rcpp_sum(1:10)
```
4\.2 cppFunction()
------------------
`cppFunction()` offers a handy way to create a single Rcpp function. You can omit `#include <Rcpp.h>` and `using namespace Rcpp;` when you use `cppFunction()`.
```
src <-
"double rcpp_sum(NumericVector v){
    double sum = 0;
    for(int i = 0; i < v.length(); ++i){
        sum += v[i];
    }
    return(sum);
}"

Rcpp::cppFunction(src)
rcpp_sum(1:10)
```
4\.3 evalCpp()
--------------
You can evaluate a single C\+\+ statement by using `evalCpp()`.
```
# Showing maximum value of double.
evalCpp('std::numeric_limits<double>::max()')
```
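For reference (this output is an illustration added here, not part of the original text), the call above returns the largest representable `double`, which R prints roughly as:
```
> evalCpp('std::numeric_limits<double>::max()')
[1] 1.797693e+308
```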
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/050_c++11.html |
Chapter 5 C\+\+11
=================
C\+\+11 is a standard of C\+\+ established in 2011 that introduces new functionality and notation. Compared with the previous standard, many features have been added that make C\+\+ easier even for beginners. This document actively uses these C\+\+11 features.
**Important: The code examples in this document are written with C\+\+11 enabled.**
5\.1 Enabling C\+\+11
---------------------
To enable `C++11`, add the following attribute somewhere in your Rcpp code; this is sufficient when you compile the code with `Rcpp::sourceCpp()`.
```
// [[Rcpp::plugins("cpp11")]]
```
If you want to enable `C++11` in your package, add the line below to the `DESCRIPTION` file of your package.
```
SystemRequirements: C++11
```
5\.2 Recommended C\+\+11 features
---------------------------------
### 5\.2\.1 Initializer list
Initialization of variables using `{}`.
```
// Initialize Vector
// The next three are the same as c (1, 2, 3).
NumericVector v1 = NumericVector::create(1.0, 2.0, 3.0);
NumericVector v2 = {1.0, 2.0, 3.0};
NumericVector v3 {1.0, 2.0, 3.0}; // You can omit "=".
```
### 5\.2\.2 auto
By using the `auto` specifier, the type of a defined variable is inferred by the compiler automatically according to the assigned value.
```
// variable "i" will be int
auto i = 4;
NumericVector v;
// variable "it" will be NumericVector::iterator
auto it = v.begin();
```
### 5\.2\.3 decltype
By using `decltype`, you can declare a variable of the same type as an existing variable.
```
int i;
decltype(i) x; // variable "x" will be int
```
### 5\.2\.4 Range\-based for\-loop
You can write a `for` statement with the same style as R.
```
IntegerVector v {1,2,3};
int sum=0;
for(auto& x : v) {
    sum += x;
}
```
### 5\.2\.5 Lambda expression
You can create a function object by using a lambda expression. A function object is usually created as an unnamed function and passed to another function.
Lambda expressions are written in the form `[](){}`.
In `[]`, you write a list of local variables you want to use in this function object.
* `[]` captures nothing, so no local variables can be accessed from the function object.
* `[=]` copies the values of all local variables into the function object.
* `[&]` enables direct access to all local variables from the function object.
* `[x, &y]` means that the local variable “x” will be copied into the function object, and the local variable “y” can be accessed directly from the function object.
In `()`, you write arguments to be passed to this function object.
In `{}`, you describe processes you want.
**Return type of the lambda expression**
The return type of this function object is automatically set to the type of the returned value described in `{}`. If you want to define return type explicitly, write it like `[]()->int{}`.
**Example**
The following example shows how to use a lambda expression. You can see that some C\+\+ code can be written in almost the same style as R.
*R example*
```
v <- c(1,2,3,4,5)
A <- 2.0
res <-
sapply(v, function(x){A*x})
```
*Rcpp example*
```
// [[Rcpp::export]]
NumericVector rcpp_lambda_1(){
    NumericVector v = {1,2,3,4,5};
    double A = 2.0;
    NumericVector res = sapply(v, [&](double x){ return A*x; });
    return res;
}
```
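Assuming the function above has been compiled with `sourceCpp()`, both versions produce the same result (this console output is shown for illustration and is not in the original text):
```
> rcpp_lambda_1()
[1]  2  4  6  8 10
```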
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/060_printing_massages.html |
Chapter 6 Printing messages
===========================
You can print messages and values of objects on the R console screen by using `Rprintf()` and `Rcout`.
`REprintf()` and `Rcerr` can be used for printing error messages.
6\.1 Rcout, Rcerr
-----------------
The way of using `Rcout` and `Rcerr` is the same as `std::cout` and `std::cerr`: connect messages or variables with `<<` in the order you want to print them. When you give a vector object to `<<`, all the elements of the vector are printed.
```
// [[Rcpp::export]]
void rcpp_rcout(NumericVector v){
    // printing value of vector
    Rcout << "The value of v : " << v << "\n";

    // printing error message
    Rcerr << "Error message\n";
}
```
6\.2 Rprintf(), REprintf()
--------------------------
The way of using `Rprintf()` and `REprintf()` is the same as `std::printf()`: they print a message according to a format string.
```
Rprintf( format, variables)
```
In the `format` string, you can use the following format specifiers for printing the values of variables. When you want to print multiple variables, you have to pass them in the order in which their corresponding specifiers appear in the format string.
Only some of the format specifiers are presented below; please refer to other documentation for details (for example, [cplusplus.com](http://www.cplusplus.com/reference/cstdio/printf/)).
| specifier | explanation |
| --- | --- |
| `%i` | printing signed integer (`int`) |
| `%u` | printing unsigned integer (`unsigned int`) |
| `%f` | printing floating point number (`double`) |
| `%e` | printing floating point number (`double`) in exponential style |
| `%s` | printing C string (`char*`) |
Additionally, `Rprintf()` and `REprintf()` can only print data types that exist in the standard C language, so you cannot pass data types defined by the Rcpp package (such as `NumericVector`) to `Rprintf()` directly. If you want to print the values of the elements of an Rcpp vector using `Rprintf()`, you have to pass each element separately (see below).
```
// [[Rcpp::export]]
void rcpp_rprintf(NumericVector v){
    // printing values of all the elements of Rcpp vector
    for(int i = 0; i < v.length(); ++i){
        Rprintf("the value of v[%i] : %f \n", i, v[i]);
    }
}
```
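For illustration (not part of the original text), calling the function above from R as `rcpp_rprintf(c(1, 2, 3))` prints:
```
the value of v[0] : 1.000000
the value of v[1] : 2.000000
the value of v[2] : 3.000000
```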
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/080_vector.html |
Chapter 8 Vector
================
8\.1 Creating vector object
---------------------------
You can create vector objects in several ways.
```
// Create a Vector object equivalent to
// v <- rep(0, 3)
NumericVector v (3);
// v <- rep(1, 3)
NumericVector v (3,1);
// v <- c(1,2,3)
// C++11 Initializer list
NumericVector v = {1,2,3};
// v <- c(1,2,3)
NumericVector v = NumericVector::create(1,2,3);
// v <- c(x=1, y=2, z=3)
NumericVector v =
NumericVector::create(Named("x",1), Named("y")=2 , _["z"]=3);
```
8\.2 Accessing vector elements
------------------------------
You can access an individual element of a vector object using the `[]` or `()` operator. Both operators accept a NumericVector/IntegerVector (numerical index), CharacterVector (element names), or LogicalVector. The `[]` operator ignores out-of-bound access, while the `()` operator throws an `index_out_of_bounds` exception.
**Note that the index of the Vector object in C\+\+ starts from 0\.**
```
// [[Rcpp::export]]
void rcpp_vector_access(){

    // Creating vector
    NumericVector v {10,20,30,40,50};

    // Setting element names
    v.names() = CharacterVector({"A","B","C","D","E"});

    // Preparing vector for access
    NumericVector   numeric   = {1,3};
    IntegerVector   integer   = {1,3};
    CharacterVector character = {"B","D"};
    LogicalVector   logical   = {false, true, false, true, false};

    // Getting values of vector elements
    double x1 = v[0];
    double x2 = v["A"];
    NumericVector res1 = v[numeric];
    NumericVector res2 = v[integer];
    NumericVector res3 = v[character];
    NumericVector res4 = v[logical];

    // Assigning values to vector elements
    v[0] = 100;
    v["A"] = 100;
    NumericVector v2 {100,200};
    v[numeric]   = v2;
    v[integer]   = v2;
    v[character] = v2;
    v[logical]   = v2;
}
```
8\.3 Member functions
---------------------
Member functions (also called methods) are functions that are attached to an individual object. You can call a member function `f()` of an object `v` in the form `v.f()`.
```
NumericVector v = {1,2,3,4,5};
// Calling member function
int n = v.length(); // 5
```
The `Vector` object in Rcpp has member functions listed below.
### 8\.3\.1 `length()`, `size()`
Returns the number of elements of this vector object.
### 8\.3\.2 `names()`
Returns the element names of this vector object as a `CharacterVector`.
### 8\.3\.3 `offset( name )`, `findName( name )`
Returns the numerical index of the element specified by the character string `name`.
### 8\.3\.4 `offset( i )`
Returns the numerical index of the element specified by numerical index `i`, after bounds checking to ensure `i` is valid.
### 8\.3\.5 `fill( x )`
Fills all the elements of this vector object with the scalar value `x`.
### 8\.3\.6 `sort()`
Returns a copy of this vector sorted in ascending order.
### 8\.3\.7 `assign( first_it, last_it )`
Assigns the values in the range specified by the iterators `first_it` and `last_it` to this vector object.
### 8\.3\.8 `push_back( x )`
Appends the scalar value `x` to the end of this vector object.
### 8\.3\.9 `push_back( x, name )`
Appends the scalar value `x` to the end of this vector object and sets the name of that element to the character string `name`.
### 8\.3\.10 `push_front( x )`
Prepends the scalar value `x` to the front of this vector object.
### 8\.3\.11 `push_front( x, name )`
Prepends the scalar value `x` to the front of this vector object and sets the name of that element to the character string `name`.
### 8\.3\.12 `begin()`
Returns an iterator pointing to the first element of this vector. See [the Chapter 28 Iterator](280_iterator.html).
### 8\.3\.13 `end()`
Returns an iterator pointing to the end of the vector (**one past the last element of this vector**). See [the Chapter 28 Iterator](280_iterator.html).
### 8\.3\.14 `cbegin()`
Returns a const iterator pointing to the first element of the vector. See [the Chapter 28 Iterator](280_iterator.html).
### 8\.3\.15 `cend()`
Returns a const iterator pointing to the end of the vector (**one past the last element of this vector**). See [the Chapter 28 Iterator](280_iterator.html).
### 8\.3\.16 `insert( i, x )`
Inserts the scalar value `x` at the position indicated by numerical index `i`, and returns an iterator pointing to the inserted element.
### 8\.3\.17 `insert( it, x )`
Inserts the scalar value `x` at the position indicated by the iterator `it`, and returns an iterator pointing to the inserted element.
### 8\.3\.18 `erase(i)`
Erases the element at the position indicated by numerical index `i`, and returns an iterator pointing to the element just behind the erased element.
### 8\.3\.19 `erase(it)`
Erases the element at the position indicated by the iterator `it`, and returns an iterator pointing to the element just behind the erased element.
### 8\.3\.20 `erase( first_i, last_i )`
Erases the elements from the position indicated by numerical index `first_i` to `last_i - 1`, and returns an iterator pointing to the element just behind the erased elements.
### 8\.3\.21 `erase( first_it, last_it )`
Erases the elements from the position indicated by the iterator `first_it` to `last_it - 1`, and returns an iterator pointing to the element just behind the erased elements.
### 8\.3\.22 `containsElementNamed(name)`
Returns `true` if this vector contains an element with the name specified by character string `name`.
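The short sketch below is not from the original text (the function name is just for illustration); it exercises a few of the member functions listed above:
```
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector rcpp_vector_members(){
    NumericVector v = NumericVector::create(3, 1, 2);
    v = v.sort();          // sort() returns a sorted copy: 1 2 3
    v.push_back(4, "d");   // append 4 and name that element "d"
    v.push_front(0);       // prepend 0
    return v;              // 0 1 2 3 4
}
```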
8\.4 Static member functions
----------------------------
A static member function is a function that is attached to the class itself rather than to an object created from that class.
You can call static member functions in the form such as `NumericVector::create()`.
### 8\.4\.1 `get_na()`
Returns the `NA` value of this `Vector` class. See [chapter 24 for NA](240_na_nan_inf.html).
### 8\.4\.2 `is_na(x)`
Returns `true` if a vector element specified by `x` is `NA`.
### 8\.4\.3 `create( x1, x2, ...)`
Creates a `Vector` object containing the elements specified by the scalar values `x1`, `x2`, .... The maximum number of arguments is 20\.
### 8\.4\.4 `import( first_it , last_it )`
Creates a `Vector` object filled with the range of data specified by the iterator from `first_it` to `last_it - 1`.
### 8\.4\.5 `import_transform( first_it, last_it, func)`
Creates a `Vector` object filled with the range of data specified by the iterator from `first_it` to `last_it - 1` that is transformed by function specified by `func`.
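As a sketch of the static member functions above (not part of the original text; it assumes C\+\+11 is enabled so that the lambda compiles, and the function name is illustrative), `import()` and `import_transform()` can copy data from a standard C\+\+ container into an Rcpp vector:
```
// [[Rcpp::plugins("cpp11")]]
#include <Rcpp.h>
#include <vector>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector rcpp_vector_import(){
    std::vector<double> src = {1.0, 2.0, 3.0};

    // Copy the range [src.begin(), src.end()) into a new NumericVector
    NumericVector v = NumericVector::import(src.begin(), src.end());

    // Copy the same range, multiplying each element by 10 along the way
    NumericVector w = NumericVector::import_transform(
        src.begin(), src.end(), [](double x){ return x * 10.0; });

    return w; // 10 20 30 (v holds the untransformed copy 1 2 3)
}
```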
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/100_matrix.html |
Chapter 9 Matrix
================
9\.1 Creating Matrix object
---------------------------
`Matrix` objects can be created in several ways.
```
// Create a Matrix object equivalent to
// m <- matrix(0, nrow=2, ncol=2)
NumericMatrix m1( 2 );
// m <- matrix(0, nrow=2, ncol=3)
NumericMatrix m2( 2 , 3 );
// m <- matrix(v, nrow=2, ncol=3)
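// (v here is assumed to be an existing NumericVector with at least 6 elements)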
NumericMatrix m3( 2 , 3 , v.begin() );
```
In addition, a matrix object in R is actually a vector whose numbers of rows and columns are stored in the attribute `dim`. Thus, if you create a vector with the attribute `dim` set in Rcpp and return it to R, it will be treated as a matrix.
```
// [[Rcpp::export]]
NumericVector rcpp_matrix(){
    // Creating a vector object
    NumericVector v = {1,2,3,4};

    // Set the number of rows and columns to attribute dim of the vector object.
    v.attr("dim") = Dimension(2, 2);

    // Return the vector to R
    return v;
}
```
Execution result
```
> rcpp_matrix()
[,1] [,2]
[1,] 1 3
[2,] 2 4
```
However, even if you set a value to the attribute `dim` of a `Vector` object, the type of the object remains a `Vector` in Rcpp code. Thus, if you want to convert it to a `Matrix` type in Rcpp, you need to use the `as<T>()` function.
```
// Set number of rows and columns to attribute dim
v.attr("dim") = Dimension(2, 2);
// Converting to Rcpp Matrix type
NumericMatrix m = as<NumericMatrix>(v);
```
9\.2 Accessing to Matrix elements
---------------------------------
By using the `()` operator, you can read and assign the values of elements of a `Matrix` object by specifying its row and column numbers. As in the case of vectors, row and column numbers in a `Matrix` start from 0\. If you want to access a specific row or column, use the symbol `_`.
You can also use the `[]` operator to access elements as if the matrix were a single vector formed by concatenating its columns.
```
// Creating a 5x5 numerical matrix
NumericMatrix m( 5, 5 );
// Retrieving the element of row 0 and column 2
double x = m( 0 , 2 );
// Copying the value of row 0 to the vector v
NumericVector v = m( 0 , _ );
// Copying the value of column 2 to the vector v
NumericVector v = m( _ , 2 );
// Copying the row (0 to 1) and column (2 to 3) to the matrix m2
NumericMatrix m2 = m( Range(0,1) , Range(2,3) );
// Accessing matrix element as vector
m[5]; // This points to the same element as m(0,1)
```
### 9\.2\.1 Accessing as reference to row, column and sub matrix
Rcpp also provides types that hold “references” to specific parts of a matrix.
```
NumericMatrix::Column col = m( _ , 1); // Reference to the column 1
NumericMatrix::Row row = m( 1 , _ ); // Reference to the row 1
NumericMatrix::Sub sub = m( Range(0,1) , Range(2,3) ); // Reference to sub matrix
```
Assigning a value to a “reference” object of a matrix is equivalent to assigning the value to its original matrix. For example, assigning a value to `col` assigns the value to column 1 of `m`.
```
// Reference to the column 1
NumericMatrix::Column col = m( _ , 1);
// The value of the column 1 of matrix m will be doubled
col = 2 * col;
// Synonymous with the above example
m( _ , 1) = 2 * m( _ , 1 );
```
9\.3 Member functions
---------------------
Since `Matrix` is actually a `Vector`, `Matrix` basically has the same member functions as `Vector`. Thus, only the member functions unique to `Matrix` are presented below.
### 9\.3\.1 nrow() rows()
Returns the number of rows.
### 9\.3\.2 ncol() cols()
Returns the number of columns.
### 9\.3\.3 row( i )
Return a reference `Vector::Row` to the `i`th row.
### 9\.3\.4 column( i )
Return a reference `Vector::Column` to the `i`th column.
### 9\.3\.5 fill\_diag( x )
Fill diagonal elements with scalar value `x`.
### 9\.3\.6 offset( i, j )
Returns the numerical index in the original vector of the matrix corresponding to the element of row `i` and column `j`.
9\.4 Static member functions
----------------------------
`Matrix` basically has the same static member function as `Vector`. The static member functions unique to `Matrix` are shown below.
### 9\.4\.1 Matrix::diag( size, x )
Returns a diagonal matrix whose numbers of rows and columns are equal to `size` and whose diagonal elements have the value `x`.
9\.5 Other functions related to Matrix
--------------------------------------
This section shows functions relating to the `Matrix`.
### 9\.5\.1 rownames( m )
Get and set the row name of matrix m.
```
CharacterVector ch = rownames(m);
rownames(m) = ch;
```
### 9\.5\.2 colnames( m )
Get and set the column name of matrix m.
```
CharacterVector ch = colnames(m);
colnames(m) = ch;
```
### 9\.5\.3 transpose( m )
Returns the transposed matrix of matrix m.
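The following sketch (not in the original text; the function name is illustrative) combines a few of the matrix functions described above:
```
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericMatrix rcpp_matrix_utils(){
    NumericMatrix m(3, 3);        // 3 x 3 matrix filled with 0
    m.fill_diag(1.0);             // put 1 on the diagonal
    rownames(m) = CharacterVector::create("r1", "r2", "r3");
    colnames(m) = CharacterVector::create("c1", "c2", "c3");
    return transpose(m);          // transposed copy of m
}
```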
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/110_calculation.html |
Chapter 10 Vector operations
============================
10\.1 Arithmetic operations
---------------------------
By using the `+ - * /` operators you can perform elementwise arithmetic operations between vectors of the same length, or between a vector and a scalar.
```
NumericVector x ;
NumericVector y ;
// Vector and vector operation
NumericVector res = x + y ;
NumericVector res = x - y ;
NumericVector res = x * y ;
NumericVector res = x / y ;
// Vector and scalar operation
NumericVector res = x + 2.0 ;
NumericVector res = 2.0 - x;
NumericVector res = y * 2.0 ;
NumericVector res = 2.0 / y;
// expression and expression operation
NumericVector res = x * y + y / 2.0 ;
NumericVector res = x * ( y - 2.0 ) ;
NumericVector res = x / ( y * y ) ;
```
The `-` operator inverts the sign.
```
NumericVector res = -x ;
```
10\.2 Comparison operations
---------------------------
Comparing vectors with the `==` `!=` `<` `>` `>=` `<=` operators produces logical vectors. You can also access vector elements using logical vectors.
```
NumericVector x ;
NumericVector y ;
// Comparison of vector and vector
LogicalVector res = x < y ;
LogicalVector res = x > y ;
LogicalVector res = x <= y ;
LogicalVector res = x >= y ;
LogicalVector res = x == y ;
LogicalVector res = x != y ;
// Comparison of vector and scalar
LogicalVector res = x < 2 ;
LogicalVector res = 2 > x;
LogicalVector res = y <= 2 ;
LogicalVector res = 2 != y;
// Comparison of expression and expression
LogicalVector res = ( x + y ) < ( x*x ) ;
LogicalVector res = ( x + y ) >= ( x*x ) ;
LogicalVector res = ( x + y ) == ( x*x ) ;
```
The `!` operator negates the logical value.
```
LogicalVector res = ! ( x < y );
```
Accessing the elements of a vector using a logical vector:
```
NumericVector res = x[x < 2];
```
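A short sketch combining the operations above (not part of the original text; the function name is illustrative):
```
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector rcpp_vector_ops(NumericVector x, NumericVector y){
    // Elementwise arithmetic between two vectors, then with a scalar
    NumericVector res = x * y + 2.0;

    // Keep only the elements satisfying a comparison
    NumericVector out = res[res >= 10.0];
    return out;
}
```
Called from R, for example, `rcpp_vector_ops(c(1,2,3), c(4,5,6))` computes `6 12 20` and returns the elements that are at least 10, i.e. `12 20`.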
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/120_logical.html |
Chapter 11 Logical operations
=============================
11\.1 LogicalVector
-------------------
### 11\.1\.1 Data type of LogicalVector elements
Since the boolean type in C\+\+ is `bool`, you may think that the element type of `LogicalVector` is also `bool`, but it is actually `int`. This is because `bool` can only represent `true` or `false`, whereas elements of a logical vector in R have three possible values: `TRUE`, `FALSE`, and `NA`.
In Rcpp, `TRUE` is represented by 1, `FALSE` by 0, and `NA` by `NA_LOGICAL` (the minimum value of `int`: \-2147483648\).
| R | Rcpp | int | bool |
| --- | --- | --- | --- |
| TRUE | TRUE | 1 (Values other than 0 and \-2147483648\) | true |
| FALSE | FALSE | 0 | false |
| NA | NA\_LOGICAL | \-2147483648 | true |
11\.2 Logical operations
------------------------
Use the operators `&` (logical AND), `|` (logical OR), and `!` (logical negation) for elementwise logical operations on a `LogicalVector`.
```
LogicalVector v1 = {1,1,0,0};
LogicalVector v2 = {1,0,1,0};
LogicalVector res1 = v1 & v2;
LogicalVector res2 = v1 | v2;
LogicalVector res3 = !(v1 | v2);
Rcout << res1 << "\n"; // 1 0 0 0
Rcout << res2 << "\n"; // 1 1 1 0
Rcout << res3 << "\n"; // 0 0 0 1
```
11\.3 Function that receives LogicalVector
------------------------------------------
Examples of functions that receive `LogicalVector` are `all()`, `any()` and `ifelse()`.
### 11\.3\.1 all(), any()
For `LogicalVector` v, `all (v)`returns`TRUE` when all elements of v are `TRUE`, and `any(v)` returns `TRUE` if any of v’s elements are `TRUE`.
You can not use the return value of the `all()` and `any()` as the conditional expression of the `if` statement. This is because the return type of the `all()` and `any()` is not `bool` but `SingleLogicalResult`. To use the return value of the `all()` and `any()` as a conditional expression of an `if` statement, use the function `is_true()`, `is_false()` and `is_na()`. These functions convert `SingleLogicalResult` to `bool`.
The code example below shows how to use the return values of `all()` and `any()` as the condition of an `if` statement. In this example, every `if` condition evaluates to `true`, and the return values of `all()` and `any()` are displayed.
```
// [[Rcpp::export]]
List rcpp_logical_03(){
LogicalVector v1 = LogicalVector::create(1,1,1,NA_LOGICAL);
LogicalVector v2 = LogicalVector::create(0,1,0,NA_LOGICAL);
// Behavior of all (), any () for LogicalVector including NA is the same as R
LogicalVector lv1 = all( v1 ); // NA
LogicalVector lv2 = all( v2 ); // FALSE
LogicalVector lv3 = any( v2 ); // TRUE
// In case assigning to bool
bool b1 = is_true ( all(v1) ); // false
bool b2 = is_false( all(v1) ); // false
bool b3 = is_na ( all(v1) ); // true
// In case used in conditional expression of if statement
if(is_na(all( v1 ))) { // OK
Rcout << "all( v1 ) is NA\n";
}
return List::create(lv1, lv2, lv3, b1, b2, b3);
}
```
### 11\.3\.2 ifelse()
`ifelse(v, x1, x2)` receives the logical vector v and returns the corresponding element of x1 where the element of v is `TRUE` and the corresponding element of x2 where it is `FALSE`. Although x1 and x2 can be vectors or scalars, in the case of vectors the lengths of x1 and x2 must match the length of v.
```
NumericVector v1;
NumericVector v2;
//Number of elements of vector
int n = v1.length();
// In case, both x1 and x2 are scalar
IntegerVector res1 = ifelse( v1>v2, 1, 0);
NumericVector res2 = ifelse( v1>v2, 1.0, 0.0);
//CharacterVector res3 = ifelse( v1>v2, "T", "F"); // not supported
// Since ifelse() does not work with a scalar character string,
// in order to obtain results equivalent to R,
// we need to use a string vector whose values of elements are all the same.
CharacterVector chr_v1 = rep(CharacterVector("T"), n);
CharacterVector chr_v2 = rep(CharacterVector("F"), n);
CharacterVector res3 = ifelse( v1>v2, chr_v1, chr_v2);
// In case, x1 and x2 are vector and scalar
IntegerVector int_v1, int_v2;
NumericVector num_v1, num_v2;
IntegerVector res4 = ifelse( v1>v2, int_v1, 0);
NumericVector res5 = ifelse( v1>v2, num_v1, 0.0);
CharacterVector res6 = ifelse( v1>v2, chr_v1, Rf_mkChar("F")); // Note
// In case, x1 and x2 are vector and vector
IntegerVector res7 = ifelse( v1>v2, int_v1, int_v2);
NumericVector res8 = ifelse( v1>v2, num_v1, num_v2);
CharacterVector res9 = ifelse( v1>v2, chr_v1, chr_v2);
```
Note: `Rf_mkChar()` is a function that converts a C string (`char*`) to `CHARSXP` (the type of the elements of a `CharacterVector`).
11\.4 Evaluation of elements of LogicalVector
---------------------------------------------
The value of an element of a `LogicalVector` should not be used directly as the conditional expression of an `if` statement. The condition of a C++ `if` statement is evaluated as a `bool`, and `bool` treats every value other than 0 as `true`, so the `NA` of a `LogicalVector` (`NA_LOGICAL`) is evaluated as `true`.
See the following code example for how to evaluate the value of an element of `LogicalVector` with an `if` statement.
```
// [[Rcpp::export]]
LogicalVector rcpp_logical(){
// Create an integer vector containing NA
IntegerVector x = {1,2,3,4,NA_INTEGER};
// The result of the comparison operation becomes LogicalVector
LogicalVector v = (x >= 3);
// If you use the element of LogicalVector directly in the "if" statement
// NA_LOGICAL will be evaluated as TRUE
for(int i=0; i<v.size();++i) {
if(v[i]) Rprintf("v[%i] is evaluated as true.\n",i);
else Rprintf("v[%i] is evaluated as false.\n",i);
}
// Evaluate the elements of LogicalVector
for(int i=0; i<v.size();++i) {
if(v[i]==TRUE) Rprintf("v[%i] is TRUE.\n",i);
else if (v[i]==FALSE) Rprintf("v[%i] is FALSE.\n",i);
else if (v[i]==NA_LOGICAL) Rprintf("v[%i] is NA.\n",i);
else Rcout << "v[" << i << "] is not 1\n";
}
// Displays the value of TRUE, FALSE and NA_LOGICAL
Rcout << "TRUE " << TRUE << "\n";
Rcout << "FALSE " << FALSE << "\n";
Rcout << "NA_LOGICAL " << NA_LOGICAL << "\n";
return v;
}
```
Execution result
```
> rcpp_logical()
v[0] is evaluated as false.
v[1] is evaluated as false.
v[2] is evaluated as true.
v[3] is evaluated as true.
v[4] is evaluated as true.
v[0] is FALSE.
v[1] is FALSE.
v[2] is TRUE.
v[3] is TRUE.
v[4] is NA.
TRUE 1
FALSE 0
NA_LOGICAL -2147483648
[1] FALSE FALSE TRUE TRUE NA
```
Chapter 12 DataFrame
====================
This chapter explains how to create a `DataFrame` object, how to access its elements, and its member functions. In Rcpp, `DataFrame` is implemented as a kind of vector. In other words, `Vector` is a vector whose elements are scalar values, while `DataFrame` is a vector whose elements are `Vector`s of the same length. Therefore, `Vector` and `DataFrame` share many ways of creating objects and accessing elements, as well as many member functions.
12\.1 Creating a DataFrame object
---------------------------------
`DataFrame::create()` is used to create a `DataFrame` object. Use `Named()` or `_[]` if you want to specify column names when creating the `DataFrame` object.
```
// Creating DataFrame df from Vector v1, v2
DataFrame df = DataFrame::create(v1, v2);
// When giving names to columns
DataFrame df = DataFrame::create( Named("V1") = v1 , _["V2"] = v2 );
```
When you create a `DataFrame` with `DataFrame::create()`, the values of the original `Vector` are not duplicated in the columns of the `DataFrame`; instead, the columns become “references” to the original `Vector`. Therefore, changing the values of the original `Vector` changes the values of the columns. To avoid this, use the `clone()` function to duplicate the values of the `Vector` when creating a `DataFrame` column.
To see the difference between using the `clone()` function and not using it, see the code example below. In the code example, we create `DataFrame` df from `Vector` v: column V1 is a reference to v, while column V2 duplicates the values of v via the `clone()` function. After that, when v is changed, the values of column V1 change, but V2 is not affected.
```
// [[Rcpp::export]]
DataFrame rcpp_df(){
// Creating vector v
NumericVector v = {1,2};
// Creating DataFrame df
DataFrame df = DataFrame::create( Named("V1") = v, // simple assign
Named("V2") = clone(v)); // using clone()
// Changing vector v
v = v * 2;
return df;
}
```
Execution result
```
> rcpp_df()
V1 V2
1 2 1
2 4 2
```
12\.2 Accessing DataFrame elements
----------------------------------
When accessing a specific column of a `DataFrame`, the column is temporarily assigned to a `Vector` object and accessed via that object. As with `Vector`, a `DataFrame` column can be specified by a numeric vector (column numbers), a character vector (column names), or a logical vector.
```
NumericVector v1 = df[0];
NumericVector v2 = df["V2"];
```
As with `DataFrame` creation, assigning a `DataFrame` column to a `Vector` in the above way does not copy the column values into the `Vector` object; instead, the object becomes a “reference” to the column. Therefore, when you change the values of the `Vector` object, the contents of the column also change.
If you want to create a `Vector` by copying the values of the column, use the `clone()` function so that the values of the original `DataFrame` column are not changed.
```
NumericVector v1 = df[0]; // v1 becomes "reference" to the 0th column of df
v1 = v1 * 2; // Changing the value of v1 also changes the value of df[0]
NumericVector v2 = clone(df[0]); // Duplicate the value of the element of df[0] to v2
v2 = v2*2; // Changing v2 does not change the value of df [0]
```
12\.3 Member functions
----------------------
In Rcpp, `DataFrame` is implemented as a kind of vector. In other words, `Vector` is a vector whose elements are scalar values, and `DataFrame` is a vector whose elements are `Vector`s. Therefore, `DataFrame` has many member functions in common with `Vector`.
### 12\.3\.1 length() size()
Returns the number of columns.
### 12\.3\.2 nrows()
Returns the number of rows.
### 12\.3\.3 names()
Returns the column name as a CharacterVector.
### 12\.3\.4 offset(name) findName(name)
Returns the numerical index of the column with the name specified by the string “name”.
### 12\.3\.5 fill(v)
Fills all the columns of this `DataFrame` with `Vector` v.
### 12\.3\.6 assign( first\_it, last\_it)
Assign columns in the range specified by the iterators first\_it and last\_it to this `DataFrame`.
### 12\.3\.7 push\_back(v)
Adds `Vector` v to the end of this `DataFrame`.
### 12\.3\.8 push\_back( v, name )
Appends a `Vector` v to the end of this `DataFrame`. The name of the added column is given by the string “name”.
### 12\.3\.9 push\_front(x)
Appends a `Vector` x at the beginning of this `DataFrame`.
### 12\.3\.10 push\_front( x, name )
Appends a `Vector` x at the beginning of this `DataFrame`. The name of the added column is given by the string “name”.
### 12\.3\.11 begin()
Returns an iterator pointing to the first column of this `DataFrame`.
### 12\.3\.12 end()
Return an iterator pointing to the end of this `DataFrame`.
### 12\.3\.13 insert( it, v )
Add `Vector` v to this `DataFrame` at the position pointed by the iterator `it` and return an iterator to that element.
### 12\.3\.14 erase(i)
Delete the `i`th column of this `DataFrame` and return an iterator to the column just after erased column.
### 12\.3\.15 erase(it)
Deletes the column specified by the iterator `it` and returns an iterator to the column just after erased column.
### 12\.3\.16 erase(first\_i, last\_i)
Deletes the `first_i`th to `last_i - 1`th columns and returns an iterator to the column just after erased column.
### 12\.3\.17 erase(first\_it, last\_it)
Deletes the range of columns from those specified by the iterator `first_it` to those specified by `last_it - 1` and returns an iterator to the column just after the erased columns.
### 12\.3\.18 containsElementNamed(name)
Returns true if this `DataFrame` has a column with the name specified by the string `name`.
### 12\.3\.19 inherits(str)
Returns true if the attribute “class” of this object contains the string `str`.
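As a quick illustration of some of these member functions, here is a minimal sketch (the function name `rcpp_df_info` and the column name "V1" are assumptions for this example):
```
// [[Rcpp::export]]
List rcpp_df_info(DataFrame df) {
    int ncol = df.length();                       // number of columns
    int nrow = df.nrows();                        // number of rows
    CharacterVector nm = df.names();              // column names
    bool has_v1 = df.containsElementNamed("V1");  // is there a column named "V1"?
    return List::create(Named("ncol") = ncol,
                        Named("nrow") = nrow,
                        Named("names") = nm,
                        Named("has_V1") = has_v1);
}
```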
Chapter 13 List
===============
This chapter explains how to create a `List` object, how to access its elements, and its member functions. In Rcpp, `List` is implemented as a kind of vector. In other words, `Vector` is a vector whose elements are scalar values, while `List` is a vector whose elements can be of any data type. Therefore, `Vector` and `List` generally have common member functions.
Most of what is described in the [DataFrame](140_dataframe.html) chapter also applies to `List` if you replace `DataFrame` with `List`, so please refer to that chapter for details.
13\.1 Creating List object
--------------------------
To create a `List` object we use the `List::create()` function. Also, to specify the element name when creating `List`, use `Named()` function or `_[]`.
```
// Create list L from vector v1, v2
List L = List::create(v1, v2);
// When giving names to elements
List L = List::create(Named("name1") = v1 , _["name2"] = v2);
```
13\.2 Accessing List elements
-----------------------------
When accessing a specific element of a `List`, we assign it to another object and access it via that object.
The elements of `List` can be specified by numerical index, element names and logical vector.
```
NumericVector v1 = L[0];
NumericVector v2 = L["V1"];
```
13\.3 Member functions
----------------------
`List` has the same member functions as `Vector`.
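For example, the following sketch (an illustrative function, not taken from the original text) creates a `List`, accesses an element by name, and appends a new named element with `push_back()`:
```
// [[Rcpp::export]]
List rcpp_list_demo() {
    NumericVector v1 = NumericVector::create(1.0, 2.0, 3.0);
    CharacterVector v2 = CharacterVector::create("a", "b");
    // Elements of a List may have different types and lengths
    List L = List::create(Named("num") = v1, Named("chr") = v2);
    // Access an element by name
    NumericVector x = L["num"];
    // Append a new element and give it a name
    NumericVector x2 = x * 2.0;
    L.push_back(x2, "num2");
    return L;
}
```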
Chapter 14 S3・S4 class
======================
14\.1 S3 class
--------------
An S3 object is actually a list whose `class` attribute is set to its own class name. Therefore, for creating S3 objects and accessing their elements, see the sections on `List` and Attributes.
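As a minimal sketch (the class name "my_class" is only an illustrative assumption), an S3 object can be built in Rcpp by creating a `List` and setting its `class` attribute:
```
// [[Rcpp::export]]
List rcpp_make_s3() {
    List obj = List::create(Named("x") = NumericVector::create(1, 2, 3),
                            Named("label") = "example");
    // Setting the class attribute turns the plain list into an S3 object
    obj.attr("class") = "my_class";
    return obj;
}
```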
In the code example below, as an example of handling S3 objects, we show a function that receives the return value of the function `lm()` and computes the RMSE (Root Mean Square Error) as a measure of the model's prediction accuracy on the training data.
```
//Receiving lm() model object and calculate RMSE
// [[Rcpp::export]]
double rcpp_rmse(List lm_model) {
// Since S3 is a list, data type of the argument is specified as List.
// If the object given to this function is not an lm() model object,
// it outputs an error message and stops execution.
if (! lm_model.inherits("lm")) stop("Input must be a lm() model object.");
// Extracting residuals (i.e. actual - prediction) from the S3 object
NumericVector resid = lm_model["residuals"];
// Number of elements of the residual vector
R_xlen_t n = resid.length();
// The sum of squares of the residual vector
double rmse(0.0);
for(double& r : resid){
rmse += r*r;
}
// Divide the residual sum of squares by the number of elements and take the square root
return(sqrt((1.0/n)*rmse));
}
```
As an example of execution, use R’s sample data `mtcars` to calculate the RMSE of the model linearly regressing the fuel efficiency of the car.
```
> mod <- lm(mpg ~ ., data = mtcars)
> rcpp_rmse(mod)
[1] 2.146905
```
14\.2 S4 class
--------------
### 14\.2\.1 Accessing to slot
To access the slots of an S4 class object, use the `slot()` member function. Use the `hasSlot()` member function to check whether the object has a slot with a specific name.
```
x.slot("slot_name");
x.hasSlot("slot_name");
```
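A minimal sketch of reading and overwriting a slot, assuming an S4 class with a character slot named `name` (such as the Person class defined in the next subsection) has already been defined in R; the function name `rcpp_update_name` is an assumption:
```
// [[Rcpp::export]]
S4 rcpp_update_name(S4 person) {
    if (person.hasSlot("name")) {
        // Read the current value of the slot
        CharacterVector nm = person.slot("name");
        String current = nm[0];
        Rcout << "current name: " << current.get_cstring() << "\n";
        // Overwrite the slot with a new value
        person.slot("name") = "Karl Pearson";
    }
    return person;
}
```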
### 14\.2\.2 Creating a new S4 class object
Rcpp alone cannot define a new S4 class, but you can create an object of an S4 class that has been defined in R.
The following code example shows how to define a new S4 class Person in R, then create an object of the Person class with Rcpp.
We first define the S4 class “Person”. This class holds the name and birthday of a person in the slots `name` and `birth`.
```
# R code
# Defining S4 class Person in R
setClass (
# Class name
"Person",
# Defining slot type
representation (
name = "character",
birth = "Date"
),
# Initializing slots
prototype = list(
name = as.character(NULL),
birth = as.Date(as.character(NULL))
)
)
# Creating an object of Person class in R
person_01 <- new("Person",
name = "Ronald Fisher",
birth = as.Date("1890-02-17"))
```
The following code example creates an object of the Person class in Rcpp and sets values for its slots.
```
// [[Rcpp::export]]
S4 rcpp_s4(){
// Creating an object of Person class
S4 x("Person");
// Setting values to the slots
x.slot("name") = "Sewall Wright";
x.slot("birth") = Date("1889-12-21");
return(x);
}
```
Execution result
```
> rcpp_s4()
An object of class "Person"
Slot "name":
[1] "Sewall Wright"
Slot "birth":
[1] "1889-12-21"
```
Chapter 15 String
=================
`String` is a scalar type corresponding to the element of `CharacterVector`. `String` can also handle NA values (`NA_STRING`) which are not supported by the C character string `char*` and the C\+\+ string `std::string`.
15\.1 Creating String object
----------------------------
There are roughly three ways to create a `String` object, as follows. The first is to create it from a C/C++ string, the second is to create it from another `String` object, and the third is to create it from a single element of a `CharacterVector`.
```
// Creating from C string
String s("X"); // "X"
// Creating from Rcpp String
String s(str);
// Creating from a single element of a CharacterVector object
String s(char_vec[0]);
```
15\.2 Operators
---------------
The `+=` operator is defined for the `String` type. It lets you append another string to the end of this string. (Note that the `+` operator is not defined.)
```
// Creating String object
String s("A");
// Combining a string
s += "B";
Rcout << s << "\n"; // "AB"
```
15\.3 Member functions
----------------------
Note: The member functions `replace_first()`, `replace_last()`, and `replace_all()` do not return the replaced character string; instead, they rewrite the value of this object in place.
### 15\.3\.1 replace\_first( str, new\_str )
Replace first substring that matches the string `str` with the string `new_str`.
### 15\.3\.2 replace\_last( str, new\_str )
Replace last substring that matches the string `str` with the string `new_str`.
### 15\.3\.3 replace\_all( str, new\_str )
Replace all substrings that matches the string `str` with the string `new_str`.
### 15\.3\.4 push\_back(str)
Appends the string `str` to the end of this `String` object (same as the `+=` operator).
### 15\.3\.5 push\_front(str)
Prepends the string `str` to the beginning of this `String` object.
### 15\.3\.6 set\_na()
Set NA value to this `String` object.
### 15\.3\.7 get\_cstring()
Converts the string of this `String` object into a C string constant (`const char*`) and returns it.
### 15\.3\.8 get\_encoding()
Returns the character encoding. The encoding is represented by [`cetype_t`](https://github.com/wch/r-source/blob/bf0a0a9d12f2ce5d66673dc32cd253524f3270bf/src/include/Rinternals.h#L928-L935).
### 15\.3\.9 set\_encoding(enc)
Set the character encoding specified by [`cetype_t`](https://github.com/wch/r-source/blob/bf0a0a9d12f2ce5d66673dc32cd253524f3270bf/src/include/Rinternals.h#L928-L935).
### 15\.3\.10 Code example
```
// [[Rcpp::export]]
void rcpp_replace(){
// Replace only at the first occurrence of "ab"
String s("abcdabcd");
s.replace_first("ab", "AB");
Rcout << s.get_cstring() << "\n"; // ABcdabcd
// Replace only at the last occurrence of "ab"
s="abcdabcd";
s.replace_last("ab", "AB");
Rcout << s.get_cstring() << "\n"; // abcdABcd
// Replace every occurrence of "ab"
s="abcdabcd";
s.replace_all("ab", "AB");
Rcout << s.get_cstring() << "\n"; // ABcdABcd
}
```
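Beyond the replace functions shown above, here is a minimal sketch of a few of the other member functions (`push_back()`, `push_front()`, and `set_na()`); the function name `rcpp_string_demo` is an assumption:
```
// [[Rcpp::export]]
CharacterVector rcpp_string_demo() {
    String s("cd");
    s.push_back("ef");    // "cdef"
    s.push_front("ab");   // "abcdef"
    String t("x");
    t.set_na();           // t becomes NA
    return CharacterVector::create(s, t); // "abcdef" NA
}
```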
Chapter 16 Date and DateVector
==============================
`Date` is a scalar type corresponding to an element of `DateVector` (but see the [`DateVector` subsetting](180_date.html#datevector-subsetting) section below for a potential pitfall).
16\.1 Creating Date objects
---------------------------
```
Date d; //"1970-01-01"
Date d(1); //"1970-01-01" + 1 day
Date d(1.1); //"1970-01-01" + ceil(1.1) day
Date( "2000-01-01", "%Y-%m-%d"); //default format is "%Y-%m-%d"
Date( 1, 2, 2000); // 2000-01-02 Date(mon, day, year)
Date( 2000, 1, 2); // 2000-01-02 Date(year, mon, day)
```
16\.2 Operators
---------------
`Date` has operators `+`, `-`, `<`, `>`, `>=`, `<=`, `==`, `!=`. By using these operators, you can perform addition of days (`+`), difference calculation of date (`-`), and comparison of dates (`<`, `<=`, `>`, `>=`, `==`, `!=`) .
```
// [[Rcpp::export]]
DateVector rcpp_date1(){
// Creating Date objects
Date d1("2000-01-01");
Date d2("2000-02-01");
int i = d2 - d1; // difference of dates
bool b = d2 > d1; // comparison of dates
Rcout << i << "\n"; // 31
Rcout << b << "\n"; // 1
DateVector date(1);
date[0] = d1 + 1; // adding 1 day to d1
return date; // 2000-01-02
}
```
**Note:** As mentioned above, the `-` operator is used for calculating the difference between dates, not subtraction. If you want to subtract a number of days from a `Date` you can add a negative number.
```
// [[Rcpp::export]]
Date subtract_day() {
Date d1("2000-01-01");
// This causes a compiler error
// Date d2 = d1 - 1;
// This subtracts a day
Date d2 = d1 + -1;
return d2; // 1999-12-31
}
```
16\.3 Member functions
----------------------
### 16\.3\.1 format()
Returns the date as a `std::string` using the same specification as base R (see the documentation for [strptime](https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/strptime) or run `help(format.Date)`). The default format is “YYYY\-MM\-DD”.
### 16\.3\.2 getDay()
Returns the day of the date.
### 16\.3\.3 getMonth()
Returns the month of the date.
### 16\.3\.4 getYear()
Returns the year of the date.
### 16\.3\.5 getWeekday()
Returns the day of the week as an `int` (1:Sun 2:Mon 3:Tue 4:Wed 5:Thu 6:Fri 7:Sat).
### 16\.3\.6 getYearday()
Returns the day of the year, counting January 1st as 1 and December 31st as 365.
### 16\.3\.7 is\_na()
Returns `true` if this object is NA.
16\.4 Execution result
----------------------
```
Date d("2016-1-1");
Rcout << d.format("%d/%m/%Y") << endl; // "1/1/2016"
Rcout << d.getDay() << endl; // 1
Rcout << d.getMonth() << endl; // 1
Rcout << d.getYear() << endl; // 2016
Rcout << d.getWeekday() << endl; // 6
Rcout << d.getYearday() << endl; // 1
```
16\.5 DateVector subsetting
---------------------------
Internally, both `DateVector` and `DatetimeVector` are stored as numeric types. This can cause confusion when subsetting a `DateVector` with `[]`, as the item extracted is a `double`, not a `Date` as you might assume. For example, the code below looks logical but fails to compile because `dates[i]` (a `double`) has no `getYear()` method.
```
// [[Rcpp::export]]
void print_years(DateVector dates) {
for (auto i = 0; i < dates.length(); i++) {
Rcout << dates[i].getYear() << std::endl;
}
}
```
Here’s the compiler error (edited for brevity):
```
E> error: request for member 'getYear' in '[...]' which is of non-class type 'const type {aka const double}'
E> Rcout << dates[i].getYear() << std::endl;
E> ^
```
To make this work you can explicitly create a `Date` object from `dates[i]`:
```
// [[Rcpp::export]]
void print_years(DateVector dates) {
for (auto i = 0; i < dates.length(); i++) {
Date d = dates[i]; // Create a `Date`
Rcout << d.getYear() << std::endl;
}
}
```
As mentioned in [this StackOverflow question](https://stackoverflow.com/questions/55981439/rcpp-error-comparing-a-datevector-element-with-a-date), if you actually want a `std::vector<Date>` you can use the `DateVector.getDates()` method.
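A minimal sketch of that approach (the function name `print_years_alt` is an assumption); `getDates()` copies the contents into a `std::vector<Date>` whose elements do have the `Date` member functions:
```
// [[Rcpp::export]]
void print_years_alt(DateVector dates) {
    std::vector<Date> dv = dates.getDates();
    for (std::size_t i = 0; i < dv.size(); ++i) {
        Rcout << dv[i].getYear() << std::endl;
    }
}
```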
Chapter 17 Datetime
===================
`Datetime` is a scalar type corresponding to the elements of `DatetimeVector`.
17\.1 Creating Datetime object
------------------------------
As with `Date`, a `Datetime` can be created either by specifying the number of seconds since 1970-01-01 00:00:00 UTC (Coordinated Universal Time) or by explicitly specifying the date and time.
To specify the date and time explicitly, use the form `Datetime dt(str, format)`, which converts the string `str` to a `Datetime` according to the format string `format`. Please refer to `help(strptime)` in R for the symbols used in the format string.
```
// Creating a Datetime object from the elapsed seconds since 00:00:00 on January 1, 1970
Datetime dt; // "1970-01-01 00:00:00 UTC"
Datetime dt(10.1); // "1970-01-01 00:00:00 UTC" + 10.1 sec
// Creating by specifying the date and time
// The default format is "%Y-%m-%d %H:%M:%OS"
// The specified date and time are interpreted as the date and time of the local time zone
Datetime dt("2000-01-01 00:00:00");
Datetime dt("2000年1月1日 0時0分0秒", "%Y年%m月%d日 %H時%M分%OS秒");
```
17\.2 Time zone
---------------
`Datetime` internally manages the date and time as seconds (a real number) from `1970-01-01 00:00:00` in Coordinated Universal Time (UTC). For example, `Datetime dt(10)` represents the point 10 seconds after `1970-01-01 00:00:00 UTC`. When this value is returned to R, it is displayed as the time converted to the time zone in which R is running. For example, in Japan, Japan Standard Time (JST) is UTC + 9 hours, so `Datetime dt(10)` will be displayed as `1970-01-01 09:00:10 JST`.
When creating a Datetime object in the form `Datetime dt(str, format)`, `str` is interpreted as the time in the local time zone. For example, if you run `Datetime dt("2000-01-01 00:00:00");` in Japan Standard Time (JST), the value `1999-12-31 15:00:00 UTC` is set internally.
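A minimal sketch illustrating this behavior (the function name `rcpp_tz_demo` is an assumption; the displayed local time depends on the time zone of the R session):
```
// [[Rcpp::export]]
Datetime rcpp_tz_demo() {
    // The string is interpreted in the local time zone of the R session
    Datetime dt("2000-01-01 00:00:00");
    // Internally stored as seconds since 1970-01-01 00:00:00 UTC
    Rcout << "seconds since epoch: " << dt.getFractionalTimestamp() << "\n";
    // When returned to R, it is displayed in the R session's time zone
    return dt;
}
```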
17\.3 Operators
---------------
The `+, -, <, >, >=, <=, ==, !=` operators are defined in `Datetime`.
By using these operators, you can perform addition of seconds (`+`), difference of datetime (`-`) in seconds, comparison of datetime (`<, <=, >, >=, ==, !=`) .
```
Datetime dt1("2000-01-01 00:00:00");
Datetime dt2("2000-01-02 00:00:00");
// difference of datetime (seconds)
int sec = dt2 - dt1; // 86400
// addition of seconds
dt1 = dt1 + 1; // "2000-01-01 00:00:01"
// comparison of datetime
bool b = dt2 > dt1; // true
```
17\.4 Member functions
----------------------
Note: The values returned by these member functions are the date and time interpreted in Coordinated Universal Time (UTC). They may therefore look different from the date and time in the user's time zone. (For example, refer to the execution result of the code at the end of this chapter.)
### 17\.4\.1 getFractionalTimestamp()
Returns the number of seconds (real number) from the base date (1970\-01\-01 00: 00: 00 UTC).
### 17\.4\.2 getMicroSeconds()
Returns the microseconds of the date and time in Coordinated Universal Time. This value expresses the fractional part of the seconds in units of microseconds (i.e. 0.1 second = 100000 microseconds).
### 17\.4\.3 getSeconds()
Returns the second of the date and time in Coordinated Universal Time.
### 17\.4\.4 getMinutes()
Returns the minute of the date and time in Coordinated Universal Time.
### 17\.4\.5 getHours()
Returns the hour of the date and time in Coordinated Universal Time.
### 17\.4\.6 getDay()
Returns the day of the date and time in Coordinated Universal Time.
### 17\.4\.7 getMonth()
Returns the month of the date and time in Coordinated Universal Time.
### 17\.4\.8 getYear()
Returns the year of the date and time in Coordinated Universal Time.
### 17\.4\.9 getWeekday()
Returns the day of the week of the date and time in Coordinated Universal Time in `int`.
1:Sun 2:Mon 3:Tue 4:Wed 5:Thu 6:Fri 7:Sat
### 17\.4\.10 getYearday()
Returns the day of the year, counting January 1st as 1 and December 31st as 365 (or 366 in a leap year).
### 17\.4\.11 is\_na()
Returns `true` if this object is `NA`.
17\.5 Code example
------------------
The code example below shows the result of execution in a Japan Standard Time (JST) environment.
```
// [[Rcpp::export]]
Datetime rcpp_datetime(){
// Creating a Datetime object by specifying the date and time
Datetime dt("2000-01-01 00:00:00");
// Displaying parts of the Datetime object in Coordinated Universal Time
Rcout << "getYear " << dt.getYear() << "\n";
Rcout << "getMonth " << dt.getMonth() << "\n";
Rcout << "getDay " << dt.getDay() << "\n";
Rcout << "getHours " << dt.getHours() << "\n";
Rcout << "getMinutes " << dt.getMinutes() << "\n";
Rcout << "getSeconds " << dt.getSeconds() << "\n";
Rcout << "getMicroSeconds " << dt.getMicroSeconds() << "\n";
Rcout << "getWeekday " << dt.getWeekday() << "\n";
Rcout << "getYearday " << dt.getYearday() << "\n";
Rcout << "getFractionalTimestamp " << dt.getFractionalTimestamp() << "\n";
return dt;
}
```
Execution result
You can see that the time output is 9 hours behind Japan Standard Time (JST).
```
> rcpp_datetime()
getYear 1999
getMonth 12
getDay 31
getHours 15
getMinutes 0
getSeconds 0
getMicroSeconds 0
getWeekday 6
getYearday 365
getFractionalTimestamp 9.46652e+08
[1] "2000-01-01 JST"
```
Chapter 18 RObject
==================
The `RObject` type can hold any type of object defined in Rcpp. Use `RObject` when you do not know at run time what type will be passed to the variable.
18\.1 Member functions
----------------------
`RObject` has the following member functions. These member functions also exist in all other API classes (such as `NumericVector`) in Rcpp.
### 18\.1\.1 inherits(str)
Returns `true` if this object inherits the class specified by the string `str`.
### 18\.1\.2 slot(name)
Accesses the slot specified by the character string `name` if this object is `S4`.
### 18\.1\.3 hasSlot(name)
Returns `true` if this object has a slot with the name specified by the character string `name`.
### 18\.1\.4 attr(name)
Accesses the attribute specified by the string `name`.
### 18\.1\.5 attributeNames()
Returns the names of all the attributes of this object as `std::vector<std::string>`.
### 18\.1\.6 hasAttribute(name)
Returns `true` if this object has an attribute with the name specified by the string `name`.
### 18\.1\.7 isNULL()
Returns `true` if this object is `NULL`.
### 18\.1\.8 sexp\_type()
Returns the `SEXPTYPE` of this object as `int`. See the [R internals](https://cran.r-project.org/doc/manuals/r-release/R-ints.html#SEXPTYPEs) for a list of all `SEXPTYPE`s defined in R.
### 18\.1\.9 isObject()
Returns `true` if this object has a “class” attribute.
### 18\.1\.10 isS4()
Returns `true` if this object is an `S4` object.
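The sketch below (the function name is chosen only for illustration) applies a few of these member functions to an arbitrary object received from R:
```
// [[Rcpp::export]]
void rcpp_robject_info(RObject x){
    Rcout << "isS4                : " << x.isS4() << "\n";
    Rcout << "isObject            : " << x.isObject() << "\n";
    Rcout << "sexp_type           : " << x.sexp_type() << "\n";
    Rcout << "inherits data.frame : " << x.inherits("data.frame") << "\n";
    Rcout << "has names attribute : " << x.hasAttribute("names") << "\n";
}
```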
18\.2 Determining type of object assigned to RObject
----------------------------------------------------
One useful application of `RObject` is determining the type of an object. To determine which type is assigned to an `RObject`, use the `is<T>()` function or the member functions `isS4()` and `isNULL()`.
However, matrices and factor vectors cannot be identified using `is<T>()` alone, because they are vectors with specific attributes set. To identify them, use the `Rf_isMatrix()` function or the `Rf_isFactor()` function.
The code example below shows how to determine the type using `RObject`.
```
// [[Rcpp::export]]
void rcpp_type(RObject x){
if(is<NumericVector>(x)){
if(Rf_isMatrix(x)) Rcout << "NumericMatrix\n";
else Rcout << "NumericVector\n";
}
else if(is<IntegerVector>(x)){
if(Rf_isFactor(x)) Rcout << "factor\n";
else Rcout << "IntegerVector\n";
}
else if(is<CharacterVector>(x))
Rcout << "CharacterVector\n";
else if(is<LogicalVector>(x))
Rcout << "LogicalVector\n";
else if(is<DataFrame>(x))
Rcout << "DataFrame\n";
else if(is<List>(x))
Rcout << "List\n";
else if(x.isS4())
Rcout << "S4\n";
else if(x.isNULL())
Rcout << "NULL\n";
else
Rcout << "unknown\n";
}
```
Use `as<T>()` to convert `RObject` to another Rcpp type after determining the type.
```
// Converting `RObject` to `NumericVector`
RObject x;
NumericVector v = as<NumericVector>(x);
```
18\.3 Using `RObject` to write functions that receive various datatypes (function templates)
--------------------------------------------------------------------------------------------
C\+\+ allows us to avoid code duplication by writing [templated functions](https://www.programiz.com/cpp-programming/templates) that are not limited to specific datatypes.
**Note:** This section does not cover C\+\+ templates in any detail \- it only shows how to implement them in Rcpp. It also assumes your compiler can use C\+\+11 syntax. If not, please refer to [this Rcpp Gallery example](https://gallery.rcpp.org/articles/rcpp-return-macros/) by Nathan Russell which covers the process in much more detail (and which provided the source material for this section).
Say we have a very simple C\+\+ templated function that prints “Hello” and then returns its input.
```
template<typename T>
T say_hello(T x) {
std::cout << "Hello!" << std::endl;
return x;
}
```
The `T` in this function is a placeholder for any other type. You can use any identifier you like, but `T` is conventional. At compile time the compiler analyses our code and creates new functions with `T` replaced with the relevant datatypes, so we don’t have to write separate versions of `say_hello()` for `int`, `float`, `double` etc.
In Rcpp the process is almost identical, but we have to jump through a couple of extra hoops.
A first attempt at an Rcpp version of the function might look very similar:
```
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
template<typename T>
T say_hello(T x) {
Rcout << "Hello!" << std::endl;
return x;
}
```
However, this will not compile. To quote [Nathan Russell’s article](https://gallery.rcpp.org/articles/rcpp-return-macros/):
> Sadly this does not work: magical as Rcpp attributes may be, there are limits to what they can do, and at least for the time being, translating C\+\+ template functions into something compatible with R’s C API is out of the question.
Fortunately Rcpp provides two macros, `RCPP_RETURN_VECTOR` and `RCPP_RETURN_MATRIX` that make it very simple to implement a templated function.
```
template<int RTYPE>
Vector<RTYPE> printer(Vector<RTYPE> x) {
Rcout << "Hello!" << std::endl;
return x;
}
// [[Rcpp::export]]
RObject say_hello(RObject x) {
RCPP_RETURN_VECTOR(printer, x);
}
```
The exported `say_hello()` function now works as expected in R.
```
say_hello("a")
## Hello!
## [1] "a"
say_hello(1:10)
## Hello!
## [1] 1 2 3 4 5 6 7 8 9 10
say_hello(FALSE)
## Hello!
## [1] FALSE
```
Note that the template specification looks slightly different: `template<int RTYPE>` rather than `template<typename T>` in the earlier example. This is called a [non\-type parameter](https://www.learncpp.com/cpp-tutorial/134-template-non-type-parameters/), and is used because [the type of an RObject is stored as an integer](https://cran.r-project.org/doc/manuals/r-release/R-ints.html#SEXPTYPEs). Again, Nathan’s article provides more details on this. As with the use of `T` in the first example, the use of `RTYPE` as an alias for the type is entirely arbitrary.
Also note that `printer()` and `say_hello()` have different names because otherwise the call to `RCPP_RETURN_VECTOR` is ambiguous and the code will not compile. If you would prefer both functions to have the same name, you can use [namespaces](https://en.cppreference.com/w/cpp/language/namespace) as shown below.
```
namespace internal {
template<int RTYPE>
Vector<RTYPE> say_hello(Vector<RTYPE> x) {
Rcout << "Hello!" << std::endl;
return x;
}
} // end of "internal" namespace
// [[Rcpp::export]]
RObject say_hello(RObject x) {
RCPP_RETURN_VECTOR(internal::say_hello, x);
}
```
Chapter 19 Cautions in handling Rcpp objects
============================================
19\.1 Assigning between vectors
-------------------------------
When you assign an object `v1` to another object `v2` using the `=` operator (`v2 = v1;`), the element values of `v1` are not copied to `v2`; instead, `v2` becomes an alias of `v1`. Thus, if you change the value of some elements in `v1`, the change is also applied to `v2`. Use `clone()` if you want to avoid this coupling between objects (see the sample code below).
The sample code below shows the difference between a shallow copy and a deep copy when you change the value of one vector after assignment.
```
NumericVector v1 = {1,2,3}; // create a vector v1
NumericVector v2 = v1; // v1 is assigned to v2 through shallow copy.
NumericVector v3 = clone(v1); // v1 is assigned to v3 through deep copy.
v1[0] = 100; // changing the value of an element of v1
// Following output shows that
// the modification of v1 element
// is also applied to v2 but not to v3
Rcout << "v1 = " << v1 << endl; // 100 2 3
Rcout << "v2 = " << v2 << endl; // 100 2 3
Rcout << "v3 = " << v3 << endl; // 1 2 3
```
As an explanation for readers with deeper knowledge of C\+\+: an Rcpp object does not hold the value of the R object itself (e.g. the elements of a vector), but holds a pointer to the R object. Thus, if you assign an object through `v2 = v1;`, the pointer value of `v1` is copied to `v2`, so both `v1` and `v2` point to the same R object. This is called “shallow copy”. On the other hand, if you assign an object through `v2 = clone(v1);`, the value of the R object that `v1` points to is copied to `v2` as a new R object. This is called “deep copy”.
19\.2 Data type of numerical index
----------------------------------
The maximum number of vector elements is limited to 2^31 \- 1 in versions of R before 3\.0\.0 and in 32 bit builds of R, because `int` is used as the data type for numerical indices. However, long vectors are supported by 64 bit builds of R since version 3\.0\.0\. You should use `R_xlen_t` as the data type for numerical indices and element counts to support long vectors in your Rcpp code.
```
// [[Rcpp::export]]
double rcpp_sum_long(NumericVector v){
    // Declare the number of elements "n" using R_xlen_t
    R_xlen_t n = v.length();
    double sum = 0;
    // Declare the numerical index "i" using R_xlen_t
    for(R_xlen_t i=0; i<n; ++i){
        sum += v[i];
    }
    return sum;
}
```
19\.3 Return type of operator\[]
--------------------------------
When you access vector elements using the `[]` or `()` operator, the return type is not `Vector` itself but `Vector::Proxy`. Thus, passing `v[i]` directly to a function will cause a compile error if the function only supports the `Vector` type. To avoid the compile error, assign `v[i]` to a new object or convert it to type `T` using `as<T>()`.
```
NumericVector v {1,2,3,4,5};
IntegerVector i {1,3};
// Compile error
//double x1 = sum(v[i]);
// Save as new object
NumericVector vi = v[i];
double x2 = sum(vi);
// Convert to NumericVector using as<T>()
double x3 = sum(as<NumericVector>(v[i]));
```
Chapter 20 Attributes
=====================
20\.1 Attributes related functions
----------------------------------
The following member functions are used to access the attributes of Rcpp objects.
**attr( name )**
Accesses the attribute specified by the character string “name”, and gets or sets its value.
```
List L;
L.attr("class") = "my_class";
```
**attributeNames()**
Returns the names of the attributes this object has. Since the return type of this function is `std::vector<std::string>`, use the `wrap()` function if you want to convert it to a `CharacterVector`.
```
CharacterVector ch = wrap(x.attributeNames());
```
**hasAttribute( name )**
If this object has an attribute with the name specified by the string “name”, it returns `true`.
```
bool b = x.hasAttribute("name");
```
```
// Creating a List object
NumericVector v1 = {1,2,3,4,5};
CharacterVector v2 = {"A","B","C"};
List L = List::create(v1, v2);
// Setting element names
L.attr("names") = CharacterVector::create("x", "y");
// Creating a new attribute and set its value
L.attr("new_attribute") = "new_value";
// Changing the class name of this object to "new_class"
L.attr("class") = "new_class";
// Outputting a list of the names of the attributes of this object
CharacterVector ch = wrap(L.attributeNames());
Rcout << ch << "\n"; // "names" "new_attribute" "class"
// Check if this object has the attribute "new_attribute".
bool b = L.hasAttribute("new_attribute");
Rcout << b << "\n"; // 1
```
20\.2 Access functions for common attributes
--------------------------------------------
Dedicated accessor functions are provided for some frequently used attributes, such as element names.
```
// Element names: the following two statements are synonymous
x.attr("names");
x.names();
```
The code example below shows how to access common attributes.
```
NumericVector v;
v.attr("names"); // Element names
v.names(); // Element names
NumericMatrix m;
m.ncol(); // Number of columns
m.nrow(); // Number of rows
m.attr("dim") = NumericVector::create( nrows, ncols );
m.attr("dimnames") = List::create( row_names, col_names );
DataFrame df;
df.attr("names"); // column names
df.attr("row.names"); // row names
List L;
L.names(); // Element names
```
Chapter 21 R\-like functions
============================
Here is a list of Rcpp functions similar to R functions.
Also, if you can guarantee that the vector passed to these functions contains no `NA`, you can mark it with `noNA()`. The functions below then skip their `NA` checks, which may speed up the calculation.
```
double res = mean(noNA(v));
```
21\.1 List of R\-like functions
-------------------------------
* [Vector related functions](210_rcpp_functions.html#vector-related-functions)
* [String related functions](210_rcpp_functions.html#string-related-functions)
* [Functions related to finding values](210_rcpp_functions.html#functions-related-to-finding-values)
* [Functions related to duplicated values](210_rcpp_functions.html#functions-related-to-duplicated-values)
* [Functions related to set operation](210_rcpp_functions.html#functions-related-to-set-operation)
* [Functions related to maximum and minimum values](210_rcpp_functions.html#functions-related-to-maximum-and-minimum-values)
* [Functions related to summaries](210_rcpp_functions.html#functions-related-to-summaries)
* [Functions related to rounding values](210_rcpp_functions.html#functions-related-to-rounding-values)
* [Functions related to math](210_rcpp_functions.html#functions-related-to-math)
* [Functions related to logical values](210_rcpp_functions.html#functions-related-to-logical-values)
* [Functions related to NA Inf NaN](210_rcpp_functions.html#functions-related-to-na-inf-nan)
* [apply functions](210_rcpp_functions.html#apply-functions)
* [cbind function](210_rcpp_functions.html#cbind-function)
### 21\.1\.1 Vector related functions
**head(v, n)**
Returns a vector of the first `n` elements of the vector `v`.
**tail(v, n)**
Returns a vector of the last `n` elements of the vector `v`.
**rev(v)**
Returns a vector with the elements of vector `v` arranged in reverse order.
**rep(x, n)**
Returns a vector in which `x` is repeated `n` times. `x` can be a scalar or a vector.
**rep\_each(v, n)**
Returns a vector in which each element of vector `v` is repeated `n` times.
**rep\_len(v, n)**
Returns a vector in which the vector `v` is repeated until the length of the result becomes `n`.
**seq(start, end)**
Returns a vector of consecutive integers from `start` to `end`.
**seq\_along(v)**
Returns a vector of consecutive integers from 1 to the number of elements of vector `v`.
**seq\_len(n)**
Returns a vector of consecutive integers from 1 to `n`.
**diff(v)**
Returns a vector computed as `v[i + 1] - v[i]` for each index `i`, excluding the last element of the vector `v`.
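A short sketch of these functions follows; the values in the comments are the results one would expect:
```
NumericVector v = {1, 2, 3, 4, 5};
NumericVector h  = head(v, 2);     // 1 2
NumericVector t  = tail(v, 2);     // 4 5
NumericVector r  = rev(v);         // 5 4 3 2 1
NumericVector r1 = rep(v, 2);      // 1 2 3 4 5 1 2 3 4 5
NumericVector r2 = rep_each(v, 2); // 1 1 2 2 3 3 4 4 5 5
NumericVector r3 = rep_len(v, 7);  // 1 2 3 4 5 1 2
IntegerVector s1 = seq(2, 5);      // 2 3 4 5
IntegerVector s2 = seq_along(v);   // 1 2 3 4 5
IntegerVector s3 = seq_len(3);     // 1 2 3
NumericVector d  = diff(v);        // 1 1 1 1
```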
### 21\.1\.2 String related functions
**collapse(v)**
Returns a `String` obtained by concatenating all elements of the `CharacterVector` `v`.
### 21\.1\.3 Finding values
**match(v, table)**
Returns an integer vector containing the R\-style numerical index (starting from 1\) of the element of vector `table` whose value matches each element of vector `v`. Namely, if you execute `res = match(v, table);`, then `res[i] == j + 1`, where `j` is the smallest index satisfying `v[i] == table[j]`.
**self\_match(v)**
Synonymous with `match(v, v)`, i.e. passing the same vector as both arguments of `match()`.
**which\_max(v)**
Returns the numerical index of the largest element of the vector v.
**which\_min(v)**
Returns the numerical index of the smallest element of the vector v.
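For example, a small sketch of `match()` (the expected result is shown in the comment; unmatched values become `NA`):
```
NumericVector v   = {10, 20, 30, 20};
NumericVector tbl = {20, 30};
IntegerVector res = match(v, tbl); // NA 1 2 1
```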
### 21\.1\.4 Duplicated values
**duplicated(v)**
Returns a vector containing 1 for each element of vector `v` whose value has already appeared in an earlier element, and 0 otherwise.
**unique(v)**
Returns a vector with duplicated values removed from the vector `v`.
**sort\_unique(v)**
Returns a vector with duplicated values removed from the vector `v` and the remaining values sorted in ascending order.
### 21\.1\.5 Set operation
**setdiff(v1,v2\)**
Returns a vector of the unique elements of vector `v1` that are not contained among the unique elements of vector `v2`.
**setequal(v1,v2\)**
Returns true if the unique element of vector `v1` is equal to the unique element of vector `v2`.
**intersect(v1,v2\)**
Returns a vector containing elements contained in both the unique element of vector `v1` and the unique element of vector `v2`.
**union\_(v1,v2\)**
Returns a vector obtained by combining the elements of vectors `v1` and `v2` and removing duplicated values.
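A brief sketch of the duplicate-handling and set functions (the element order of the hash-based results may differ from what is shown):
```
IntegerVector v1 = {1, 2, 2, 3};
IntegerVector v2 = {2, 3, 4};
IntegerVector u  = unique(v1);        // 1 2 3 (order not guaranteed)
IntegerVector su = sort_unique(v1);   // 1 2 3
IntegerVector sd = setdiff(v1, v2);   // 1
IntegerVector is = intersect(v1, v2); // 2 3 (order not guaranteed)
IntegerVector un = union_(v1, v2);    // 1 2 3 4 (order not guaranteed)
```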
### 21\.1\.6 Functions related to maximum and minimum values
**min(v)**
Returns the minimum value of the vector `v`.
**max(v)**
Returns the maximum value of the vector `v`.
**cummin(v)**
Returns the cumulative minimum elements of vector `v`
**cummax(v)**
Returns the cumulative maximum elements of vector `v`
**pmin(v1,v2\)**
Compares the corresponding elements of vectors `v1` and `v2` and returns a vector containing the smaller of each pair.
**pmax(v1,v2\)**
Compares the corresponding elements of vectors `v1` and `v2` and returns a vector containing the larger of each pair.
**range(v)**
Returns a vector consisting of the minimum and maximum values of vector `v`.
**clamp(min, v, max)**
Returns a vector in which the elements of vector `v` smaller than `min` are replaced with `min` and the elements larger than `max` are replaced with `max`.
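A sketch of these functions with the expected results in comments:
```
NumericVector v = {3, 1, 4, 1, 5};
NumericVector w = {2, 2, 2, 2, 2};
double mn = min(v);               // 1
double mx = max(v);               // 5
NumericVector cmn = cummin(v);    // 3 1 1 1 1
NumericVector cmx = cummax(v);    // 3 3 4 4 5
NumericVector pn  = pmin(v, w);   // 2 1 2 1 2
NumericVector px  = pmax(v, w);   // 3 2 4 2 5
NumericVector rg  = range(v);     // 1 5
NumericVector cl  = clamp(2, v, 4); // 3 2 4 2 4
```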
### 21\.1\.7 Functions related to summaries
**sum(v)**
Returns the sum of the elements of vector `v`.
**mean(v)**
Returns the arithmetic mean of the elements of vector `v`.
**median(v)**
Returns the median value of the elements of vector `v`.
**sd(v)**
Returns the standard deviation of the elements of vector `v`.
**var(v)**
Returns the variance of the elements of vector `v`.
**cumsum(v)**
Returns the cumulative sum of the elements of vector `v`
**cumprod(v)**
Returns the cumulative product of the elements of vector `v`
**table(v)**
Returns a named integer vector that counts the number of elements for each unique element of vector `v`.
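A sketch of the summary functions (expected results in comments):
```
NumericVector v = {1, 2, 3, 4};
double s     = sum(v);    // 10
double m     = mean(v);   // 2.5
double md    = median(v); // 2.5
double sd_v  = sd(v);     // about 1.291
double var_v = var(v);    // about 1.667
NumericVector cs = cumsum(v);  // 1 3 6 10
NumericVector cp = cumprod(v); // 1 2 6 24
```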
### 21\.1\.8 Functions related to rounding values
**floor(v)**
Returns a vector containing the largest integer not greater than each element of vector `v`.
**ceil(v)**
Returns a vector containing the smallest integer not smaller than each element of vector `v`.
**ceiling(v)**
Synonymous with `ceil()`.
**round(v, digits)**
Returns a vector obtained by rounding each element of the vector `v` to `digits` decimal places.
**trunc(v)**
Returns a vector with the decimal part of each element truncated.
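A sketch of the rounding functions (expected results in comments):
```
NumericVector v = {1.2, -1.2, 2.7};
NumericVector f = floor(v);    // 1 -2 2
NumericVector c = ceiling(v);  // 2 -1 3
NumericVector r = round(v, 0); // 1 -1 3
NumericVector t = trunc(v);    // 1 -1 2
```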
### 21\.1\.9 Functions related to math
**sign(v)**
Returns a vector with the signs of the corresponding elements of `v` (the sign of a real number is 1, 0, or \-1 if the number is positive, zero, or negative, respectively).
**abs(v)**
Returns a vector containing the absolute value of each element of vector `v`.
**pow(v, n)**
Returns a vector by raising each element of the vector `v` to the `n`th power.
**sqrt(v)**
Returns a vector containing square root of each element of vector v.
**exp(v)**
Returns a vector obtained by raising Napier’s number (e) to the power of each element of the vector `v`.
**expm1(v)**
Synonymous with `exp(v) - 1`
**log(v)**
Returns a vector containing the natural logarithmic of each element of vector `v`.
**log10(v)**
Returns a vector containing the common logarithmic of each element of vector `v`.
**log1p(v)**
Synonymous with `log(v+1)`
**sin(v)**
Returns a vector containing sine of each element of vector `v`.
**sinh(v)**
Returns a vector containing hyperbolic sine of each element of vector `v`.
**cos(v)**
Returns a vector containing cosine of each element of vector `v`.
**cosh(v)**
Returns a vector containing hyperbolic cosine of each element of vector `v`.
**tan(v)**
Returns a vector containing tangent of each element of vector `v`.
**tanh(v)**
Returns a vector containing hyperbolic tangent of each element of vector `v`.
**acos(v)**
Returns a vector containing arccosine of each element of vector `v`.
**asin(v)**
Returns a vector containing arcsine of each element of vector `v`.
**atan(v)**
Returns a vector containing arctangent of each element of vector `v`.
**gamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the gamma function.
**lgamma(v)**
Synonymous with `log(gamma(v))`
**digamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the first derivative function of `lgamma()`.
**trigamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the second derivative function of `lgamma()`.
**tetragamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the third derivative function of `lgamma()`.
**pentagamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the fourth derivative function of `lgamma()`.
**psigamma(v, deriv)**
Returns a vector obtained by transforming each element of vector `v` with the `deriv`\-th derivative function of `lgamma()`.
**factorial(v)**
Returns a vector containing the factorial of each element of vector `v`.
**lfactorial(v)**
Synonymous with `log(factorial(v))`
**choose(vn, vk)**
Returns a vector obtained by calculating binomial coefficients using the corresponding elements of real vector `vn` and integer vector `vk`.
**lchoose(vn, vk)**
Synonymous with `log(choose(vn, vk))`
**beta(va, vb)**
Returns a vector obtained by calculating the value of the beta function using the corresponding elements of vectors `va` and `vb`.
**lbeta(va, vb)**
Synonymous with `log(beta(va, vb))`
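A short sketch of a few of the math functions (expected results in comments):
```
NumericVector v = {1.0, 4.0, 9.0};
NumericVector s = sqrt(v);   // 1 2 3
NumericVector p = pow(v, 2); // 1 16 81
NumericVector a = abs(NumericVector::create(-1.5, 2.5));      // 1.5 2.5
NumericVector l = log10(NumericVector::create(1, 10, 100));   // 0 1 2
```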
### 21\.1\.10 Functions related to logical values
**all(v)**
Receives a `LogicalVector` `v` and returns a `SingleLogicalResult` meaning `TRUE` when all of its elements are `TRUE`.
**any(v)**
Receives a `LogicalVector` `v` and returns a `SingleLogicalResult` meaning `TRUE` when any of its elements is `TRUE`.
**is\_true(x)**
Receives a return value of `all()` or `any()` and returns bool type `true` if it means `TRUE`.
**is\_false(x)**
Receives a return value of `all()` or `any()` and returns bool type `true` if it means `FALSE`.
**is\_na(x)**
Receives a return value of `all()` or `any()` and returns bool type `true` if it means `NA`.
**ifelse(v, x1, x2\)**
Receives the `LogicalVector` `v` and returns a vector containing the corresponding element of `x1` where an element of `v` is `TRUE` and of `x2` where it is `FALSE`. `x1` and `x2` can be vectors or scalars, but vector arguments must have the same length as `v`.
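A sketch of these logical helpers (expected results in comments):
```
LogicalVector v = LogicalVector::create(true, true, false);
bool b1 = is_true(all(v)); // false
bool b2 = is_true(any(v)); // true
NumericVector x1 = {1, 2, 3};
NumericVector x2 = {11, 12, 13};
NumericVector res = ifelse(v, x1, x2); // 1 2 13
```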
### 21\.1\.11 Functions related to NA Inf NaN
**na\_omit(v)**
Returns a vector with `NA` removed from vector `v`.
**is\_finite(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is not `Inf` nor `-Inf` nor `NA`.
**is\_infinite(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is `Inf` or `-Inf` or `NA`.
**is\_na(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is `NA` or `NaN`.
**is\_nan(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is `NaN`.
### 21\.1\.12 apply functions
**lapply(x, fun)**
Applies a C\+\+ function `fun` to each element of the vector `x` and returns the result as `List`.
**sapply(x, fun)**
Applies a C\+\+ function `fun` to each element of the vector `x` and returns the result as `Vector`.
**mapply(x1, x2, fun2\)**
Applies a C\+\+ function `fun2` that receives two arguments to the corresponding elements of the vectors `x1` and `x2` and returns the result as a `Vector`.
**mapply(x1, x2, x3, fun3\)**
Applies a C\+\+ function `fun3` that receives three arguments to the corresponding elements of the vectors `x1`, `x2` and `x3` and returns the result as a `Vector`.
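A minimal sketch of `sapply()` with a plain C++ function (the function names are illustrative):
```
double square(double x){
    return x * x;
}
// [[Rcpp::export]]
NumericVector rcpp_sapply_example(NumericVector v){
    // Applies square() to every element of v and collects the results in a vector
    NumericVector res = sapply(v, square);
    return res;
}
```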
### 21\.1\.13 cbind function
**cbind(x1, x2,…)**
Takes `Vector` or `Matrix` arguments `x1`, `x2`, `...` and combines them by columns, returning the result as a `Matrix` or a `DataFrame`. You can pass up to 50 arguments.
### 21\.1\.14 sampling
**Vector sample(Vector x, int size, replace \= false, probs \= R\_NilValue)**
As with the `sample` function in R, this function draws a sample from a vector `x`.
* `x` : the vector you want to draw a sample from.
* `size` : the sample size of the returned vector.
* `replace` : whether sampling should be done with replacement. Default `false`.
* `probs` : a vector specifying the probability weight of each element of vector `x`. Default `R_NilValue` (equal weight for all elements).
`Vector<RTYPE> sample(const Vector<RTYPE>& x, int size, bool replace = false, sugar::probs_t probs = R_NilValue)`
**sample(n, size, replace \= false, probs \= R\_NilValue, one\_based \= true)**
As with the `sample.int` function in R.
`Vector<INTSXP> sample(int n, int size, bool replace = false, sugar::probs_t probs = R_NilValue, bool one_based = true);`
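A minimal sketch based on the signatures above (the exported function name is illustrative):
```
// [[Rcpp::export]]
NumericVector rcpp_sample_example(NumericVector x){
    // Draws 3 elements from x without replacement, with equal weights
    return sample(x, 3, false);
}
```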
21\.1 List of R\-like functions
-------------------------------
* [Vector related functions](210_rcpp_functions.html#vector-related-functions)
* [String related functions](210_rcpp_functions.html#string-related-functions)
* [Functions related to finding values](#functions-related-to-finding-values)
* [Functions related to duplicated values](#functions-related-to-duplicated-values)
* [Functions related to set operation](#functions-related-to-set-operation)
* [Functions related to maximum and minimum values](210_rcpp_functions.html#functions-related-to-maximum-and-minimum-values)
* [Functions related to summaries](210_rcpp_functions.html#functions-related-to-summaries)
* [Functions related to rounding values](210_rcpp_functions.html#functions-related-to-rounding-values)
* [Functions related to math](210_rcpp_functions.html#functions-related-to-math)
* [Functions related to logical values](210_rcpp_functions.html#functions-related-to-logical-values)
* [Functions related to NA Inf NaN](210_rcpp_functions.html#functions-related-to-na-inf-nan)
* [apply functions](210_rcpp_functions.html#apply-functions)
* [cbind function](210_rcpp_functions.html#cbind-function)
### 21\.1\.1 Vector related functions
**head(v, n)**
Returns a vector of `n` elements from the first element of the vector `v`.
**tail(v, n)**
Returns a vector of last `n` elements from the last element of vector `v`.
**rev(v)**
Returns a vector that elements of vector `v` are arranged in reverse order.
**rep(x, n)**
Returns a vector that `x` is repeated `n` times. `x` can be a scalar or vector.
**rep\_each(v, n)**
Returns a vector that each elements of vector v are repeated `n` times.
**rep\_len(v, n)**
Returns a vector that the vector `v` is repeated until the length of the vector becomes `n`.
**seq(start, end)**
Returns a vector of consecutive integers from `start` to `end`.
**seq\_along(v)**
Returns a vector of consecutive integers from 1 to the number of elements of vector `v`
**seq\_len(n)**
Returns a vector of consecutive integers from 1 to `n`
**diff(v)**
Returns a vector that computed `v[i + 1] - v[i]` for each element `i` excluding the last element of the vector `v`.
### 21\.1\.2 String related functions
**collapse(v)**
Returns a string concatenated with each element of the `CharacterVector` v as `String` type.
### 21\.1\.3 Finding values
**match(v, table)**
Returns an integer vector containing the R style numerical index (starting from 1\) of the element of vector `table` that match value to each elements of vector `x`. Namely、if you execute `res = match(v, table);`, then it will be `res[i] == j+1` where `j` equals to minimum `j` satisfying `v[i] == table[j]`
**self\_match(v)**
Synonymous with passing the same vector to `match (v, table)`.
**which\_max(v)**
Returns the numerical index of the largest element of the vector v.
**which\_min(v)**
Returns the numerical index of the smallest element of the vector v.
### 21\.1\.4 Duplicated values
**duplicated(v)**
Returns a vector containing 1 if the value of each element of vector v exists in the previous element, containing 0 if not.
**unique(v)**
Returns a vector that eliminates the duplication of the element value from the vector `v`.
**sort\_unique(v)**
Returns a vector that eliminates the duplication of the element value from the vector `v` and sorts the values in ascending order.
### 21\.1\.5 Set operation
**setdiff(v1,v2\)**
Returns a vector obtained by subtracting the value of the unique element of the vector `v2` from the unique element of the vector `v1`.
**setequal(v1,v2\)**
Returns true if the unique element of vector `v1` is equal to the unique element of vector `v2`.
**intersect(v1,v2\)**
Returns a vector containing elements contained in both the unique element of vector `v1` and the unique element of vector `v2`.
**union\_(v1,v2\)**
Return vector which eliminated value duplication after combining elements of vector `v1` and vector `v2`.
### 21\.1\.6 Functions related to maximum and minimum values
**min(v)**
Returns the minimum value of the vector `v`.
**max(v)**
Returns the maximum value of the vector `v`.
**cummin(v)**
Returns the cumulative minimum elements of vector `v`
**cummax(v)**
Returns the cumulative maximum elements of vector `v`
**pmin(v1,v2\)**
Compares the corresponding elements of vectors `v1` and `v2`, and return a vector containing the smaller elements.
**pmax(v1,v2\)**
Compares the corresponding elements of vectors `v1` and `v2`, and return a vector containing the larger elements.
**range(v)**
Returns a vector consisting of the minimum and maximum values of vector `v`.
**clamp(min, v, max)**
Returns a vector that the elements of vector `v` smaller than `min` is replaced with `min` and the elements larger than `max` is replaced with `max`.
### 21\.1\.7 Functions related to summaries
**sum(v)**
Returns the sum of the elements of vector `v`.
**mean(v)**
Returns the arithmetic mean of the elements of vector `v`.
**median(v)**
Returns the median value of the elements of vector `v`.
**sd(v)**
Returns the standard deviation of the elements of vector `v`.
**var(v)**
Returns the variance of the elements of vector `v`.
**cumsum(v)**
Returns the cumulative sum of the elements of vector `v`
**cumprod(v)**
Returns the cumulative product of the elements of vector `v`
**table(v)**
Returns a named integer vector that counts the number of elements for each unique element of vector `v`.
### 21\.1\.8 Functions related to rounding values
**floor(v)**
Returns a vector containing the largest integer not greater than each element of vector `v`.
**ceil(v)**
Returns a vector containing the largest integer not smaller than each element of vector v.
**ceiling(v)**
Synonymous with `ceil()`.
**round(v, digits)**
Returns a vector obtained by rounding each element of the vector `v` with the number of significant figure `digits`.
**trunc(v)**
Returns a vector with rounded down decimal places.
### 21\.1\.9 Functions related to math
**sign(v)**
Returns a vector with the signs of the corresponding elements of `v` (the sign of a real number is 1, 0, or \-1 if the number is positive, zero, or negative, respectively).
**abs(v)**
Returns a vector containing the absolute value of each element of vector `v`.
**pow(v, n)**
Returns a vector by raising each element of the vector `v` to the `n`th power.
**sqrt(v)**
Returns a vector containing square root of each element of vector v.
**exp(v)**
Returns a vector by raising Napier number (e) to the power of value of each element of the vector v.
**expm1(v)**
Synonymous with `exp(v) - 1`
**log(v)**
Returns a vector containing the natural logarithmic of each element of vector `v`.
**log10(v)**
Returns a vector containing the common logarithmic of each element of vector `v`.
**log1p(v)**
Synonymous with `log(v+1)`
**sin(v)**
Returns a vector containing sine of each element of vector `v`.
**sinh(v)**
Returns a vector containing hyperbolic sine of each element of vector `v`.
**cos(v)**
Returns a vector containing cosine of each element of vector `v`.
**cosh(v)**
Returns a vector containing hyperbolic cosine of each element of vector `v`.
**tan(v)**
Returns a vector containing tangent of each element of vector `v`.
**tanh(v)**
Returns a vector containing hyperbolic tangent of each element of vector `v`.
**acos(v)**
Returns a vector containing arccosine of each element of vector `v`.
**asin(v)**
Returns a vector containing arcsine of each element of vector `v`.
**atan(v)**
Returns a vector containing arctangent of each element of vector `v`.
**gamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the gamma function.
**lgamma(v)**
Synonymous with `log(gamma(v))`
**digamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the first derivative function of `lgamma()`.
**trigamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the second derivative function of `lgamma()`.
**tetragamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the third derivative function of `lgamma()`.
**pentagamma(v)**
Returns a vector obtained by transforming each element of vector `v` with the fourth derivative function of `lgamma()`.
**psigamma(v, deriv)**
Returns a vector obtained by transforming each element of vector `v` with the `deriv`\-th derivative function of `lgamma()`.
**factrial(v)**
Returns a vector containing the factorial of the each element of vector `v`.
**lfactorial(v)**
Synonymous with `log(factrial(v))`
**choose(vn, vk)**
Returns a vector obtained by calculating binomial coefficients using the corresponding elements of real vector `vn` and integer vector `vk`.
**lchoose(vn, vk)**
Synonymous with `log(choose(vn, vk))`
**beta(va, vb)**
Returns a vector obtained by calculating the value of the beta function using the corresponding elements of vectors `va` and `vb`.
**lbeta(va, vb)**
Synonymous with `log(beta(va, vb))`
### 21\.1\.10 Functions related to logical values
**all(v)**
Receives a `LogicalVector` `v` and returns `SingleLogicalResult` type meaning`TRUE` when all of elements are `TRUE`.
**any(v)**
Receives a `LogicalVector` `v` and returns `SingleLogicalResult` type meaning`TRUE` when any of elements are `TRUE`.
**is\_true(x)**
Receives a return value of `all()` or `any()` and returns bool type `true` if it means `TRUE`.
**is\_false(x)**
Receives a return value of `all()` or `any()` and returns bool type `true` if it means `FALSE`.
**is\_na(x)**
Receives a return value of `all()` or `any()` and returns bool type `true` if it means `NA`.
**ifelse(v, x1, x2\)**
Receives the `LogicalVector` `v`, and returns a vector containing the corresponding element of x1 or x2 when the element of v is `TRUE` or `FALSE`, respectively. Although `x1` and `x2` can be vectors or scalars, the length of the vector needs to be equal to `v`.
### 21\.1\.11 Functions related to NA Inf NaN
**na\_omit(v)**
Returns a vector which deleted `NA` from vector `v`.
**is\_finite(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is not `Inf` nor `-Inf` nor `NA`.
**is\_infinite(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is `Inf` or `-Inf` or `NA`.
**is\_na(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is `NA` or `NaN`.
**is\_nan(v)**
Returns a `LogicalVector` containing `TRUE` if corresponding elements of vector `v` is `NaN`.
### 21\.1\.12 apply functions
**lapply(x, fun)**
Applies a C\+\+ function `fun` to each element of the vector `x` and returns the result as `List`.
**sapply(x, fun)**
Applies a C\+\+ function `fun` to each element of the vector `x` and returns the result as `Vector`.
**mapply(x1, x2, fun2\)**
Applies a C\+\+ function `fun` that receives two arguments to each corresponding elements of the vector `x1` and `x2` and returns the result as `Vector`.
**mapply(x1, x2, x3, fun3\)**
Applies a C\+\+ function `fun` that receives three arguments to each corresponding elements of the vector `x1` and `x2`, `x3` and returns the result as `Vector`.
### 21\.1\.13 cbind function
**cbind(x1, x2,…)**
Takes `Vector` or `Matrix` `x1` and `x2`, `...` and combine by columns. And returns the result as `Matrix` or `DataFrame`. You can pass up to 50 arguments.
### 21\.1\.14 sampling
**Vector sample(Vector x, int size, replace \= false, probs \= R\_NilValue)**
As with the `sample` function in R, this function draws a sample from a vector `x`.
* `x` : the vector to draw the sample from.
* `size` : the number of elements in the returned vector.
* `replace` : whether sampling should be done with replacement. Default `false`.
* `probs` : a vector of probability weights for the elements of vector `x`. Default `R_NilValue` (equal weight for all elements).
`Vector<RTYPE> sample(const Vector<RTYPE>& x, int size, bool replace = false, sugar::probs_t probs = R_NilValue)`
**sample(n, size, replace \= FALSE, probs \= NULL, one\_based \= TRUE)**
As with the `sample.int` function in R, this function draws a sample of integers from `1` to `n`.
`Vector<INTSXP> sample(int n, int size, bool replace = false, sugar::probs_t probs = R_NilValue, bool one_based = true);`
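The two forms can be used as in the minimal sketch below (the function names are ours, not part of Rcpp; `sample()` requires a reasonably recent Rcpp version):
```
#include <Rcpp.h>
using namespace Rcpp;

// example helpers, not part of Rcpp
// [[Rcpp::export]]
CharacterVector draw_three(CharacterVector x) {
    // three elements without replacement, equal weights
    return sample(x, 3, false);
}

// [[Rcpp::export]]
IntegerVector random_permutation(int n) {
    // like sample.int(n, n): a permutation of the integers 1..n
    return sample(n, n);
}
```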
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/220_dpqr_functions.html |
Chapter 22 Probability distribution
===================================
Rcpp provides all of the major probability distribution functions available in R. As in R, four functions whose names start with d/p/q/r are defined for each probability distribution.
The d/p/q/r functions for a probability distribution XXX:
* dXXX: Probability density function
* pXXX: Cumulative distribution function
* qXXX: Quantile function
* rXXX: Random number generation function
22\.1 Basic structure of probability distribution function
----------------------------------------------------------
In Rcpp, probability distribution functions with the same name are defined in two namespaces, `R::` and `Rcpp::`. The difference is that the functions defined in the `Rcpp::` namespace take and return vectors, while those in the `R::` namespace take and return scalars. The probability distribution functions defined in the `Rcpp::` namespace essentially have the same functionality as those in R, so normally you can just use them; however, when you only need a single scalar value, the `R::` version is preferable because it is faster. A small example comparing the two namespaces follows the argument list below.
The basic structure of the probability distribution functions defined in the `Rcpp::` namespace is shown below. The definitions in the `Rcpp::` namespace are not written out directly in the Rcpp source code (they are generated by macros), but you can assume that the functions are defined as follows.
```
NumericVector Rcpp::dXXX( NumericVector x, double par, bool log = false )
NumericVector Rcpp::pXXX( NumericVector q, double par, bool lower = true, bool log = false )
NumericVector Rcpp::qXXX( NumericVector p, double par, bool lower = true, bool log = false )
NumericVector Rcpp::rXXX( int n, double par )
```
The basic structure of the probability distribution functions defined in the `R::` namespace is shown below.
They have essentially the same functionality as those defined in the `Rcpp::` namespace, except that they accept and return `double` values. In addition, the arguments have no default values, so the user must supply every argument explicitly.
```
double R::dXXX( double x, double par, int log )
double R::pXXX( double q, double par, int lower, int log )
double R::qXXX( double p, double par, int lower, int log )
double R::rXXX( double par )
```
The arguments of the probability distribution functions are described below.
* **x, q** : random variable
* **p** : probability
* **n** : number of observations
* **par** : parameter (the number of distribution parameters varies depending on the distribution)
* **lower** : `true` : calculate the probability of the region where the random variable is less than or equal to x; `false` : calculate the probability of the region where it is greater than x
* **log** : `true` : probabilities p are given (or returned) as log(p)
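As an illustration of the difference between the two namespaces, the sketch below computes lower-tail probabilities of the standard normal distribution once as a vector with `Rcpp::pnorm()` and once as a scalar with `R::pnorm()` (the function names are ours, not part of Rcpp):
```
#include <Rcpp.h>
using namespace Rcpp;

// example helpers, not part of Rcpp
// [[Rcpp::export]]
NumericVector lower_tail_vec(NumericVector q) {
    // vectorised: one probability per element of q
    return Rcpp::pnorm(q, 0.0, 1.0);
}

// [[Rcpp::export]]
double lower_tail_scalar(double q) {
    // scalar: every argument must be given explicitly
    return R::pnorm(q, 0.0, 1.0, 1, 0);   // lower = 1, log = 0
}
```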
22\.2 List of probability distribution functions
------------------------------------------------
The list of probability distribution functions provided by Rcpp is shown below. The names of the distribution parameters match those of R, so refer to the R help pages for details.
### 22\.2\.1 Continuous probability distribution
* [Uniform distribution](220_dpqr_functions.html#uniform-distribution)
* [Normal distribution](220_dpqr_functions.html#normal-distribution)
* [Log\-normal distribution](220_dpqr_functions.html#log-normal-distribution)
* [Gamma distribution](220_dpqr_functions.html#gamma-distribution)
* [Beta distribution](220_dpqr_functions.html#beta-distribution)
* [Noncentral beta distribution](220_dpqr_functions.html#noncentral-beta-distribution)
* [Chi\-squared distribution](220_dpqr_functions.html#chi-squared-distribution)
* [Noncentral chi\-squared distribution](220_dpqr_functions.html#noncentral-chi-squared-distribution)
* [t\-distribution](220_dpqr_functions.html#t-distribution)
* [Noncentral t\-distribution](220_dpqr_functions.html#noncentral-t-distribution)
* [F\-distribution](220_dpqr_functions.html#f-distribution)
* [Noncentral F\-distribution](220_dpqr_functions.html#noncentral-f-distribution)
* [Cauchy distribution](220_dpqr_functions.html#cauchy-distribution)
* [Exponential distribution](220_dpqr_functions.html#exponential-distribution)
* [Logistic distribution](220_dpqr_functions.html#logistic-distribution)
* [Weibull distribution](220_dpqr_functions.html#weibull-distribution)
### 22\.2\.2 Discrete probability distribution
* [Binomial distribution](220_dpqr_functions.html#binomial-distribution)
* [Negative binomial distribution (with success probability as parameter)](220_dpqr_functions.html#negative-binomial-distribution-with-success-probability-as-parameter)
* [Negative binomial distribution (with mean as parameter)](220_dpqr_functions.html#negative-binomial-distribution-with-mean-as-parameter)
* [Poisson distribution](220_dpqr_functions.html#poisson-distribution)
* [Geometric distribution](220_dpqr_functions.html#geometric-distribution)
* [Hypergeometric distribution](220_dpqr_functions.html#hypergeometric-distribution)
* [Distribution of Wilcoxon rank\-sum test statistic](220_dpqr_functions.html#distribution-of-wilcoxon-rank-sum-test-statistic)
* [Distribution of Wilcoxon signed\-rank test statistic](220_dpqr_functions.html#distribution-of-wilcoxon-signed-rank-test-statistic)
22\.3 Continuous probability distribution
-----------------------------------------
### 22\.3\.1 Uniform distribution
These functions provide information about the uniform distribution on the interval from `min` to `max`.
```
Rcpp::dunif( x, min = 0.0, max = 1.0, log = false )
Rcpp::punif( q, min = 0.0, max = 1.0, lower = true, log = false )
Rcpp::qunif( p, min = 0.0, max = 1.0, lower = true, log = false )
Rcpp::runif( n, min = 0.0, max = 1.0 )
R::dunif( x, min, max, log )
R::punif( q, min, max, lower, log )
R::qunif( p, min, max, lower, log )
R::runif( min, max )
```
### 22\.3\.2 Normal distribution
These functions provide information about the normal distribution with mean equal to `mean` and standard deviation equal to `sd`.
```
Rcpp::dnorm( x, mean = 0.0, sd = 1.0, log = false )
Rcpp::pnorm( q, mean = 0.0, sd = 1.0, lower = true, log = false )
Rcpp::qnorm( p, mean = 0.0, sd = 1.0, lower = true, log = false )
Rcpp::rnorm( n, mean = 0.0, sd = 1.0 )
R::dnorm( x, mean, sd, log )
R::pnorm( q, mean, sd, lower, log )
R::qnorm( p, mean, sd, lower, log )
R::rnorm( mean, sd )
```
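For example, a minimal sketch that evaluates the standard normal density for a vector of points and draws a few random values (the function name `normal_demo` is ours, not part of Rcpp):
```
#include <Rcpp.h>
using namespace Rcpp;

// example helper, not part of Rcpp
// [[Rcpp::export]]
List normal_demo(NumericVector x) {
    NumericVector dens  = Rcpp::dnorm(x, 0.0, 1.0);  // densities, element-wise
    NumericVector draws = Rcpp::rnorm(5, 0.0, 1.0);  // five N(0, 1) random numbers
    return List::create(_["density"] = dens, _["draws"] = draws);
}
```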
### 22\.3\.3 Log\-normal distribution
These functions provide information about the log\-normal distribution whose logarithm has mean equal to `meanlog` and standard deviation equal to `sdlog`.
```
Rcpp::dlnorm( x, meanlog = 0.0, sdlog = 1.0, log = false )
Rcpp::plnorm( q, meanlog = 0.0, sdlog = 1.0, lower = true, log = false )
Rcpp::qlnorm( p, meanlog = 0.0, sdlog = 1.0, lower = true, log = false )
Rcpp::rlnorm( n, meanlog = 0.0, sdlog = 1.0 )
R::dlnorm( x, meanlog, sdlog, log )
R::plnorm( q, meanlog, sdlog, lower, log )
R::qlnorm( p, meanlog, sdlog, lower, log )
R::rlnorm( meanlog, sdlog )
```
### 22\.3\.4 Gamma distribution
These functions provide information about the Gamma distribution with parameters `shape` and `scale`.
```
Rcpp::dgamma( x, shape, scale = 1.0, log = false )
Rcpp::pgamma( q, shape, scale = 1.0, lower = true, log = false )
Rcpp::qgamma( p, shape, scale = 1.0, lower = true, log = false )
Rcpp::rgamma( n, shape, scale = 1.0 )
R::dgamma( x, shape, scale, log )
R::pgamma( x, shape, scale, lower, log )
R::qgamma( q, shape, scale, lower, log )
R::rgamma( shape, scale )
```
### 22\.3\.5 Beta distribution
These functions provide information about the Beta distribution with parameters `shape1` and `shape2`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dbeta( x, shape1, shape2, log = false )
Rcpp::pbeta( x, shape1, shape2, lower = true, log = false )
Rcpp::qbeta( q, shape1, shape2, lower = true, log = false )
Rcpp::rbeta( n, shape1, shape2)
R::dbeta( x, shape1, shape2, log )
R::pbeta( x, shape1, shape2, lower, log )
R::qbeta( q, shape1, shape2, lower, log )
R::rbeta( shape1, shape2 )
```
### 22\.3\.6 Noncentral beta distribution
These functions provide information about the Noncentral beta distribution with parameters `shape1` and `shape2` and noncentrality parameter `ncp`. These functions are equivalent to setting a non-zero value for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dnbeta( x, shape1, shape2, ncp, log = false );
Rcpp::pnbeta( x, shape1, shape2, ncp, lower = true, log = false );
Rcpp::qnbeta( q, shape1, shape2, ncp, lower = true, log = false );
// Rcpp::rnbeta() does not exist
R::dnbeta( x, shape1, shape2, ncp, log )
R::pnbeta( x, shape1, shape2, ncp, lower, log )
R::qnbeta( q, shape1, shape2, ncp, lower, log )
R::rnbeta( shape1, shape2, ncp )
```
### 22\.3\.7 Chi\-squared distribution
These functions provide information about the Chi\-squared distribution with degrees of freedom `df`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Chi\-squared distribution function in R.
```
Rcpp::dchisq( x, df, log = false )
Rcpp::pchisq( x, df, lower = true, log = false )
Rcpp::qchisq( q, df, lower = true, log = false )
Rcpp::rchisq( n, df)
R::dchisq( x, df, log )
R::pchisq( x, df, lower, log )
R::qchisq( q, df, lower, log )
R::rchisq( df )
```
### 22\.3\.8 Noncentral chi\-squared distribution
These functions provide information about the Noncentral chi\-squared distribution with degrees of freedom `df` and noncentrality parameter `ncp`. These functions are equivalent to setting a non-zero value for the noncentrality parameter `ncp` in the Chi\-squared distribution function in R.
```
Rcpp::dnchisq( x, df, ncp, log = false )
Rcpp::pnchisq( x, df, ncp, lower = true, log = false )
Rcpp::qnchisq( q, df, ncp, lower = true, log = false )
Rcpp::rnchisq( n, df, ncp = 0.0 )
R::dnchisq( x, df, ncp, log )
R::pnchisq( x, df, ncp, lower, log )
R::qnchisq( q, df, ncp, lower, log )
R::rnchisq( df, ncp )
```
### 22\.3\.9 t\-distribution
These functions provide information about the t\-distribution with degrees of freedom `df`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the t\-distribution function in R.
```
Rcpp::dt( x, df, log = false )
Rcpp::pt( x, df, lower = true, log = false )
Rcpp::qt( q, df, lower = true, log = false )
Rcpp::rt( n, df )
R::dt( x, df, log )
R::pt( x, df, lower, log )
R::qt( q, df, lower, log )
R::rt( df )
```
### 22\.3\.10 Noncentral t\-distribution
These functions provide information about the Noncentral t\-distribution with degrees of freedom `df` and noncentrality parameter `ncp`. These functions are equivalent to setting a non-zero value for the noncentrality parameter `ncp` in the t\-distribution function in R.
```
Rcpp::dnt( x, df, ncp, log = false )
Rcpp::pnt( x, df, ncp, lower = true, log = false )
Rcpp::qnt( q, df, ncp, lower = true, log = false )
// Rcpp::rnt() does not exist.
R::dnt( x, df, ncp, log )
R::pnt( x, df, ncp, lower, log )
R::qnt( q, df, ncp, lower, log )
// R::rnt() does not exist.
```
### 22\.3\.11 F\-distribution
These functions provide information about the F\-distribution with degrees of freedom `df1` and `df2`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the F\-distribution function in R.
```
Rcpp::df( x, df1, df2, log = false )
Rcpp::pf( x, df1, df2, lower = true, log = false )
Rcpp::qf( q, df1, df2, lower = true, log = false )
Rcpp::rf( n, df1, df2 )
R::df( x, df1, df2, log )
R::pf( x, df1, df2, lower, log )
R::qf( q, df1, df2, lower, log )
R::rf( df1, df2 )
```
### 22\.3\.12 Noncentral F\-distribution
These functions provide information about the Noncentral F\-distribution with degrees of freedom `df1` and `df2` and noncentrality parameter `ncp`. These functions are equivalent to setting a non-zero value for the noncentrality parameter `ncp` in the F\-distribution function in R.
```
Rcpp::dnf( x, df1, df2, ncp, log = false )
Rcpp::pnf( x, df1, df2, ncp, lower = true, log = false )
Rcpp::qnf( q, df1, df2, ncp, lower = true, log = false )
// Rcpp::rnf() does not exist.
R::dnf( x, df1, df2, ncp, log )
R::pnf( x, df1, df2, ncp, lower, log )
R::qnf( q, df1, df2, ncp, lower, log )
// R::rnf() does not exist.
```
### 22\.3\.13 Cauchy distribution
These functions provide information about the Cauchy distribution with location parameter `location` and scale parameter `scale`.
```
Rcpp::dcauchy( x, location = 0.0, scale = 1.0, log = false )
Rcpp::pcauchy( x, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::qcauchy( q, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::rcauchy( n, location = 0.0, scale = 1.0)
R::dcauchy( x, location, scale, log )
R::pcauchy( x, location, scale, lower, log )
R::qcauchy( q, location, scale, lower, log )
R::rcauchy( location, scale )
```
### 22\.3\.14 Exponential distribution
These functions provide information about the Exponential distribution with rate `rate` (the mean of the Exponential distribution equals 1/`rate`).
```
Rcpp::dexp( x, rate = 1.0, log = false )
Rcpp::pexp( x, rate = 1.0, lower = true, log = false )
Rcpp::qexp( q, rate = 1.0, lower = true, log = false )
Rcpp::rexp( n, rate = 1.0)
// The R:: namespace versions are parameterised in terms of the scale (= 1/rate)
R::dexp( x, scale, log )
R::pexp( x, scale, lower, log )
R::qexp( q, scale, lower, log )
R::rexp( scale )
```
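Note the different parameterisations. The minimal sketch below compares them (the function name `exp_density_compare` is ours, not part of Rcpp); the two results should agree element-wise.
```
#include <Rcpp.h>
using namespace Rcpp;

// example helper, not part of Rcpp
// [[Rcpp::export]]
NumericVector exp_density_compare(NumericVector x, double rate) {
    // Rcpp:: version takes the rate directly
    NumericVector d_rate = Rcpp::dexp(x, rate);
    // R:: version takes the scale (= 1/rate) and has no default arguments
    NumericVector d_scale(x.size());
    for (int i = 0; i < x.size(); ++i) {
        d_scale[i] = R::dexp(x[i], 1.0 / rate, 0);   // log = 0
    }
    return d_rate - d_scale;   // should be (numerically) all zeros
}
```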
### 22\.3\.15 Logistic distribution
These functions provide information about the Logistic distribution with parameters `location` and `scale`.
```
Rcpp::dlogis( x, location = 0.0, scale = 1.0, log = false )
Rcpp::plogis( x, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::qlogis( q, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::rlogis( n, location = 0.0, scale = 1.0 )
R::dlogis( x, location, scale, log )
R::plogis( x, location, scale, lower, log )
R::qlogis( q, location, scale, lower, log )
R::rlogis( location, scale )
```
### 22\.3\.16 Weibull distribution
These functions provide information about the Weibull distribution with parameters `shape` and `scale`.
```
Rcpp::dweibull( x, shape, scale = 1.0, log = false )
Rcpp::pweibull( x, shape, scale = 1.0, lower = true, log = false )
Rcpp::qweibull( q, shape, scale = 1.0, lower = true, log = false )
Rcpp::rweibull( n, shape, scale = 1.0 )
R::dweibull( x, shape, scale, log )
R::pweibull( x, shape, scale, lower, log )
R::qweibull( q, shape, scale, lower, log )
R::rweibull( shape, scale )
```
22\.4 Discrete probability distribution
---------------------------------------
### 22\.4\.1 Binomial distribution
These functions provide information about the Binomial distribution with number of trials `size` and success probability `prob`.
```
Rcpp::dbinom( x, size, prob, log = false )
Rcpp::pbinom( x, size, prob, lower = true, log = false )
Rcpp::qbinom( q, size, prob, lower = true, log = false )
Rcpp::rbinom( n, size, prob )
R::dbinom( x, size, prob, log )
R::pbinom( x, size, prob, lower, log )
R::qbinom( q, size, prob, lower, log )
R::rbinom( size, prob )
```
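For example, a minimal sketch computing the probability mass function of Binomial(5, 0.5) and drawing a few random values (the function name `binom_demo` is ours, not part of Rcpp):
```
#include <Rcpp.h>
using namespace Rcpp;

// example helper, not part of Rcpp
// [[Rcpp::export]]
List binom_demo() {
    NumericVector x   = NumericVector::create(0, 1, 2, 3, 4, 5);
    NumericVector pmf = Rcpp::dbinom(x, 5, 0.5);     // P(X = x) for Binomial(5, 0.5)
    NumericVector rnd = Rcpp::rbinom(10, 5, 0.5);    // ten random draws
    return List::create(_["pmf"] = pmf, _["draws"] = rnd);
}
```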
### 22\.4\.2 Negative binomial distribution (with success probability as parameter)
These functions provide information about the Negative binomial distribution with target number of successes `size` and success probability `prob`.
```
Rcpp::dnbinom( x, size, prob, log = false )
Rcpp::pnbinom( x, size, prob, lower = true, log = false )
Rcpp::qnbinom( q, size, prob, lower = true, log = false )
Rcpp::rnbinom( n, size, prob )
R::dnbinom( x, size, prob, log )
R::pnbinom( x, size, prob, lower, log )
R::qnbinom( q, size, prob, lower, log )
R::rnbinom( size, prob )
```
### 22\.4\.3 Negative binomial distribution (with mean as parameter)
These functions provide information about the Negative binomial distribution with target number of successes `size` and mean `mu`.
```
Rcpp::dnbinom_mu( x, size, mu, log = false )
Rcpp::pnbinom_mu( x, size, mu, lower = true, log = false )
Rcpp::qnbinom_mu( q, size, mu, lower = true, log = false )
Rcpp::rnbinom_mu( n, size, mu )
R::dnbinom_mu( x, size, mu, log )
R::pnbinom_mu( x, size, mu, lower, log )
R::qnbinom_mu( q, size, mu, lower, log )
R::rnbinom_mu( size, mu )
```
### 22\.4\.4 Poisson distribution
These functions provide information about the Poisson distribution with mean and variance equal to `lambda`.
```
Rcpp::dpois( x, lambda, log = false )
Rcpp::ppois( x, lambda, lower = true, log = false )
Rcpp::qpois( q, lambda, lower = true, log = false )
Rcpp::rpois( n, lambda )
R::dpois( x, lambda, log )
R::ppois( x, lambda, lower, log )
R::qpois( q, lambda, lower, log )
R::rpois( lambda )
```
### 22\.4\.5 Geometric distribution
These functions provide information about the Geometric distribution with success probability `prob`.
```
Rcpp::dgeom( x, prob, log = false )
Rcpp::pgeom( x, prob, lower = true, log = false )
Rcpp::qgeom( q, prob, lower = true, log = false )
Rcpp::rgeom( n, prob )
R::dgeom( x, prob, log )
R::pgeom( x, prob, lower, log )
R::qgeom( q, prob, lower, log )
R::rgeom( prob )
```
### 22\.4\.6 Hypergeometric distribution
These functions provide information about the Hypergeometric distribution with the number of successes in the population `m`, the number of failures in the population `n`, and the number of samples drawn from the population `k`.
```
Rcpp::dhyper( x, m, n, k, log = false )
Rcpp::phyper( x, m, n, k, lower = true, log = false )
Rcpp::qhyper( q, m, n, k, lower = true, log = false )
Rcpp::rhyper(nn, m, n, k )
R::dhyper( x, m, n, k, log )
R::phyper( x, m, n, k, lower, log )
R::qhyper( q, m, n, k, lower, log )
R::rhyper( m, n, k )
```
### 22\.4\.7 Distribution of Wilcoxon rank\-sum test statistic
These functions provide information about the distribution of the test statistic when the Wilcoxon rank\-sum test (Mann–Whitney U test) is performed on two samples of sizes `m` and `n`, respectively.
```
// Rcpp::dwilcox() does not exist.
// Rcpp::pwilcox() does not exist.
// Rcpp::qwilcox() does not exist.
Rcpp::rwilcox( nn, m, n );
R::dwilcox( x, m, n, log )
R::pwilcox( x, m, n, lower, log )
R::qwilcox( q, m, n, lower, log )
R::rwilcox( m, n )
```
### 22\.4\.8 Distribution of Wilcoxon signed\-rank test statistic
These functions provide information about the distribution of the test statistic when the Wilcoxon signed\-rank test is performed with sample size `n`.
```
// Rcpp::dsignrank() does not exist.
// Rcpp::psignrank() does not exist.
// Rcpp::qsignrank() does not exist.
Rcpp::rsignrank( nn, n )
R::dsignrank( x, n, log )
R::psignrank( x, n, lower, log )
R::qsignrank( q, n, lower, log )
R::rsignrank( n )
```
22\.1 Basic structure of probability distribution function
----------------------------------------------------------
In Rcpp, probability distribution functions with the same name are defined in two namespaces, `R::` and `Rcpp::`. These differences are that the function defined in `Rcpp::` namespace returns a vector, while the function in the `R::` namespace returns a scalar. Basically, the probability distribution functions defined in the `Rcpp::` namespace has the same functionalities as those in R. So normally you can use a function in the `Rcpp::` namespace, but if you want a scalar value, it is better to use that function in `R::` namespace because it’s faster.
The basic structures of the probability distribution functions defined in the `Rcpp::` namespace are shown below. In fact, the definition of the probability distribution function of the `Rcpp ::` namespace is not written directly in the source code of Rcpp (because it is written using macros). But you can assume that the function is defined like as below.
```
NumericVector Rcpp::dXXX( NumericVector x, double par, bool log = false )
NumericVector Rcpp::pXXX( NumericVector q, double par, bool lower = true, bool log = false )
NumericVector Rcpp::qXXX( NumericVector p, double par, bool lower = true, bool log = false )
NumericVector Rcpp::rXXX( int n, double par )
```
The basic structures of the probability distribution functions defined in the `R::` namespace are shown below.
It basically has the same functionality as those defined in the `Rcpp::` namespace except that it accepts and returns `double` type. In addition, the arguments of the function do not have default values, so user must give the value explicitly.
```
double R::dXXX( double x, double par, int log )
double R::pXXX( double q, double par, int lower, int log )
double R::qXXX( double p, double par, int lower, int log )
double R::rXXX( double par )
```
The arguments of the probability distribution function are described below.
* **x, q** : random variable
* **p** : probability
* **n** : number of observation
* **par** : Parameter(the number of distribution parameters varies depending on the distribution)
* **lower** : `true` : Calculate the probability of the region where the random variable is less than or equal to x, `false` : Calculate the probability of the region larger than x
* **log** : true : probabilities p are given as log(p)
22\.2 List of probability distribution functions
------------------------------------------------
List of probability distribution functions provided by Rcpp is shown below. Here, the names of the distribution parameters are matched with those of R, so refer to the R help for details.
### 22\.2\.1 Continuous probability distribution
* [Uniform distribution](220_dpqr_functions.html#uniform-distribution)
* [Normal distribution](220_dpqr_functions.html#normal-distribution)
* [Log\-normal distribution](220_dpqr_functions.html#log-normal-distribution)
* [Gamma distribution](220_dpqr_functions.html#gamma-distribution)
* [Beta distribution](220_dpqr_functions.html#beta-distribution)
* [Noncentral beta distribution](220_dpqr_functions.html#noncentral-beta-distribution)
* [Chi\-squared distribution](220_dpqr_functions.html#chi-squared-distribution)
* [Noncentral chi\-squared distribution](220_dpqr_functions.html#noncentral-chi-squared-distribution)
* [t\-distribution](220_dpqr_functions.html#t-distribution)
* [Noncentral t\-distribution](220_dpqr_functions.html#noncentral-t-distribution)
* [F\-distribution](220_dpqr_functions.html#f-distribution)
* [Noncentral F\-distribution](220_dpqr_functions.html#noncentral-f-distribution)
* [Cauchy distribution](220_dpqr_functions.html#cauchy-distribution)
* [Exponential distribution](220_dpqr_functions.html#exponential-distribution)
* [Logistic distribution](220_dpqr_functions.html#logistic-distribution)
* [Weibull distribution](220_dpqr_functions.html#weibull-distribution)
### 22\.2\.2 Discrete probability distribution
* [Binomial distribution](220_dpqr_functions.html#binomial-distribution)
* [Negative binomial distribution (with success probability as parameter)](220_dpqr_functions.html#negative-binomial-distribution-with-success-probability-as-parameter))
* [Negative binomial distribution (with mean as parameter)](220_dpqr_functions.html#negative-binomial-distribution-with-mean-as-parameter)
* [Poisson distribution](220_dpqr_functions.html#poisson-distribution)
* [Geometric distribution](220_dpqr_functions.html#geometric-distribution)
* [Hypergeometric distribution](#Hypergeometric%20distribution)
* [Distribution of Wilcoxon rank\-sum test statistic](220_dpqr_functions.html#distribution-of-wilcoxon-rank-sum-test-statistic)
* [Distribution of Wilcoxon signed\-rank test statistic](220_dpqr_functions.html#distribution-of-wilcoxon-signed-rank-test-statistic)
### 22\.2\.1 Continuous probability distribution
* [Uniform distribution](220_dpqr_functions.html#uniform-distribution)
* [Normal distribution](220_dpqr_functions.html#normal-distribution)
* [Log\-normal distribution](220_dpqr_functions.html#log-normal-distribution)
* [Gamma distribution](220_dpqr_functions.html#gamma-distribution)
* [Beta distribution](220_dpqr_functions.html#beta-distribution)
* [Noncentral beta distribution](220_dpqr_functions.html#noncentral-beta-distribution)
* [Chi\-squared distribution](220_dpqr_functions.html#chi-squared-distribution)
* [Noncentral chi\-squared distribution](220_dpqr_functions.html#noncentral-chi-squared-distribution)
* [t\-distribution](220_dpqr_functions.html#t-distribution)
* [Noncentral t\-distribution](220_dpqr_functions.html#noncentral-t-distribution)
* [F\-distribution](220_dpqr_functions.html#f-distribution)
* [Noncentral F\-distribution](220_dpqr_functions.html#noncentral-f-distribution)
* [Cauchy distribution](220_dpqr_functions.html#cauchy-distribution)
* [Exponential distribution](220_dpqr_functions.html#exponential-distribution)
* [Logistic distribution](220_dpqr_functions.html#logistic-distribution)
* [Weibull distribution](220_dpqr_functions.html#weibull-distribution)
### 22\.2\.2 Discrete probability distribution
* [Binomial distribution](220_dpqr_functions.html#binomial-distribution)
* [Negative binomial distribution (with success probability as parameter)](220_dpqr_functions.html#negative-binomial-distribution-with-success-probability-as-parameter))
* [Negative binomial distribution (with mean as parameter)](220_dpqr_functions.html#negative-binomial-distribution-with-mean-as-parameter)
* [Poisson distribution](220_dpqr_functions.html#poisson-distribution)
* [Geometric distribution](220_dpqr_functions.html#geometric-distribution)
* [Hypergeometric distribution](#Hypergeometric%20distribution)
* [Distribution of Wilcoxon rank\-sum test statistic](220_dpqr_functions.html#distribution-of-wilcoxon-rank-sum-test-statistic)
* [Distribution of Wilcoxon signed\-rank test statistic](220_dpqr_functions.html#distribution-of-wilcoxon-signed-rank-test-statistic)
22\.3 Continuous probability distribution
-----------------------------------------
### 22\.3\.1 Uniform distribution
These functions provide information about the uniform distribution on the interval from `min` to `max`.
```
Rcpp::dunif( x, min = 0.0, max = 1.0, log = false )
Rcpp::punif( q, min = 0.0, max = 1.0, lower = true, log = false )
Rcpp::qunif( p, min = 0.0, max = 1.0, lower = true, log = false )
Rcpp::runif( n, min = 0.0, max = 1.0 )
R::dunif( x, min, max, log )
R::punif( q, min, max, lower, log )
R::qunif( p, min, max, lower, log )
R::runif( min, max )
```
### 22\.3\.2 Normal distribution
These functions provide information about the normal distribution with mean equal to `mean` and standard deviation equal to `sd`.
```
Rcpp::dnorm( x, mean = 0.0, sd = 1.0, log = false )
Rcpp::pnorm( q, mean = 0.0, sd = 1.0, lower = true, log = false )
Rcpp::qnorm( p, mean = 0.0, sd = 1.0, lower = true, log = false )
Rcpp::rnorm( n, mean = 0.0, sd = 1.0 )
R::dnorm( x, mean, sd, log )
R::pnorm( q, mean, sd, lower, log )
R::qnorm( p, mean, sd, lower, log )
R::rnorm( mean, sd )
```
### 22\.3\.3 Log\-normal distribution
These functions provide information about the log\-normal distribution whose logarithm has mean equal to `meanlog` and standard deviation equal to `sdlog`.
```
Rcpp::dlnorm( x, meanlog = 0.0, sdlog = 1.0, log = false )
Rcpp::plnorm( q, meanlog = 0.0, sdlog = 1.0, lower = true, log = false )
Rcpp::qlnorm( p, meanlog = 0.0, sdlog = 1.0, lower = true, log = false )
Rcpp::rlnorm( n, meanlog = 0.0, sdlog = 1.0 )
R::dlnorm( x, meanlog, sdlog, log )
R::plnorm( q, meanlog, sdlog, lower, log )
R::qlnorm( p, meanlog, sdlog, lower, log )
R::rlnorm( meanlog, sdlog )
```
### 22\.3\.4 Gamma distribution
These functions provide information about the Gamma distribution with parameters `shape` and `scale`.
```
Rcpp::dgamma( x, shape, scale = 1.0, log = false )
Rcpp::pgamma( q, shape, scale = 1.0, lower = true, log = false )
Rcpp::qgamma( p, shape, scale = 1.0, lower = true, log = false )
Rcpp::rgamma( n, shape, scale = 1.0 )
R::dgamma( x, shape, scale, log )
R::pgamma( x, shape, scale, lower, log )
R::qgamma( q, shape, scale, lower, log )
R::rgamma( shape, scale )
```
### 22\.3\.5 Beta distribution
These functions provide information about the Beta distribution with parameters `shape1` and `shape2`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dbeta( x, shape1, shape2, log = false )
Rcpp::pbeta( x, shape1, shape2, lower = true, log = false )
Rcpp::qbeta( q, shape1, shape2, lower = true, log = false )
Rcpp::rbeta( n, shape1, shape2)
R::dbeta( x, shape1, shape2, log )
R::pbeta( x, shape1, shape2, lower, log )
R::qbeta( q, shape1, shape2, lower, log )
R::rbeta( shape1, shape2 )
```
### 22\.3\.6 Noncentral beta distribution
These functions provide information about the Noncentral beta distribution with parameters `shape1` and `shape2`, noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dnbeta( x, shape1, shape2, ncp, log = false );
Rcpp::pnbeta( x, shape1, shape2, ncp, lower = true, log = false );
Rcpp::qnbeta( q, shape1, shape2, ncp, lower = true, log = false );
// Rcpp::rnbeta() does not exist
R::dnbeta( x, shape1, shape2, ncp, log )
R::pnbeta( x, shape1, shape2, ncp, lower, log )
R::qnbeta( q, shape1, shape2, ncp, lower, log )
R::rnbeta( shape1, shape2, ncp )
```
### 22\.3\.7 Chi\-squared distribution
These functions provide information about the Chi\-squared distribution with df degrees of freedom `df`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dchisq( x, df, log = false )
Rcpp::pchisq( x, df, lower = true, log = false )
Rcpp::qchisq( q, df, lower = true, log = false )
Rcpp::rchisq( n, df)
R::dchisq( x, df, log )
R::pchisq( x, df, lower, log )
R::qchisq( q, df, lower, log )
R::rchisq( df )
```
### 22\.3\.8 Noncentral chi\-squared distribution
These functions provide information about the Noncentral chi\-squared distribution with df degrees of freedom `df` and noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the Chi\-squared distribution function in R.
```
Rcpp::dnchisq( x, df, ncp, log = false )
Rcpp::pnchisq( x, df, ncp, lower = true, log = false )
Rcpp::qnchisq( q, df, ncp, lower = true, log = false )
Rcpp::rnchisq( n, df, ncp = 0.0 )
R::dnchisq( x, df, ncp, log )
R::pnchisq( x, df, ncp, lower, log )
R::qnchisq( q, df, ncp, lower, log )
R::rnchisq( df, ncp )
```
### 22\.3\.9 t\-distribution
These functions provide information about the t\-distribution with df degrees of freedom `df`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dt( x, df, log = false )
Rcpp::pt( x, df, lower = true, log = false )
Rcpp::qt( q, df, lower = true, log = false )
Rcpp::rt( n, df )
R::dt( x, df, log )
R::pt( x, df, lower, log )
R::qt( q, df, lower, log )
R::rt( df )
```
### 22\.3\.10 Noncentral t\-distribution
These functions provide information about the Noncentral t\-distribution with df degrees of freedom `df` and noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the t\-distribution function in R.
```
Rcpp::dnt( x, df, ncp, log = false )
Rcpp::pnt( x, df, ncp, lower = true, log = false )
Rcpp::qnt( q, df, ncp, lower = true, log = false )
// Rcpp::rnt() does not exist.
R::dnt( x, df, ncp, log )
R::pnt( x, df, ncp, lower, log )
R::qnt( q, df, ncp, lower, log )
// R::rnt() does not exist.
```
### 22\.3\.11 F\-distribution
These functions provide information about the F\-distribution with df degrees of freedom `df1` and `df2`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the F\-distribution function in R.
```
Rcpp::df( x, df1, df2, log = false )
Rcpp::pf( x, df1, df2, lower = true, log = false )
Rcpp::qf( q, df1, df2, lower = true, log = false )
Rcpp::rf( n, df1, df1 )
R::df( x, df1, df2, log )
R::pf( x, df1, df2, lower, log )
R::qf( q, df1, df2, lower, log )
R::rf( df1, df2 )
```
### 22\.3\.12 Noncentral F\-distribution
These functions provide information about the F\-distribution with df degrees of freedom `df1`, `df2` and noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the Noncentral F\-distribution function in R.
```
Rcpp::dnf( x, df1, df2, ncp, log = false )
Rcpp::pnf( x, df1, df2, ncp, lower = true, log = false )
Rcpp::qnf( q, df1, df2, ncp, lower = true, log = false )
// Rcpp::rnf() does not exist.
R::dnf( x, df1, df2, ncp, log )
R::pnf( x, df1, df2, ncp, lower, log )
R::qnf( q, df1, df2, ncp, lower, log )
// R::rnf() does not exist.
```
### 22\.3\.13 Cauchy distribution
These functions provide information about the Cauchy distribution with location parameter `location` and scale parameter `scale`.
```
Rcpp::dcauchy( x, location = 0.0, scale = 1.0, log = false )
Rcpp::pcauchy( x, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::qcauchy( q, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::rcauchy( n, location = 0.0, scale = 1.0)
R::dcauchy( x, location, scale, log )
R::pcauchy( x, location, scale, lower, log )
R::qcauchy( q, location, scale, lower, log )
R::rcauchy( location, scale )
```
### 22\.3\.14 Exponential distribution
These functions provide information about the Exponential distribution with rate `rate` (The mean of Exponential distribution equals to 1/rate).
```
Rcpp::dexp( x, rate = 1.0, log = false )
Rcpp::pexp( x, rate = 1.0, lower = true, log = false )
Rcpp::qexp( q, rate = 1.0, lower = true, log = false )
Rcpp::rexp( n, rate = 1.0)
// The R namespace version are parameterised in terms of the scale (=1/rate)
R::dexp( x, scale, log )
R::pexp( x, scale, lower, log )
R::qexp( q, scale, lower, log )
R::rexp( scale )
```
### 22\.3\.15 Logistic distribution
These functions provide information about the Logistic distribution with parameters `location` and `scale`.
```
Rcpp::dlogis( x, location = 0.0, scale = 1.0, log = false )
Rcpp::plogis( x, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::qlogis( q, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::rlogis( n, location = 0.0, scale = 1.0 )
R::dlogis( x, location, scale, log )
R::plogis( x, location, scale, lower, log )
R::qlogis( q, location, scale, lower, log )
R::rlogis( location, scale )
```
### 22\.3\.16 Weibull distribution
These functions provide information about the Weibull distribution with parameters `shape` and `scale`.
```
Rcpp::dweibull( x, shape, scale = 1.0, log = false )
Rcpp::pweibull( x, shape, scale = 1.0, lower = true, log = false )
Rcpp::qweibull( q, shape, scale = 1.0, lower = true, log = false )
Rcpp::rweibull( n, shape, scale = 1.0 )
R::dweibull( x, shape, scale, log )
R::pweibull( x, shape, scale, lower, log )
R::qweibull( q, shape, scale, lower, log )
R::rweibull( shape, scale )
```
### 22\.3\.1 Uniform distribution
These functions provide information about the uniform distribution on the interval from `min` to `max`.
```
Rcpp::dunif( x, min = 0.0, max = 1.0, log = false )
Rcpp::punif( q, min = 0.0, max = 1.0, lower = true, log = false )
Rcpp::qunif( p, min = 0.0, max = 1.0, lower = true, log = false )
Rcpp::runif( n, min = 0.0, max = 1.0 )
R::dunif( x, min, max, log )
R::punif( q, min, max, lower, log )
R::qunif( p, min, max, lower, log )
R::runif( min, max )
```
### 22\.3\.2 Normal distribution
These functions provide information about the normal distribution with mean equal to `mean` and standard deviation equal to `sd`.
```
Rcpp::dnorm( x, mean = 0.0, sd = 1.0, log = false )
Rcpp::pnorm( q, mean = 0.0, sd = 1.0, lower = true, log = false )
Rcpp::qnorm( p, mean = 0.0, sd = 1.0, lower = true, log = false )
Rcpp::rnorm( n, mean = 0.0, sd = 1.0 )
R::dnorm( x, mean, sd, log )
R::pnorm( q, mean, sd, lower, log )
R::qnorm( p, mean, sd, lower, log )
R::rnorm( mean, sd )
```
### 22\.3\.3 Log\-normal distribution
These functions provide information about the log\-normal distribution whose logarithm has mean equal to `meanlog` and standard deviation equal to `sdlog`.
```
Rcpp::dlnorm( x, meanlog = 0.0, sdlog = 1.0, log = false )
Rcpp::plnorm( q, meanlog = 0.0, sdlog = 1.0, lower = true, log = false )
Rcpp::qlnorm( p, meanlog = 0.0, sdlog = 1.0, lower = true, log = false )
Rcpp::rlnorm( n, meanlog = 0.0, sdlog = 1.0 )
R::dlnorm( x, meanlog, sdlog, log )
R::plnorm( q, meanlog, sdlog, lower, log )
R::qlnorm( p, meanlog, sdlog, lower, log )
R::rlnorm( meanlog, sdlog )
```
### 22\.3\.4 Gamma distribution
These functions provide information about the Gamma distribution with parameters `shape` and `scale`.
```
Rcpp::dgamma( x, shape, scale = 1.0, log = false )
Rcpp::pgamma( q, shape, scale = 1.0, lower = true, log = false )
Rcpp::qgamma( p, shape, scale = 1.0, lower = true, log = false )
Rcpp::rgamma( n, shape, scale = 1.0 )
R::dgamma( x, shape, scale, log )
R::pgamma( x, shape, scale, lower, log )
R::qgamma( q, shape, scale, lower, log )
R::rgamma( shape, scale )
```
### 22\.3\.5 Beta distribution
These functions provide information about the Beta distribution with parameters `shape1` and `shape2`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dbeta( x, shape1, shape2, log = false )
Rcpp::pbeta( x, shape1, shape2, lower = true, log = false )
Rcpp::qbeta( q, shape1, shape2, lower = true, log = false )
Rcpp::rbeta( n, shape1, shape2)
R::dbeta( x, shape1, shape2, log )
R::pbeta( x, shape1, shape2, lower, log )
R::qbeta( q, shape1, shape2, lower, log )
R::rbeta( shape1, shape2 )
```
### 22\.3\.6 Noncentral beta distribution
These functions provide information about the Noncentral beta distribution with parameters `shape1` and `shape2`, noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dnbeta( x, shape1, shape2, ncp, log = false );
Rcpp::pnbeta( x, shape1, shape2, ncp, lower = true, log = false );
Rcpp::qnbeta( q, shape1, shape2, ncp, lower = true, log = false );
// Rcpp::rnbeta() does not exist
R::dnbeta( x, shape1, shape2, ncp, log )
R::pnbeta( x, shape1, shape2, ncp, lower, log )
R::qnbeta( q, shape1, shape2, ncp, lower, log )
R::rnbeta( shape1, shape2, ncp )
```
### 22\.3\.7 Chi\-squared distribution
These functions provide information about the Chi\-squared distribution with df degrees of freedom `df`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dchisq( x, df, log = false )
Rcpp::pchisq( x, df, lower = true, log = false )
Rcpp::qchisq( q, df, lower = true, log = false )
Rcpp::rchisq( n, df)
R::dchisq( x, df, log )
R::pchisq( x, df, lower, log )
R::qchisq( q, df, lower, log )
R::rchisq( df )
```
### 22\.3\.8 Noncentral chi\-squared distribution
These functions provide information about the Noncentral chi\-squared distribution with df degrees of freedom `df` and noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the Chi\-squared distribution function in R.
```
Rcpp::dnchisq( x, df, ncp, log = false )
Rcpp::pnchisq( x, df, ncp, lower = true, log = false )
Rcpp::qnchisq( q, df, ncp, lower = true, log = false )
Rcpp::rnchisq( n, df, ncp = 0.0 )
R::dnchisq( x, df, ncp, log )
R::pnchisq( x, df, ncp, lower, log )
R::qnchisq( q, df, ncp, lower, log )
R::rnchisq( df, ncp )
```
### 22\.3\.9 t\-distribution
These functions provide information about the t\-distribution with df degrees of freedom `df`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the Beta distribution function in R.
```
Rcpp::dt( x, df, log = false )
Rcpp::pt( x, df, lower = true, log = false )
Rcpp::qt( q, df, lower = true, log = false )
Rcpp::rt( n, df )
R::dt( x, df, log )
R::pt( x, df, lower, log )
R::qt( q, df, lower, log )
R::rt( df )
```
### 22\.3\.10 Noncentral t\-distribution
These functions provide information about the Noncentral t\-distribution with df degrees of freedom `df` and noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the t\-distribution function in R.
```
Rcpp::dnt( x, df, ncp, log = false )
Rcpp::pnt( x, df, ncp, lower = true, log = false )
Rcpp::qnt( q, df, ncp, lower = true, log = false )
// Rcpp::rnt() does not exist.
R::dnt( x, df, ncp, log )
R::pnt( x, df, ncp, lower, log )
R::qnt( q, df, ncp, lower, log )
// R::rnt() does not exist.
```
### 22\.3\.11 F\-distribution
These functions provide information about the F\-distribution with df degrees of freedom `df1` and `df2`. These functions are equivalent to setting 0 for the noncentrality parameter `ncp` in the F\-distribution function in R.
```
Rcpp::df( x, df1, df2, log = false )
Rcpp::pf( x, df1, df2, lower = true, log = false )
Rcpp::qf( q, df1, df2, lower = true, log = false )
Rcpp::rf( n, df1, df1 )
R::df( x, df1, df2, log )
R::pf( x, df1, df2, lower, log )
R::qf( q, df1, df2, lower, log )
R::rf( df1, df2 )
```
### 22\.3\.12 Noncentral F\-distribution
These functions provide information about the F\-distribution with df degrees of freedom `df1`, `df2` and noncentrality parameter `ncp`. These functions are equivalent to setting non 0 value for the noncentrality parameter `ncp` in the Noncentral F\-distribution function in R.
```
Rcpp::dnf( x, df1, df2, ncp, log = false )
Rcpp::pnf( x, df1, df2, ncp, lower = true, log = false )
Rcpp::qnf( q, df1, df2, ncp, lower = true, log = false )
// Rcpp::rnf() does not exist.
R::dnf( x, df1, df2, ncp, log )
R::pnf( x, df1, df2, ncp, lower, log )
R::qnf( q, df1, df2, ncp, lower, log )
// R::rnf() does not exist.
```
### 22\.3\.13 Cauchy distribution
These functions provide information about the Cauchy distribution with location parameter `location` and scale parameter `scale`.
```
Rcpp::dcauchy( x, location = 0.0, scale = 1.0, log = false )
Rcpp::pcauchy( x, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::qcauchy( q, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::rcauchy( n, location = 0.0, scale = 1.0)
R::dcauchy( x, location, scale, log )
R::pcauchy( x, location, scale, lower, log )
R::qcauchy( q, location, scale, lower, log )
R::rcauchy( location, scale )
```
### 22\.3\.14 Exponential distribution
These functions provide information about the Exponential distribution with rate `rate` (The mean of Exponential distribution equals to 1/rate).
```
Rcpp::dexp( x, rate = 1.0, log = false )
Rcpp::pexp( x, rate = 1.0, lower = true, log = false )
Rcpp::qexp( q, rate = 1.0, lower = true, log = false )
Rcpp::rexp( n, rate = 1.0)
// The R namespace version are parameterised in terms of the scale (=1/rate)
R::dexp( x, scale, log )
R::pexp( x, scale, lower, log )
R::qexp( q, scale, lower, log )
R::rexp( scale )
```
### 22\.3\.15 Logistic distribution
These functions provide information about the Logistic distribution with parameters `location` and `scale`.
```
Rcpp::dlogis( x, location = 0.0, scale = 1.0, log = false )
Rcpp::plogis( x, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::qlogis( q, location = 0.0, scale = 1.0, lower = true, log = false )
Rcpp::rlogis( n, location = 0.0, scale = 1.0 )
R::dlogis( x, location, scale, log )
R::plogis( x, location, scale, lower, log )
R::qlogis( q, location, scale, lower, log )
R::rlogis( location, scale )
```
### 22\.3\.16 Weibull distribution
These functions provide information about the Weibull distribution with parameters `shape` and `scale`.
```
Rcpp::dweibull( x, shape, scale = 1.0, log = false )
Rcpp::pweibull( x, shape, scale = 1.0, lower = true, log = false )
Rcpp::qweibull( q, shape, scale = 1.0, lower = true, log = false )
Rcpp::rweibull( n, shape, scale = 1.0 )
R::dweibull( x, shape, scale, log )
R::pweibull( x, shape, scale, lower, log )
R::qweibull( q, shape, scale, lower, log )
R::rweibull( shape, scale )
```
22\.4 Discrete probability distribution
---------------------------------------
### 22\.4\.1 Binomial distribution
These functions provide information about the Binomial distribution with number of trials `size` and success probability `prob`.
```
Rcpp::dbinom( x, size, prob, log = false )
Rcpp::pbinom( x, size, prob, lower = true, log = false )
Rcpp::qbinom( q, size, prob, lower = true, log = false )
Rcpp::rbinom( n, size, prob )
R::dbinom( x, size, prob, log )
R::pbinom( x, size, prob, lower, log )
R::qbinom( q, size, prob, lower, log )
R::rbinom( size, prob )
```
### 22\.4\.2 Negative binomial distribution (with success probability as parameter)
These functions provide information about the Negative binomial distribution with number of success `size` and success probability `prob`.
```
Rcpp::dnbinom( x, size, prob, log = false )
Rcpp::pnbinom( x, size, prob, lower = true, log = false )
Rcpp::qnbinom( q, size, prob, lower = true, log = false )
Rcpp::rnbinom( n, size, prob )
R::dnbinom( x, size, prob, log )
R::pnbinom( x, size, prob, lower, log )
R::qnbinom( q, size, prob, lower, log )
R::rnbinom( size, prob )
```
### 22\.4\.3 Negative binomial distribution (with mean as parameter)
These functions provide information about the Negative binomial distribution with number of success `size` and mean `mu`.
```
Rcpp::dnbinom_mu( x, size, mu, log = false )
Rcpp::pnbinom_mu( x, size, mu, lower = true, log = false )
Rcpp::qnbinom_mu( q, size, mu, lower = true, log = false )
Rcpp::rnbinom_mu( n, size, mu )
R::dnbinom_mu( x, size, mu, log )
R::pnbinom_mu( x, size, mu, lower, log )
R::qnbinom_mu( q, size, mu, lower, log )
R::rnbinom_mu( size, mu )
```
### 22\.4\.4 Poisson distribution
These functions provide information about the Poisson distribution with mean and variance are equal to `lambda`.
```
Rcpp::dpois( x, lambda, log = false )
Rcpp::ppois( x, lambda, lower = true, log = false )
Rcpp::qpois( q, lambda, lower = true, log = false )
Rcpp::rpois( n, lambda )
R::dpois( x, lambda, log )
R::ppois( x, lambda, lower, log )
R::qpois( q, lambda, lower, log )
R::rpois( lambda )
```
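For instance, a Poisson log-likelihood can be accumulated with the scalar `R::dpois()` and `log = 1`; the function below is a minimal sketch with a made-up name.
```
// [[Rcpp::export]]
double pois_loglik(IntegerVector x, double lambda) {
    double ll = 0.0;
    for (R_xlen_t i = 0; i < x.length(); ++i) {
        // log density of each observation, summed on the log scale
        ll += R::dpois(x[i], lambda, 1);
    }
    return ll;
}
```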
### 22\.4\.5 Geometric distribution
These functions provide information about the Geometric distribution with success probability `prob`.
```
Rcpp::dgeom( x, prob, log = false )
Rcpp::pgeom( x, prob, lower = true, log = false )
Rcpp::qgeom( q, prob, lower = true, log = false )
Rcpp::rgeom( n, prob )
R::dgeom( x, prob, log )
R::pgeom( x, prob, lower, log )
R::qgeom( q, prob, lower, log )
R::rgeom( prob )
```
### 22\.4\.6 Hypergeometric distribution
These functions provide information about the Hypergeometric distribution with `m` successes in the population, `n` failures in the population, and sample size `k`.
```
Rcpp::dhyper( x, m, n, k, log = false )
Rcpp::phyper( x, m, n, k, lower = true, log = false )
Rcpp::qhyper( q, m, n, k, lower = true, log = false )
Rcpp::rhyper(nn, m, n, k )
R::dhyper( x, m, n, k, log )
R::phyper( x, m, n, k, lower, log )
R::qhyper( q, m, n, k, lower, log )
R::rhyper( m, n, k )
```
### 22\.4\.7 Distribution of Wilcoxon rank\-sum test statistic
These functions provide information about the distribution of the test statistic when the Wilcoxon rank\-sum test (Mann–Whitney U test) is performed on two samples of sizes `m` and `n`, respectively.
```
// Rcpp::dwilcox() does not exist.
// Rcpp::pwilcox() does not exist.
// Rcpp::qwilcox() does not exist.
Rcpp::rwilcox( nn, m, n );
R::dwilcox( x, m, n, log )
R::pwilcox( x, m, n, lower, log )
R::qwilcox( q, m, n, lower, log )
R::rwilcox( m, n )
```
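Since the vectorised `Rcpp::` versions do not exist, one option is to apply the scalar `R::dwilcox()` element-wise, as in the sketch below (the function name is made up for illustration).
```
// [[Rcpp::export]]
NumericVector dwilcox_vec(NumericVector x, double m, double n) {
    NumericVector out(x.length());
    for (R_xlen_t i = 0; i < x.length(); ++i) {
        out[i] = R::dwilcox(x[i], m, n, 0); // log = 0
    }
    return out;
}
```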
### 22\.4\.8 Distribution of Wilcoxon signed\-rank test statistic
These functions provide information about the distribution of the test statistic when the Wilcoxon signed\-rank test is performed on a sample of size `n`.
```
// Rcpp::dsignrank() does not exist.
// Rcpp::psignrank() does not exist.
// Rcpp::qsignrank() does not exist.
Rcpp::rsignrank( nn, n )
R::dsignrank( x, n, log )
R::psignrank( x, n, lower, log )
R::qsignrank( q, n, lower, log )
R::rsignrank( n )
```
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/230_R_function.html |
Chapter 23 Using R functions
============================
In order to use R functions in Rcpp, you can use `Function` and `Environment`.
23\.1 Function
--------------
Using the `Function` class, you can call R functions from Rcpp. Arguments given to the R function are matched by position and by name.
Use `Named()` or `_[]` to pass a value to an argument by specifying the argument name. `Named()` can be used in two ways: `Named("argument_name", value)` or `Named("argument_name") = value`.
The code example below calls the R function `rnorm(n, mean, sd)` from a function defined in Rcpp. Note that when calling a package function through the `Function` class, the package must already be attached, i.e. its environment has to be added to the search path with the `library()` function in R beforehand.
```
// [[Rcpp::export]]
NumericVector my_fun(){
// calling rnorm()
Function f("rnorm");
// Next code is interpreted as rnorm(n=5, mean=10, sd=2)
return f(5, Named("sd")=2, _["mean"]=10);
}
```
Execution example
```
> my_fun()
[1] 8.014863 10.459980 7.741581 9.000762 11.465920
```
In the example above, the return type of the R function called from Rcpp is assumed to be `NumericVector`. However, as in the example below, the return type of an R function called from an Rcpp function is sometimes not known in advance. In such cases it is better to assign the return value to an `RObject` or to a `List` element.
The code below shows an example of defining a simplified version of the R function `lapply()` in Rcpp.
```
// [[Rcpp::export]]
List rcpp_lapply(List input, Function f) {
// Applies the Function f to each element of the List input and returns the result as List
// Number of elements in the List input
R_xlen_t n = input.length();
// Creating a List for output
List out(n);
// Applying f() to each element of "input" and store it to "out".
// The type of the return value of f() is unknown, but it can be assigned to the List element.
for(R_xlen_t i = 0; i < n; ++i) {
out[i] = f(input[i]);
}
return out;
}
```
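As a further sketch (the function name is hypothetical), the result of a `Function` call can also be stored in an `RObject` and inspected before deciding how to use it.
```
// [[Rcpp::export]]
RObject call_and_inspect(Function f, NumericVector x) {
    // The class of the result depends on f, so store it in an RObject
    RObject res = f(x);
    if (res.isNULL()) {
        Rcout << "f() returned NULL\n";
    }
    return res;
}
```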
23\.2 Environment
-----------------
By using `Environment` class, you can retrieve objects (variables and functions) from packages and other environments.
The code below shows an example of calling the `Matrix()` function in the `Matrix` package. When calling a package function in this way, it is not necessary to attach the package with the `library()` function.
```
// [[Rcpp::export]]
S4 rcpp_package_function(NumericMatrix m){
// Obtaining namespace of Matrix package
Environment pkg = Environment::namespace_env("Matrix");
// Picking up Matrix() function from Matrix package
Function f = pkg["Matrix"];
// Executing Matrix( m, sparse = TRUE )
return f( m, Named("sparse", true));
}
```
Execution result
```
> m <- matrix(c(1,0,0,2), nrow = 2, ncol = 2)
> rcpp_package_function(m)
2 x 2 sparse Matrix of class "dsCMatrix"
[1,] 1 .
[2,] . 2
```
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/240_na_nan_inf.html |
Chapter 24 NA NaN Inf NULL
==========================
24\.1 Notations of NA NaN Inf
-----------------------------
To express the value of `Inf` `-Inf` `NaN` in Rcpp, use the symbol `R_PosInf` `R_NegInf` `R_NaN`.
| R | Rcpp |
| --- | --- |
| `Inf` | `R_PosInf` |
| `-Inf` | `R_NegInf` |
| `NaN` | `R_NaN` |
On the other hand, for `NA`, a different `NA` symbol is defined for each `Vector` type.
| Vector | symbol of NA |
| --- | --- |
| `NumericVector` | `NA_REAL` |
| `IntegerVector` | `NA_INTEGER` |
| `LogicalVector` | `NA_LOGICAL` |
| `CharacterVector` | `NA_STRING` |
The following code example shows how to use these symbols to create `Vector` object.
```
NumericVector v1 = NumericVector::create( 1, NA_REAL, R_NaN, R_PosInf, R_NegInf);
IntegerVector v2 = IntegerVector::create( 1, NA_INTEGER);
CharacterVector v3 = CharacterVector::create( "A", NA_STRING);
LogicalVector v4 = LogicalVector::create( true, NA_LOGICAL);
```
24\.2 Evaluating NA NaN Inf
---------------------------
### 24\.2\.1 Evaluating all the elements of a vector at once
To evaluate all the `NA` `NaN` `Inf` `-Inf` elements in a vector at once, use the function `is_na()` `is_nan()` `is_infinite()`.
In the code example below, we create a vector containing `NA` `NaN` `Inf` `-Inf` and evaluate it. From this example we can see that `is_na()` evaluates both `NA` and `NaN` as `TRUE` (the same as R’s `is.na()`).
```
NumericVector v =
NumericVector::create( 1, NA_REAL, R_NaN, R_PosInf, R_NegInf);
LogicalVector l1 = is_na(v);
LogicalVector l2 = is_nan(v);
LogicalVector l3 = is_infinite(v);
Rcout << l1 << "\n"; // 0 1 1 0 0
Rcout << l2 << "\n"; // 0 0 1 0 0
Rcout << l3 << "\n"; // 0 0 0 1 1
```
You can remove `NA` `NaN` `Inf` from a vector by using these functions. You can also use `na_omit()` to remove `NA`.
The code example below shows how to remove `NA` from a vector using the `is_na()` and `na_omit()`.
```
// Creating a Vector object containing NA
NumericVector v =
NumericVector::create( 1, NA_REAL, 2, NA_REAL, 3);
// Removing NA from the vector
NumericVector v1 = v[!is_na(v)];
NumericVector v2 = na_omit(v);
```
### 24\.2\.2 Evaluating single element of a vector
If you want to evaluate `NA` `NaN` `Inf` `-Inf` on a single element of a vector, use the static member functions `Vector::is_na()`, `traits::is_nan<RTYPE>()`, `traits::is_infinite<RTYPE>()`. For `RTYPE`, specify the `SEXPTYPE` of the vector to be evaluated.
```
// [[Rcpp::export]]
void rcpp_is_na2() {
// Creating Vector object containing NA NaN Inf -Inf
NumericVector v =
NumericVector::create(1, NA_REAL, R_NaN, R_PosInf, R_NegInf);
// Evaluating the value for each element of the vector
int n = v.length();
for (int i = 0; i < n; ++i) {
if(NumericVector::is_na(v[i]))
Rprintf("v[%i] is NA.\n", i);
if(Rcpp::traits::is_nan<REALSXP>(v[i]))
Rprintf("v[%i] is NaN.\n", i);
if(Rcpp::traits::is_infinite<REALSXP>(v[i]))
Rprintf("v[%i] is Inf or -Inf.\n", i);
}
}
```
Here is the list of `SEXPTYPE` of the major Vector class.
| SEXPTYPE | Vector |
| --- | --- |
| `LGLSXP` | LogicalVector |
| `INTSXP` | IntegerVector |
| `REALSXP` | NumericVector |
| `CPLXSXP` | ComplexVector |
| `STRSXP` | CharacterVector (StringVector) |
24\.3 NULL
----------
You use `R_NilValue` to handle `NULL` in Rcpp. The code example below shows how to check for `NULL` in the elements of a `List` object and how to assign `NULL` to clear the value of an attribute.
```
// [[Rcpp::export]]
List rcpp_list()
{
// Create a List object with element names
// One of the two elements is NULL
List L = List::create(Named("x",NumericVector({1,2,3})),
Named("y",R_NilValue));
// Checking NULL
for(int i=0; i<L.length(); ++i){
if(L[i]==R_NilValue) {
Rprintf("L[%i] is NULL.\n\n", i+1);
}
}
// Delete the value of the attribute (element name) of the object
L.attr("names") = R_NilValue;
return(L);
}
```
Execution result
```
> rcpp_list()
L[2] is NULL.
[[1]]
[1] 1 2 3
[[2]]
NULL
```
24\.4 Points to note when handling NA with Rcpp
-----------------------------------------------
Internally, `NA_INTEGER` and `NA_LOGICAL` are equivalent to the minimum value of `int` (\-2147483648\). Functions and operators defined in Rcpp handle this minimum value appropriately as `NA` (that is, an operation on an `NA` element yields `NA`), but standard C\+\+ functions and operators treat it as an ordinary integer value. So if you add 1 to `NA_INTEGER`, the element is no longer the minimum value of `int` and is no longer treated as `NA`.
In addition, if you assign `NA_LOGICAL` to the `bool` type, it will always be evaluated as `true`. This is because `bool` evaluates every number other than 0 as `true`.
On the other hand, since `nan` and `inf` are defined for `double`, operations on `nan` and `inf` in standard C\+\+ give the same results as in R.
The table below summarizes how values are evaluated when assigning the value of `NA` `Inf` `-Inf` `NaN` to `Vector` or scalar type in C\+\+.
| | NA | NaN | Inf | \-Inf |
| --- | --- | --- | --- | --- |
| `NumericVector` | `NA_REAL` | `R_NaN` | `R_PosInf` | `R_NegInf` |
| `IntegerVector` | `NA_INTEGER` | `NA_INTEGER` | `NA_INTEGER` | `NA_INTEGER` |
| `LogicalVector` | `NA_LOGICAL` | `NA_LOGICAL` | `NA_LOGICAL` | `NA_LOGICAL` |
| `CharacterVector` | `NA_STRING` | “NaN” | “Inf” | “\-Inf” |
| `String` | `NA_STRING` | “NaN” | “Inf” | “\-Inf” |
| `double` | `nan` | `nan` | `inf` | `-inf` |
| `int` | \-2147483648 | \-2147483648 | \-2147483648 | \-2147483648 |
| `bool` | `true` | `true` | `true` | `true` |
The code example below shows the difference in results between the Rcpp operator and the standard C\+\+ operator when applied to `NA_INTEGER`. The Rcpp operator evaluates an operation on `NA` as `NA`, whereas the standard C\+\+ operator treats `NA_INTEGER` as just an integer value.
```
// [[Rcpp::export]]
List rcpp_na_sum(){
// Creating an integer vector containing NA
IntegerVector v1 = IntegerVector::create(1,NA_INTEGER,3);
// Applying the "+" operator between vector and scalar defined in Rcpp
IntegerVector res1 = v1 + 1;
// Applying the "+" operator between int and int defined in standard C++
IntegerVector res2(3);
for(int i=0; i<v1.length(); ++i){
res2[i] = v1[i] + 1;
}
// Outputting the result as a named list
return List::create(Named("Rcpp plus", res1),
Named("C++ plus", res2));
}
```
Execution result
```
> rcpp_na_sum()
$`Rcpp plus`
[1] 2 NA 4
$`C++ plus`
[1] 2 -2147483647 4
```
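The point about `bool` can be checked with a small sketch like the one below (the function name is made up): `NA_LOGICAL` is detected as `NA` inside a `LogicalVector`, but once assigned to a C++ `bool` it simply becomes `true`.
```
// [[Rcpp::export]]
void rcpp_na_bool() {
    LogicalVector v = LogicalVector::create(NA_LOGICAL);
    bool b = v[0]; // NA_LOGICAL is a non-zero int, so b becomes true
    Rcout << "LogicalVector::is_na(v[0]): " << LogicalVector::is_na(v[0]) << "\n"; // 1
    Rcout << "bool b: " << b << "\n"; // 1 (true)
}
```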
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/250_factor.html |
Chapter 25 factor
=================
A factor vector (`factor`) is actually an integer vector on which the attributes `levels` and `class` are defined.
The code below shows an example of converting an integer vector to a `factor` by setting values for these attributes.
```
// Creating "factor"
// [[Rcpp::export]]
RObject rcpp_factor(){
IntegerVector v = {1,2,3,1,2,3};
CharacterVector ch = {"A","B","C"};
v.attr("class") = "factor";
v.attr("levels") = ch;
return v;
}
```
In the execution result below, we can see that the integer vector returned to R is treated as a `factor`.
```
> rcpp_factor()
[1] A B C A B C
Levels: A B C
```
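Conversely, a `factor` received from R can be decoded by combining its integer codes with the `levels` attribute, as in the sketch below (the function name is made up and `NA` handling is omitted for brevity).
```
// [[Rcpp::export]]
CharacterVector rcpp_factor_to_char(IntegerVector f) {
    CharacterVector levels = f.attr("levels");
    CharacterVector out(f.length());
    for (R_xlen_t i = 0; i < f.length(); ++i) {
        out[i] = levels[f[i] - 1]; // factor codes are 1-based indices into levels
    }
    return out;
}
```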
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/260_error.html |
Chapter 26 Error handling
=========================
When the normal progress of the program is hindered, you can print an error message and stop execution with the `stop()` function. If you want to warn the user without stopping the program, use the `warning()` function. Both `stop()` and `warning()` can display messages using the same format specification as the `Rprintf()` function.
```
stop("Error: Unexpected condition occurred");
stop("Error: Column %i is not numeric.", i+1);
warning("Warning: Unexpected condition occurred");
warning("Warning: Column %i is not numeric.", i+1);
```
In the code example below, an error message is printed and execution is stopped if the value given to the function is not positive.
```
// [[Rcpp::export]]
double rcpp_log(double x) {
if (x <= 0.0) {
stop("'x' must be a positive value.");
}
return log(x);
}
```
Execution result
```
> rcpp_log(-1)
Error: 'x' must be a positive value.
```
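A corresponding sketch using `warning()` (the function name is made up) warns the user, keeps running, and returns `NA` instead:
```
// [[Rcpp::export]]
double rcpp_log_warn(double x) {
    if (x <= 0.0) {
        warning("'x' should be a positive value; returning NA.");
        return NA_REAL;
    }
    return log(x);
}
```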
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/270_cancellation.html |
Chapter 27 Cancel handling
==========================
**Accepting cancellation from user**
The `checkUserInterrupt()` function checks whether the user has pressed ‘Ctrl \+ C’ (or the interrupt button), and if so it stops execution of the function.
If you execute a calculation that takes a long time, it is better to call `checkUserInterrupt()` approximately once every few seconds.
```
for (int i=0; i<100000; ++i) {
// Checking interruption every 1000 iterations
if (i % 1000 == 0){
Rcpp::checkUserInterrupt();
}
// do something ...
}
```
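Put together as a complete exported function, a long-running loop might look like the sketch below (the function name and the work done inside the loop are only placeholders):
```
#include <cmath>
// [[Rcpp::export]]
double rcpp_long_computation(int n) {
    double total = 0.0;
    for (int i = 0; i < n; ++i) {
        // Checking interruption every 1000 iterations
        if (i % 1000 == 0) {
            Rcpp::checkUserInterrupt();
        }
        total += std::sqrt(static_cast<double>(i)); // placeholder work
    }
    return total;
}
```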
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/280_environment.html |
Chapter 28 Environment
======================
By using `Environment` class, you can keep the environment you want to access as a variable and retrieve variables and functions from the environment.
28\.1 Creating Environment object
---------------------------------
This section shows you how to create an object of the `Environment` class.
```
Environment env; // Global environment (note: `Environment env();` would declare a function, not an object)
Environment env = Environment::global_env(); // Global environment
Environment env("package:stats"); // Environment of package "stats"
Environment env(1); // The i-th environment in the object search path (i = 1 is the global environment)
```
To check the object search path, use `search()` function in R.
```
> search()
[1] ".GlobalEnv" "tools:RGUI" "package:stats"
[4] "package:graphics" "package:grDevices" "package:utils"
[7] "package:datasets" "package:methods" "Autoloads"
[10] "package:base"
```
28\.2 Accessing objects in an environment
---------------------------------------
You can use the `[]` operator or `get()` member function to access variables and functions in an environment through `Environment` class object. If you access variables or functions that do not exist in that environment, `R_NilValue` (`NULL`) will be returned.
```
// Retrieving the global environment
Environment env = Environment::global_env();
// Retrieving object "x" from the global environment
NumericVector x = env["x"];
// Changing the value of the variable x in the global environment
x[0] = 100;
```
28\.3 Creating new environment
------------------------------
A new empty environment can be created by using the function `new_env()` function.
### 28\.3\.1 new\_env(size \= 29\)
Returns a new environment. The argument `size` specifies the initial size of the hash table of the environment to be created.
### 28\.3\.2 new\_env(parent, size \= 29\)
Returns a new environment whose parent environment is `parent`. The argument `size` specifies the initial size of the hash table of the environment to be created.
28\.4 Member functions
----------------------
### 28\.4\.1 get(name)
Retrieves the object with its name specified by the argument `name`. If it can not be found, it returns `R_NilValue`.
### 28\.4\.2 ls(all)
Returns a list of the objects in this environment as a `CharacterVector`. If the argument `all` is `true`, all objects are returned; if `false`, objects whose names begin with `.` are excluded.
### 28\.4\.3 find(name)
Retrieves the object with the name specified by the argument `name` from this environment or its parent environment. If the object is not found, the `binding_not_found` exception is thrown.
### 28\.4\.4 exists(name)
Returns `true` if there is an object with the name specified by the argument `name` in this environment.
### 28\.4\.5 assign( name, x )
Assign the value `x` to the object with the name specified by the character string `name` in this environment. Returns `true` if it succeeds.
### 28\.4\.6 isLocked()
Returns `true` if this environment is locked.
### 28\.4\.7 remove(name)
Removes the object with the name specified by the string `name` from this environment. Returns `true` if it succeeds.
### 28\.4\.8 lock(bindings \= false)
Locks this environment. If `bindings` \= `true`, it also locks the bindings of this environment. (A “binding” is the link between the name of an object and the data of the object in R.)
### 28\.4\.9 lockBinding(name)
Lock the binding (i.e. variable) specified by the string `name` in this environment.
### 28\.4\.10 unlockBinding(name)
Unlock the binding (i.e. variable) specified by the string `name` in this environment.
### 28\.4\.11 bindingIsLocked(name)
Returns `true` if the binding (i.e. variable) specified by the string `name` in this environment is locked.
### 28\.4\.12 bindingIsActive(name)
Returns `true` if the binding (i.e. variable) specified by the string `name` in this environment is active.
### 28\.4\.13 is\_user\_database()
Returns true if this environment inherits “UserDefinedDatabase”.
### 28\.4\.14 parent()
Returns the parent environment of this environment.
### 28\.4\.15 new\_child(hashed)
Creates a new environment whose parent is this environment. If hashed \=\= `true`, the created environment uses a hash table.
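Below is a short sketch combining a few of the member functions listed above (`exists()`, `assign()`, `get()`, `ls()`); the function name and the variable `x` are only for illustration.
```
// [[Rcpp::export]]
CharacterVector rcpp_env_demo() {
    Environment env = Environment::global_env();
    // Create the variable x in the global environment if it does not exist yet
    if (!env.exists("x")) {
        env.assign("x", NumericVector::create(1, 2, 3));
    }
    // Retrieve x (equivalent to env["x"])
    NumericVector x = env.get("x");
    Rcout << "length of x: " << x.length() << "\n";
    // Return the names of all objects in the global environment
    return env.ls(true);
}
```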
28\.5 Static member functions
-----------------------------
### Environment::global\_env()
Returns the global environment.
### Environment::empty\_env()
Returns the empty environment.
### Environment::base\_env()
Returns the environment of the base package.
### Environment::base\_namespace()
Returns the namespace of the base package.
### 28\.5\.1 Environment::Rcpp\_namespace()
Returns the namespace of Rcpp package.
### 28\.5\.2 Environment::namespace\_env(package)
Returns the namespace of the package whose name is specified by the argument `package`. If you use `Environment::namespace_env()`, you can call package functions without preloading the package with the `library()` function in R. This is equivalent to calling a package function with the form `PACKAGE::FUNCTION()`. In addition, you can also access functions that are not exported from the package. This is equivalent to calling with the form `PACKAGE:::FUNCTION()` in R.
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/290_iterator.html |
Chapter 29 Iterator
===================
An iterator is a class used to access the elements of `Vector` `DataFrame` `List`. If you want to use the algorithms provided by standard C\+\+, you need to understand iterators, because many of these algorithms use iterators to specify the location or range of data to which the algorithm is applied.
Specific iterator type is defined for each data structure of Rcpp.
```
NumericVector::iterator
IntegerVector::iterator
LogicalVector::iterator
CharacterVector::iterator
DataFrame::iterator
List::iterator
```
The list below shows schematically how to access vector elements using an iterator.
* `i = v.begin()` : The iterator `i` points to the first element of `v`.
* `++i` : Updates `i` to the state pointing to the next element.
* `--i` : Updates `i` to the state pointing to the previous element.
* `i + 1` : Represents the iterator pointing to the next element (one element after `i`).
* `i - 1` : Represents the iterator pointing to the previous element (one element before `i`).
* `*i` : Represents the value of the element pointed by `i`.
* `v.end()` : Represents an iterator pointing to the end (one after the last element) of `v`.
* `*(v.begin()+k)` : Represents the value of the `k`\-th element of `v` (`v[k]`).
The following code example traverses all the elements of a `NumericVector` using an iterator to calculate the sum of the elements.
```
// [[Rcpp::export]]
double rcpp_sum(NumericVector x) {
double total = 0;
for(NumericVector::iterator i = x.begin(); i != x.end(); ++i) {
total += *i;
}
return total;
}
```
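Iterators also plug directly into standard algorithms. As a further sketch (the function name is made up), `std::max_element()` returns an iterator to the largest element and `std::distance()` converts it to a position:
```
#include <algorithm>
#include <iterator>
// [[Rcpp::export]]
int rcpp_which_max(NumericVector v) {
    NumericVector::iterator it = std::max_element(v.begin(), v.end());
    // Convert the iterator to a 1-based index, like R's which.max()
    return std::distance(v.begin(), it) + 1;
}
```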
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/300_STL.html |
Chapter 30 Standard C\+\+ data structures and algorithms
========================================================
30\.1 Standard C\+\+ data structure
-----------------------------------
Standard C\+\+ provides various data structures (containers), such as `std::vector` `std::list` `std::map` `std::set`. They differ in how efficiently elements can be accessed, added, and deleted, so you can implement your processing more efficiently by choosing an appropriate data structure.
For example, consider the situation where you want to add an element to a vector. Both Rcpp’s `Rcpp::Vector` and standard C\+\+’s `std::vector` provide a member function `push_back()` which adds an element to the end of the vector, but there is a big difference in efficiency between them. Each time `Rcpp::Vector` executes `push_back()`, the whole vector is copied to another place in memory, whereas `std::vector` can, in many cases, add an element to the end without copying the whole vector.
Standard C\+\+ classes and functions are defined in the `std::` namespace, so they are referred to with the `std::` prefix, for example `std::vector`. You can also omit `std::` by adding `using namespace std;` to your code.
Below is an example of using `std::vector` to get the row and column numbers of non\-zero matrix elements.
```
// [[Rcpp::export]]
DataFrame matrix_rows_cols(NumericMatrix m){
// Returns the column and row numbers of non-zero elements from the matrix.
// For simplicity, we assume that the matrix does not contain NA.
// Total number of rows and cols
int I = m.rows();
int J = m.cols();
// variables to store row and column numbers
// store the result in a std::vector
std::vector<int> rows, cols;
// The number of elements can be up to the number of elements in matrix m,
// so we allocate memory for that in advance.
rows.reserve(m.length());
cols.reserve(m.length());
// Accesses all elements of matrix M
// and stores the row and column numbers of elements whose values are not zero.
for(int i=0; i<I; ++i){
for(int j=0; j<J; ++j){
if(m(i,j)!=0.0){
rows.push_back(i+1);
cols.push_back(j+1);
}
}
}
// Returns the result as a data.frame.
return DataFrame::create(Named("rows", rows),
Named("cols", cols));
}
```
Below is an overview of some of the major standard C\+\+ data structures.
| Standard C\+\+ Data Structure | Outline |
| --- | --- |
| `std::vector` | Variable length array: each element is arranged continuously in memory. |
| `std::list` | Variable length array: each element is distributed in memory. |
| `std::map`, `std::unordered_map` | Associative array: Holds data in key\-value format. |
| `std::set`, `std::unordered_set` | Set: Keeps a set of unduplicated values. |
`std::vector` has faster access to elements than `std::list`. On the other hand, `std::list` is faster at adding elements.
`std::map` holds elements in the order sorted by their keys. On the other hand, `std::unordered_map` does not guarantee the order of its elements, but is faster to insert and access elements.
Similarly, `std::set` holds elements in the order sorted by element values. On the other hand, the order is not guaranteed with `std::unordered_set`, but it is faster to insert and access elements.
30\.2 Conversion between standard C\+\+ data structures and Rcpp data structures
--------------------------------------------------------------------------------
The `as<T>()` and `wrap()` functions are used to convert the data structures between Rcpp and standard C\+\+.
* `as<CPP>(RCPP)` : Converts the Rcpp data structure (`RCPP`) to the standard C\+\+ data structure (`CPP`)
* `wrap(CPP)` : Converts the standard C\+\+ data structure (`CPP`) to the Rcpp data structure (`RCPP`)
The following table shows the correspondence between the Rcpp and C\+\+ data structures that can be converted to each other.
(`+` indicates compatible, `-` indicates not compatible)
| Rcpp | C\+\+ | as | wrap |
| --- | --- | --- | --- |
| `Vector` | `vector`, `list`, `deque` | \+ | \+ |
| `List`, `DataFrame` | `vector<vector>`, `list<vector>` etc. | \+ | \+ |
| Named `Vector` | `map`, `unordered_map` | \- | \+ |
| `Vector` | `set`, `unordered_set` | \- | \+ |
The following example shows how to convert `Rcpp::Vector` to a standard C\+\+ sequence container (a container which can be treated as a series of values such as `std::vector`, `std::list`, `std::deque`, etc.).
```
NumericVector rcpp_vector = {1,2,3,4,5};
// Conversion from Rcpp::Vector to std::vector
std::vector<double> cpp_vector = as< std::vector<double> >(rcpp_vector);
// Conversion from std::vector to Rcpp::Vector
NumericVector v1 = wrap(cpp_vector);
```
The following code example shows how to convert a 2\-dimensional container, which is nested C\+\+ sequence containers, into a `DataFrame` or `List`.
```
using namespace std;
// A two-dimensional vector with all element vectors of equal length
// can be converted to a DataFrame
vector<vector<double>> cpp_vector_2d_01 = {{1,2},{3,4}};
DataFrame df = wrap(cpp_vector_2d_01);
// A two-dimensional vector with different length of element vectors
// can be converted to a list
vector<vector<double>> cpp_vector_2d_02 = {{1,2},{3,4,5}};
List li = wrap(cpp_vector_2d_02);
```
The following code example shows that standard C\+\+ `std::map<key, value>` and `std::unordered_map<key, value>` are converted to named `Rcpp::Vector` with `key` as the name of the element and `value` as the type of the element.
```
#include<map>
#include<unordered_map>
// [[Rcpp::export]]
List std_map(){
std::map<std::string, double> map_str_dbl;
map_str_dbl["E"] = 5;
map_str_dbl["A"] = 1;
map_str_dbl["C"] = 3;
map_str_dbl["D"] = 4;
map_str_dbl["B"] = 2;
std::unordered_map<std::string, double> umap_str_dbl;
umap_str_dbl["E"] = 5;
umap_str_dbl["A"] = 1;
umap_str_dbl["C"] = 3;
umap_str_dbl["D"] = 4;
umap_str_dbl["B"] = 2;
List li = List::create(Named("std::map", map_str_dbl),
Named("std::unordered_map", umap_str_dbl)
);
return(li);
}
```
Execution result
You can see that `std::map` is sorted by key value, whereas `std::unordered_map` is not guaranteed to be ordered.
```
> std_map()
$`std::map`
A B C D E
1 2 3 4 5
$`std::unordered_map`
D B C A E
4 2 3 1 5
```
30\.3 Use standard C\+\+ data structures as arguments and return values of Rcpp functions
-----------------------------------------------------------------------------------------
Standard C\+\+ data structures that can be converted by the `as()` and `wrap()` functions can also be used as arguments or return values of Rcpp functions. `as()` and `wrap()` are called implicitly when you use C\+\+ data structures as arguments or return values of an Rcpp function, so you do not need to write them explicitly in your Rcpp functions.
```
// [[Rcpp::plugins("cpp11")]]
// [[Rcpp::export]]
vector<double> times_two_std_vector(vector<double> v){ // as() is called implicitly
for(double &x : v){
x *= 2;
}
return v; // wrap() is called implicitly
}
```
30\.4 Standard C\+\+ Algorithms
-------------------------------
The standard C\+\+ `<algorithm>` and `<numeric>` header files provide various generic algorithms. As mentioned in [the chapter on iterators](290_iterator.html), many C\+\+ algorithms use iterators to specify the location and range of data to which the algorithm is applied.
The following example shows how to use the `std::count()` function of the `<algorithm>` to count the number of elements equal to the specified value.
```
#include <algorithm>
// [[Rcpp::export]]
int rcpp_count(){
// create a string vector
CharacterVector v =
CharacterVector::create("A", "B", "A", "C", NA_STRING);
// count the number of elements whose value is "A" from the string vector v
return std::count(v.begin(), v.end(), "A"); // 2
}
```
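Another small example (added here only as an illustration) uses `std::accumulate()` from the `<numeric>` header to sum a vector through its iterators:
```
#include <numeric>
// [[Rcpp::export]]
double rcpp_accumulate(NumericVector v) {
    // Sums the range [v.begin(), v.end()) starting from 0.0
    return std::accumulate(v.begin(), v.end(), 0.0);
}
```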
For many other algorithms included in standard C\+\+, please refer to other C\+\+ resources.
30\.1 Standard C\+\+ data structure
-----------------------------------
Standard C\+\+ provides various data structures (containers), such as `std::vector` `std::list` `std::map` `std::set`. They differ in their efficiency in accessing, adding, and deleting elements. Thus, you are able to implement the process more efficiently by choosing appropriate data structures.
For example, consider the situation that you want to add an element to a vector, both Rcpp’s `Rcpp::Vector` and standard C\+\+’s `std::vector` provides a member function `push_back()` which adds an element to the end of a vector. However, there is a big difference in efficiency between these. This is because each time `Rcpp::Vector` executes a member function `push_back()` the whole value of the vector is copied to another place in memory, whereas `std::vector` allows you to add an element to the end without copying the whole value in many cases.
Standard C\+\+ classes and functions are defined in the `std::` namespace, so they are specified with `std::` for example, `std::vector`. You can also omit writing `std::` by adding `using namespace std;` in your code.
Below is an example of using `std::vector` to get the row and column numbers of non\-zero matrix elements.
```
// [[Rcpp::export]]
DataFrame matix_rows_cols(NumericMatrix m){
// Returns the column and row numbers of non-zero elements from the matrix.
// For simplicity, we assume that the matrix does not contain NA.
// Tolal number of rows and cols
int I = m.rows();
int J = m.cols();
// variables to store row and column numbers
// store the result in a std::vector
std::vector<int> rows, cols;
// The number of elements can be up to the number of elements in matrix m,
// so we allocate memory for that in advance.
rows.reserve(m.length());
cols.reserve(m.length());
// Accesses all elements of matrix M
// and stores the row and column numbers of elements whose values are not zero.
for(int i=0; i<I; ++i){
for(int j=0; j<J; ++j){
if(m(i,j)!=0.0){
rows.push_back(i+1);
cols.push_back(j+1);
}
}
}
// Returns the result as a data.frame.
return DataFrame::create(Named("rows", rows),
Named("cols", cols));
}
```
Below is an overview of some of the major standard C\+\+ data structures.
| Standard C\+\+ Data Structure | Outline |
| --- | --- |
| `std::vector` | Variable length array: each element is arranged continuously in memory. |
| `std::list` | Variable length array: each element is distributed in memory. |
| `std::map`, `std::unordered_map` | Associative array: Holds data in key\-value format. |
| `std::set`, `std::unordered_set` | Set: Keeps a set of unduplicated values. |
`std::vector` has faster access to elements than `std::list`. On the other hand, `std::list` is faster at adding elements.
`std::map` holds elements in the order sorted by their keys. On the other hand, `std::unordered_map` does not guarantee the order of its elements, but is faster to insert and access elements.
Similarly, `std::set` holds elements in the order sorted by element values. On the other hand, the order is not guaranteed with `std::unordered_set`, but it is faster to insert and access elements.
30\.2 Conversion between standard C\+\+ data structures and Rcpp data structures
--------------------------------------------------------------------------------
The `as<T>()` and `wrap()` functions are used to convert the data structures between Rcpp and standard C\+\+.
* `as<CPP>(RCPP)` : Converts the Rcpp data structure (`RCPP`) to the standard C\+\+ data structure (`CPP`)
* `wrap(CPP)` : Converts the standard C\+\+ data structure (`CPP`) to the Rcpp data structure (`RCPP`)
The following table shows the correspondence between Rcpp and C\+\+ the data structures that can be converted each other.
(`+` indicates compatible, `-` indicates not compatible)
| Rcpp | C\+\+ | as | wrap |
| --- | --- | --- | --- |
| `Vector` | `vector`, `list`, `deque` | \+ | \+ |
| `List`, `DataFrame` | `vector<vector>`, `list<vector>` etc. | \+ | \+ |
| Named `Vector` | `map`, `unordered_map` | \- | \+ |
| `Vector` | `set`, `unordered_set` | \- | \+ |
The following example shows how to convert `Rcpp::Vector` to a standard C\+\+ sequence container (a container which can be treated as a series of values such as `std::vector`, `std::list`, `std::deque`, etc.).
```
NumericVector rcpp_vector = {1,2,3,4,5};
// Conversion from Rcpp::Vector to std::vector
std::vector<double> cpp_vector = as< std::vector<double> >(rcpp_vector);
// Conversion from std::vector to Rcpp::Vector
NumericVector v1 = wrap(cpp_vector);
```
The following code example shows how to convert a 2\-dimensional container, which is nested C\+\+ sequence containers, into a `DataFrame` or `List`.
```
using namespace std;
// A two-dimensional vector with all element vectors of equal length
// can be converted to a DataFrame
vector<vector<double>> cpp_vector_2d_01 = {{1,2},{3,4}};
DataFrame df = wrap(cpp_vector_2d_01);
// A two-dimensional vector with different length of element vectors
// can be converted to a list
vector<vector<double>> cpp_vector_2d_02 = {{1,2},{3,4,5}};
List li = wrap(cpp_vector_2d_02);
```
The following code example shows that the standard C\+\+ `std::map<key, value>` and `std::unordered_map<key, value>` are converted to a named `Rcpp::Vector`, with `key` becoming the names of the elements and `value` becoming the values of the elements.
```
#include<map>
#include<unordered_map>
// [[Rcpp::export]]
List std_map(){
    std::map<std::string, double> map_str_dbl;
    map_str_dbl["E"] = 5;
    map_str_dbl["A"] = 1;
    map_str_dbl["C"] = 3;
    map_str_dbl["D"] = 4;
    map_str_dbl["B"] = 2;

    std::unordered_map<std::string, double> umap_str_dbl;
    umap_str_dbl["E"] = 5;
    umap_str_dbl["A"] = 1;
    umap_str_dbl["C"] = 3;
    umap_str_dbl["D"] = 4;
    umap_str_dbl["B"] = 2;

    List li = List::create(Named("std::map", map_str_dbl),
                           Named("std::unordered_map", umap_str_dbl));
    return(li);
}
```
Execution result:
You can see that `std::map` is sorted by key value, whereas `std::unordered_map` is not guaranteed to be ordered.
```
> std_map()
$`std::map`
A B C D E
1 2 3 4 5
$`std::unordered_map`
D B C A E
4 2 3 1 5
```
30\.3 Use standard C\+\+ data structures as arguments and return values of Rcpp functions
-----------------------------------------------------------------------------------------
Standard C\+\+ data structures that can be converted by the `as()` and `wrap()` functions can also be used as arguments or return values of Rcpp functions. `as()` and `wrap()` are called implicitly when you use C\+\+ data structures as an Rcpp function’s arguments or return values, so you do not need to write `as()` and `wrap()` explicitly in your Rcpp functions.
```
// [[Rcpp::plugins("cpp11")]]
// [[Rcpp::export]]
std::vector<double> times_two_std_vector(std::vector<double> v){ // as() is called implicitly
    for(double &x : v){
        x *= 2;
    }
    return v; // wrap() is called implicitly
}
```
30\.4 Standard C\+\+ Algorithms
-------------------------------
The standard C\+\+ `<algorithm>` and `<numeric>` header files provide various generic algorithms. As mentioned in [the chapter on iterators](290_iterator.html), many of the C\+\+ algorithms use iterators to specify the location and extent to which the algorithm is applied.
The following example shows how to use the `std::count()` function from `<algorithm>` to count the number of elements equal to a specified value.
```
#include <algorithm>
// [[Rcpp::export]]
int rcpp_count(){
    // create a string vector
    CharacterVector v =
        CharacterVector::create("A", "B", "A", "C", NA_STRING);
    // count the number of elements whose value is "A" from the string vector v
    return std::count(v.begin(), v.end(), "A"); // 2
}
```
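The `<numeric>` header is used in the same way. The following sketch (the function name `rcpp_sum_accumulate` is invented for this illustration) uses `std::accumulate()` to sum the elements of a `NumericVector`:
```
#include <numeric>
// [[Rcpp::export]]
double rcpp_sum_accumulate(NumericVector v){
    // std::accumulate() adds up the values in the range [begin, end),
    // starting from the initial value 0.0.
    // NA values are not treated specially here, so an NA propagates to the result.
    return std::accumulate(v.begin(), v.end(), 0.0);
}
```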
For many other algorithms included in standard C\+\+, please refer to other C\+\+ resources.
| R Programming |
teuder.github.io | https://teuder.github.io/rcpp4everyone_en/310_Rmath.html |
Chapter 31 R Math Library
=========================
R Math Library is a library that provides the mathematical and statistical functions defined by R. These functions become available when you include `Rmath.h`. Since Rmath itself is an independent library, it can also be used from other programs. The functions defined in `Rmath.h` can be called from Rcpp as well; they are defined in the `R::` namespace.
The functions in `Rmath.h` are written in C and are not vectorized.
```
double R::norm_rand(void)
double R::unif_rand(void)
double R::exp_rand(void)
void R::pnorm_both(double x, double *cum, double *ccum, int lt, int lg)
void R::rmultinom(int n, double* prob, int k, int* rn)
double R::dsignrank(double x, double n, int lg)
double R::psignrank(double x, double n, int lt, int lg)
double R::qsignrank(double x, double n, int lt, int lg)
double R::rsignrank(double n)
double R::dwilcox(double x, double m, double n, int lg)
double R::pwilcox(double q, double m, double n, int lt, int lg)
double R::qwilcox(double x, double m, double n, int lt, int lg)
double R::rwilcox(double m, double n)
double R::ptukey(double q, double rr, double cc, double df, int lt, int lg)
double R::qtukey(double p, double rr, double cc, double df, int lt, int lg)
double R::log1pmx(double x)
double R::log1pexp(double x)
double R::lgamma1p(double a)
double R::logspace_add(double lx, double ly)
double R::logspace_sub(double lx, double ly)
double R::gammafn(double x)
double R::lgammafn(double x)
double R::lgammafn_sign(double x, int *sgn)
void R::dpsifn(double x, int n, int kode, int m, double *ans, int *nz, int *ierr)
double R::psigamma(double x, double deriv)
double R::digamma(double x)
double R::trigamma(double x)
double R::tetragamma(double x)
double R::pentagamma(double x)
double R::beta(double a, double b)
double R::lbeta(double a, double b)
double R::choose(double n, double k)
double R::lchoose(double n, double k)
double R::bessel_i(double x, double al, double ex)
double R::bessel_j(double x, double al)
double R::bessel_k(double x, double al, double ex)
double R::bessel_y(double x, double al)
double R::bessel_i_ex(double x, double al, double ex, double *bi)
double R::bessel_j_ex(double x, double al, double *bj)
double R::bessel_k_ex(double x, double al, double ex, double *bk)
double R::bessel_y_ex(double x, double al, double *by)
double R::hypot(double a, double b)
double R::pythag(double a, double b)
double R::expm1(double x); /* = exp(x)-1 {care for small x} */
double R::log1p(double x); /* = log(1+x) {care for small x} */
int R::imax2(int x, int y)
int R::imin2(int x, int y)
double R::fmax2(double x, double y)
double R::fmin2(double x, double y)
double R::sign(double x)
double R::fprec(double x, double dg)
double R::fround(double x, double dg)
double R::fsign(double x, double y)
double R::ftrunc(double x)
```
The probability distribution functions are also described on the page [Probability distributions](21_dpqr_functions.md).
```
double R::dnorm(double x, double mu, double sigma, int lg)
double R::pnorm(double x, double mu, double sigma, int lt, int lg)
double R::qnorm(double p, double mu, double sigma, int lt, int lg)
double R::rnorm(double mu, double sigma)
double R::dunif(double x, double a, double b, int lg)
double R::punif(double x, double a, double b, int lt, int lg)
double R::qunif(double p, double a, double b, int lt, int lg)
double R::runif(double a, double b)
double R::dgamma(double x, double shp, double scl, int lg)
double R::pgamma(double x, double alp, double scl, int lt, int lg)
double R::qgamma(double p, double alp, double scl, int lt, int lg)
double R::rgamma(double a, double scl)
double R::dbeta(double x, double a, double b, int lg)
double R::pbeta(double x, double p, double q, int lt, int lg)
double R::qbeta(double a, double p, double q, int lt, int lg)
double R::rbeta(double a, double b)
double R::dlnorm(double x, double ml, double sl, int lg)
double R::plnorm(double x, double ml, double sl, int lt, int lg)
double R::qlnorm(double p, double ml, double sl, int lt, int lg)
double R::rlnorm(double ml, double sl)
double R::dchisq(double x, double df, int lg)
double R::pchisq(double x, double df, int lt, int lg)
double R::qchisq(double p, double df, int lt, int lg)
double R::rchisq(double df)
double R::dnchisq(double x, double df, double ncp, int lg)
double R::pnchisq(double x, double df, double ncp, int lt, int lg)
double R::qnchisq(double p, double df, double ncp, int lt, int lg)
double R::rnchisq(double df, double lb)
double R::df(double x, double df1, double df2, int lg)
double R::pf(double x, double df1, double df2, int lt, int lg)
double R::qf(double p, double df1, double df2, int lt, int lg)
double R::rf(double df1, double df2)
double R::dt(double x, double n, int lg)
double R::pt(double x, double n, int lt, int lg)
double R::qt(double p, double n, int lt, int lg)
double R::rt(double n)
double R::dbinom(double x, double n, double p, int lg)
double R::pbinom(double x, double n, double p, int lt, int lg)
double R::qbinom(double p, double n, double m, int lt, int lg)
double R::rbinom(double n, double p)
double R::dcauchy(double x, double lc, double sl, int lg)
double R::pcauchy(double x, double lc, double sl, int lt, int lg)
double R::qcauchy(double p, double lc, double sl, int lt, int lg)
double R::rcauchy(double lc, double sl)
double R::dexp(double x, double sl, int lg)
double R::pexp(double x, double sl, int lt, int lg)
double R::qexp(double p, double sl, int lt, int lg)
double R::rexp(double sl)
double R::dgeom(double x, double p, int lg)
double R::pgeom(double x, double p, int lt, int lg)
double R::qgeom(double p, double pb, int lt, int lg)
double R::rgeom(double p)
double R::dhyper(double x, double r, double b, double n, int lg)
double R::phyper(double x, double r, double b, double n, int lt, int lg)
double R::qhyper(double p, double r, double b, double n, int lt, int lg)
double R::rhyper(double r, double b, double n)
double R::dnbinom(double x, double sz, double pb, int lg)
double R::pnbinom(double x, double sz, double pb, int lt, int lg)
double R::qnbinom(double p, double sz, double pb, int lt, int lg)
double R::rnbinom(double sz, double pb)
double R::dnbinom_mu(double x, double sz, double mu, int lg)
double R::pnbinom_mu(double x, double sz, double mu, int lt, int lg)
double R::qnbinom_mu(double x, double sz, double mu, int lt, int lg)
double R::rnbinom_mu(double sz, double mu)
double R::dpois(double x, double lb, int lg)
double R::ppois(double x, double lb, int lt, int lg)
double R::qpois(double p, double lb, int lt, int lg)
double R::rpois(double mu)
double R::dlogis(double x, double lc, double sl, int lg)
double R::plogis(double x, double lc, double sl, int lt, int lg)
double R::qlogis(double p, double lc, double sl, int lt, int lg)
double R::rlogis(double lc, double sl)
double R::dnbeta(double x, double a, double b, double ncp, int lg)
double R::pnbeta(double x, double a, double b, double ncp, int lt, int lg)
double R::qnbeta(double p, double a, double b, double ncp, int lt, int lg)
double R::rnbeta(double a, double b, double np)
double R::dnf(double x, double df1, double df2, double ncp, int lg)
double R::pnf(double x, double df1, double df2, double ncp, int lt, int lg)
double R::qnf(double p, double df1, double df2, double ncp, int lt, int lg)
double R::dnt(double x, double df, double ncp, int lg)
double R::pnt(double x, double df, double ncp, int lt, int lg)
double R::qnt(double p, double df, double ncp, int lt, int lg)
double R::dweibull(double x, double sh, double sl, int lg)
double R::pweibull(double x, double sh, double sl, int lt, int lg)
double R::qweibull(double p, double sh, double sl, int lt, int lg)
double R::rweibull(double sh, double sl)
```
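As a usage sketch: because the Rmath functions take scalar arguments, you loop over the elements of an `Rcpp::Vector` yourself when you want to apply them to a whole vector. The function name `rcpp_dnorm_loop` below is invented for this illustration.
```
// [[Rcpp::export]]
NumericVector rcpp_dnorm_loop(NumericVector x){
    // R::dnorm() takes scalar arguments, so apply it element by element.
    NumericVector out(x.length());
    for(int i = 0; i < x.length(); ++i){
        // mean = 0.0, sd = 1.0, lg = 0 (do not return the log density)
        out[i] = R::dnorm(x[i], 0.0, 1.0, 0);
    }
    return out;
}
```
Calling `rcpp_dnorm_loop(c(-1, 0, 1))` from R should give the same values as `dnorm(c(-1, 0, 1))`.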
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/how-to-use-this-book.html |
How to use this book
====================
The recipes in this book are organized according to the model of data science presented in [*R for Data Science*](http://r4ds.had.co.nz/introduction.html). When you do data science, you find yourself repeating the same sequence of steps: import, tidy, transform, visualize, model, and communicate.
I’ve grouped the recipes by task, with the exception of Transform, which is split into chapters based on the type of data or structure to transform. To drill down to a recipe, click on one of the task names in the sidebar. If you are new to R, begin with [Program](program.html), which will show you how to install the tidyverse.
To save space, each recipe assumes that you have already run `library("tidyverse")`. If additional `library()` calls are required, they will appear in the recipe.
0\.1 How to get help
--------------------
R comes with a built\-in reference manual, which is fondly (but sometimes inaccurately) called R’s help pages. You can use R’s help pages to glean details that I do not cover here.
To access the help page for an R function, type a `?` at the command line of your R console, followed by the function name, and then hit enter to run the result, e.g.
```
?mutate
```
If a function comes in an R package, you will need to load the package with `library()` before you can access the function’s help page. Alternatively, you can access the help page by typing a `?` followed by the package name, followed by two colons, followed by the function name, e.g.
```
?dplyr::mutate
```
0\.2 What is the tidyverse?
---------------------------
Each of the recipes in this book relies on R’s [tidyverse](https://www.tidyverse.org/), which is a collection of R packages designed for data science. Tidyverse packages share a common design philosophy, so when you learn how to use one tidyverse package, you learn a lot about how to use the others.
Tidyverse packages are also:
* optimised to run fast, relying on C\+\+ under the hood
* maintained by a paid staff of talented developers
* unusually well documented, see <tidyverse.org> and [*R for Data Science*](http://r4ds.had.co.nz/) as examples.
Install the complete set of tidyverse packages with:
```
install.packages("tidyverse")
```
Each tidyverse package is a collection of functions, documentation, and *ideas*. You do not need to know which ideas are in the tidyverse to use tidyverse functions, just as you do not need to know which ingredients are in a cake mix to make a cake. However, understanding the tidyverse will help you see the best practices embedded in each recipe. This will make it easier for you to adapt the recipes to your own work.
### 0\.2\.1 Tidy data
Each package in the tidyverse is designed to use and return **tidy data** whenever appropriate. Tidy data is tabular data organized so that:
1. Each column contains a single variable
2. Each row contains a single observation
In practice, tidy data in R appears as data frames or tibbles. A **tibble** is an enhanced version of a data frame that is easier to view at the command line. R treats tibbles like data frames in almost every other respect, because tibbles are a subclass of data frame.
Tidy tibbles act as a common data structure that tidyverse functions use to talk to each other. Tidy data has other advantages as well. Tidy data aligns with R’s native data structure, the data frame; and tidy data is easy to use with R’s fast, built in vectorized operations. You can think of tidy data as the data format optimized for R.
### 0\.2\.2 Tidy tools
Each package in the tidyverse also provides **tidy tools**. Tidy tools are R functions that:
1. Accept and return the same type of data structure (as input and output)
2. Focus on one task per function
3. Can be combined with other functions to perform multi\-step operations (using the pipe operator, `%>%`)
Tidy tools are easy to understand and easy to use. They encourage you to organize your work into a sequence of steps that can be considered and checked one at a time. If tidy data is the data format optimized for R, tidy tools are the function format optimized for *you*.
If you’d like a deeper understanding of the tidyverse, I encourage you to read [*R for Data Science*](https://r4ds.had.co.nz/), or to work through the [free primers on RStudio Cloud](https://rstudio.cloud/learn/primers).
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/program.html |
1 Program
=========
---
This chapter includes the following recipes. To manipulate vectors with [purrr](http://purrr.tidyverse.org), see \[Transform Lists and Vectors].
1. [Install a tidyverse package](program.html#install)
2. [Install all of the tidyverse packages](program.html#install-all)
3. [Load a tidyverse package](program.html#load)
4. [Load the core set of tidyverse packages](program.html#load-the-core-set-of-tidyverse-packages)
5. [Update a tidyverse package](program.html#update)
6. [Update all of the tidyverse packages](program.html#update-all)
7. [List all of the tidyverse packages](program.html#list-all-of-the-tidyverse-packages)
8. [Create an object](program.html#create-an-object)
9. [Create a vector](program.html#create-a-vector)
10. [See an object](program.html#see-an-object)
11. [Remove an object](program.html#remove-an-object)
12. [Call a function](program.html#call-a-function)
13. [See a function’s arguments](program.html#see-a-functions-arguments)
14. [Open a function’s help page](program.html#open-a-functions-help-page)
15. [Combine functions into a pipe](program.html#combine-functions-into-a-pipe)
16. [Pipe a result to a specific argument](program.html#pipe-a-result-to-a-specific-argument)
---
What you should know before you begin
-------------------------------------
The [tidyverse](http://www.tidyverse.org) is a collection of R packages that are designed to work well together. There are about 25 packages in the tidyverse. An R package is a bundle of functions, documentation, and data sets. R has over 13,000 packages. These are not installed with R, but are archived [online](https://cran.r-project.org/web/packages/index.html) for when you need them.
To use an R package, you must:
1. Install the package on your local machine with `install.packages()`. You only need to do this once per machine.
2. Load the package into your R session with `library()`. You need to do this each time you start a new R session (if you wish to use the package in that session).
You cannot use the contents of a package until you load the package in your current R session. You should update your packages from time to time to receive the latest improvements from package authors.
Tidyverse functions are designed to be used with the `%>%` operator. `%>%` links R functions together to create a “pipe” of functions that are run in sequence: `%>%` passes the output of one function to the input of the next. `%>%` comes with the dplyr package, which imports it from the magrittr package.
1\.1 Install a tidyverse package
--------------------------------
You’d like to install a package that is in the tidyverse.
#### Solution
```
install.packages("dplyr")
```
#### Discussion
Tidyverse packages can be installed in the normal way with `install.packages()`. See `?install.packages` for installation details.
By default, `install.packages()` will download packages from [https://cran.r\-project.org](https://cran.r-project.org), or one of its mirrors—so be sure you are connected to the internet when you run it.
1\.2 Install all of the tidyverse packages
------------------------------------------
You’d like to install *all* of the packages in the tidyverse with a single command.
#### Solution
```
install.packages("tidyverse")
```
#### Discussion
The `tidyverse` package provides a shortcut for downloading all of the packages in the tidyverse. `tidyverse` purposefully lists every package in the tidyverse as one of its dependencies. This causes R to install all of the packages in the tidyverse when R installs `tidyverse`.
`install.packages("tidyverse")` will install the following packages:
```
## [1] "broom" "cli" "crayon" "dbplyr" "dplyr"
## [6] "forcats" "ggplot2" "haven" "hms" "httr"
## [11] "jsonlite" "lubridate" "magrittr" "modelr" "pillar"
## [16] "purrr" "readr" "readxl" "reprex" "rlang"
## [21] "rstudioapi" "rvest" "stringr" "tibble" "tidyr"
## [26] "xml2" "tidyverse"
```
1\.3 Load a tidyverse package
-----------------------------
You want to load a package that is in the tidyverse, so that you can use its contents. You’ve already [installed](program.html#install) the package on your computer.
#### Solution
```
library("dplyr")
```
#### Discussion
You can load individual tidyverse packages with `library()`. The package will stay loaded until you end your R session or run `detach()` on the package. If you begin a new R session, you will need to reload the package in the new session with `library()`.
`library()` cannot load packages that have not been [installed](program.html#install) on your machine.
You must place quotation marks around a package name when you use `install.packages()`, but the same is not true for `library()`. The commands below will both load the dplyr package if it is installed on your computer.
```
library(dplyr)
library("dplyr")
```
1\.4 Load the core set of tidyverse packages
--------------------------------------------
You would like to load the most used packages in the tidyverse with a single command. You’ve already [installed](program.html#install-all) these packages on your computer.
#### Solution
```
library("tidyverse")
```
#### Discussion
When you load the `tidyverse` package, R will also load the following packages:
* `ggplot2`
* `dplyr`
* `tidyr`
* `readr`
* `purrr`
* `tibble`
* `stringr`
* `forcats`
These eight packages are considered the “core” of the tidyverse because:
1. They are the most used tidyverse packages.
2. They are often used together as a set (when you use one of the packages, you tend to also use the others).
You can still load each of these packages individually with `library()`.
Notice that `library("tidyverse")` does not load every package installed by `install.packages("tidyverse")`. You must use `library()` to individually load the “non\-core” tidyverse packages.
1\.5 Update a tidyverse package
-------------------------------
You want to check that you have the latest version of a package that is in the tidyverse.
#### Solution
```
update.packages("dplyr")
```
#### Discussion
`update.packages()` compares the version number of your local copy of a package to the version number of the newest version available on CRAN. If your local copy is older than the newest version, `update.packages()` will download and install the newest version from CRAN. Otherwise, `update.packages()` will do nothing.
Be sure that you are connected to the internet when you run `update.packages()`.
1\.6 Update all of the tidyverse packages
-----------------------------------------
You want to check that you have the latest version of *every* package that is in the tidyverse.
#### Solution
```
tidyverse_update()
```
#### Discussion
`tidyverse_update()` checks whether or not each of your tidyverse packages is [up\-to\-date](program.html#update). If every package is up\-to\-date, `tidyverse_update()` will return the message: `All tidyverse packages up-to-date`. Otherwise, `tidyverse_update()` will return a piece of code that you can copy and run to selectively update only those packages that are out\-of\-date.
1\.7 List all of the tidyverse packages
---------------------------------------
You want to generate a vector that contains the names of every package in the tidyverse.
#### Solution
```
tidyverse_packages()
```
```
## [1] "broom" "cli" "crayon" "dbplyr" "dplyr"
## [6] "forcats" "ggplot2" "haven" "hms" "httr"
## [11] "jsonlite" "lubridate" "magrittr" "modelr" "pillar"
## [16] "purrr" "readr" "readxl" "reprex" "rlang"
## [21] "rstudioapi" "rvest" "stringr" "tibble" "tidyr"
## [26] "xml2" "tidyverse"
```
#### Discussion
`tidyverse_packages()` returns a character vector that contains the names of every package that was in the tidyverse when you installed the `tidyverse` package. These are the packages that were installed onto your machine along with the `tidyverse` package. They are also the packages that [`tidyverse_update()`](program.html#update-all) will check.
[Update](program.html#update) the `tidyverse` package before running `tidyverse_packages()` to receive the most current list.
1\.8 Create an object
---------------------
You want to create an object that stores content to use later.
#### Solution
```
x <- 1
```
#### Discussion
The assignment operator is made by a less than sign, `<`, followed by a minus sign, `-`. The result looks like an arrow, `<-`. R will assign the contents on the right hand side of the operator to the name on the left hand side. Afterwards, you can use the name in your code to refer to the contents, e.g. `log(x)`.
Names cannot begin with a number, nor a special character like `^`, `!`, `$`, `@`, `+`, `-`, `/`, or `*`.
If you assign to a name that is already in use, the new content will mask or overwrite the previous object.
1\.9 Create a vector
--------------------
You want to create a one\-dimensional, ordered set of values.
#### Solution
```
c(3, 1, 7, 5)
```
```
## [1] 3 1 7 5
```
#### Discussion
The concatenate function, `c()`, combines its arguments into a vector that can be passed to a function or assigned to an object. To name each value, provide names to the arguments of `c()`, e.g.
```
c(mu = 3, chi = 1, psi = 7, pi = 5)
```
```
## mu chi psi pi
## 3 1 7 5
```
1\.10 See an object
-------------------
You want to see the contents of an object.
#### Solution
```
x
```
```
## [1] 1
```
#### Discussion
R displays the contents of an object when you call the bare object name—not surrounded with quotation marks, not followed by parentheses. This works for functions too, in which case R displays the code elements assigned to the function name, e.g. `lm`.
1\.11 Remove an object
----------------------
You want to remove an object that you have created.
#### Solution
```
rm(x)
```
#### Discussion
The remove function, `rm()`, removes an object from R’s memory. `rm()` will not remove an object that is loaded from an R package.
1\.12 Call a function
---------------------
You want to call a function, i.e. you want R to execute a function.
#### Solution
```
round(3.141593, digits = 2)
```
```
## [1] 3.14
```
#### Discussion
To run a function, type its name followed by an opening and a closing parenthesis. If the function requires arguments (i.e. inputs) to do its job, place the arguments between the parentheses.
It is a best practice to name every argument after the first. This is sometimes relaxed to every argument after the second argument, if the second argument is also obvious and required.
1\.13 See a function’s arguments
--------------------------------
You want to see which arguments a function uses/recognizes.
#### Solution
```
args(round)
```
```
## function (x, digits = 0)
## NULL
```
#### Discussion
`args()` displays the argument names of a function. Ignore the `function`, `(`, `)`, and `NULL` in the output. Each argument name will be listed between the parentheses.
If an argument has a default value, the value will appear next to its name, as in `digits = 0`. Arguments that have default values are optional: the function will use the default value when you do not supply a different value for that argument. Arguments without default values must be supplied when you run the function to avoid an error.
1\.14 Open a function’s help page
---------------------------------
You want to open the help page for a function.
#### Solution
```
?round
```
#### Discussion
To open a function’s help page, run a `?` followed by the bare function name—no parentheses or quotes. A function’s help page is the technical documentation for the function and its arguments. Often the most useful section of a help page is the last, which is an examples section of code that uses the function.
A function’s help page is loaded with the function. It will only be available if the package that contains the function is loaded.
1\.15 Combine functions into a pipe
-----------------------------------
You want to chain multiple functions together to be run in sequence, with each function operating on the preceding function’s output.
#### Solution
```
starwars %>%
  group_by(species) %>%
  summarise(avg_height = mean(height, na.rm = TRUE)) %>%
  arrange(avg_height)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 38 x 2
## species avg_height
## <chr> <dbl>
## 1 Yoda's species 66
## 2 Aleena 79
## 3 Ewok 88
## 4 Vulptereen 94
## 5 Dug 112
## 6 Xexto 122
## 7 Droid 131.
## 8 Toydarian 137
## 9 Sullustan 160
## 10 Toong 163
## # … with 28 more rows
```
#### Discussion
The `%>%` operator (pronounced “pipe operator”) evaluates the code on its left hand side (LHS) and then passes the result to the code on its right hand side (RHS), which should be a function call. By default `%>%` will pass the result of the LHS to the first unnamed argument of the function on the RHS.
So `starwars %>% group_by(species)` is the equivalent of `group_by(starwars, species)`, and the above solution is the equivalent of the nested code:
```
arrange(
  summarise(
    group_by(starwars, species),
    avg_height = mean(height, na.rm = TRUE)
  ),
  avg_height
)
```
or the equivalent of:
```
x1 <- starwars
x2 <- group_by(x1, species)
x3 <- summarise(x2, avg_height = mean(height, na.rm = TRUE))
arrange(x3, avg_height)
```
The chunk of functions connected by `%>%` is called a “pipe.” To read a pipe as a sequence of steps, mentally pronounce `%>%` as “then.”
The `%>%` operator is loaded with the dplyr package, which imports it from the magrittr package. Tidyverse functions facilitate using `%>%` by
1. accepting a data frame or tibble as their first argument
2. returning a data frame or tibble as their result
`%>%` is easy to type in the RStudio IDE with the keyboard shortcuts
* **Command \+ Shift \+ M** (Mac OS)
* **Control \+ Shift \+ M** (Windows)
1\.16 Pipe a result to a specific argument
------------------------------------------
You want to use `%>%` to pass the result of the left hand side to an argument that is not the first argument of the function on the right hand side.
#### Solution
```
starwars %>%
  lm(mass ~ height, data = .)
```
```
##
## Call:
## lm(formula = mass ~ height, data = .)
##
## Coefficients:
## (Intercept) height
## -13.8103 0.6386
```
#### Discussion
By default `%>%` passes the result of the left hand side to the first unnamed argument of the function on the right hand side. To override this default, use `.` as a placeholder within the function call on the right hand side. `%>%` will evaluate `.` as the result of the left hand side, instead of passing the result to the first unnamed argument.
The solution code is the equivalent of
```
lm(mass ~ height, data = starwars)
```
1\.13 See a function’s arguments
--------------------------------
You want to see which arguments a function uses/recognizes.
#### Solution
```
args(round)
```
```
## function (x, digits = 0)
## NULL
```
#### Discussion
`args()` displays the argument names of a function. Ignore the `function`, `(`, `)`, and `NULL` in the output. Each argument name will be listed between the parentheses.
If an argument has a default value, the value will appear next to its name, as in `digits = 0`. Arguments that have default values are optional: the function will use the default value when you do not supply a different value for that argument. Arguments without default values must be supplied when you run the function to avoid an error.
#### Solution
```
args(round)
```
```
## function (x, digits = 0)
## NULL
```
#### Discussion
`args()` displays the argument names of a function. Ignore the `function`, `(`, `)`, and `NULL` in the output. Each argument name will be listed between the parentheses.
If an argument has a default value, the value will appear next to its name, as in `digits = 0`. Arguments that have default values are optional: the function will use the default value when you do not supply a different value for that argument. Arguments without default values must be supplied when you run the function to avoid an error.
1\.14 Open a function’s help page
---------------------------------
You want to open the help page for a function.
#### Solution
```
?round
```
#### Discussion
To open a function’s help page, run a `?` followed by the bare function name—no parentheses or quotes. A function’s help page is the technical documentation for the function and its arguments. Often the most useful section of a help page is the last, which is an examples section of code that uses the function.
A function’s help page is loaded with the function. It will only be available if the package that contains the function is loaded.
#### Solution
```
?round
```
#### Discussion
To open a function’s help page, run a `?` followed by the bare function name—no parentheses or quotes. A function’s help page is the technical documentation for the function and its arguments. Often the most useful section of a help page is the last, which is an examples section of code that uses the function.
A function’s help page is loaded with the function. It will only be available if the package that contains the function is loaded.
1\.15 Combine functions into a pipe
-----------------------------------
You want to chain multiple functions together to be run in sequence, with each function operating on the preceding function’s output.
#### Solution
```
starwars %>%
group_by(species) %>%
summarise(avg_height = mean(height, na.rm = TRUE)) %>%
arrange(avg_height)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 38 x 2
## species avg_height
## <chr> <dbl>
## 1 Yoda's species 66
## 2 Aleena 79
## 3 Ewok 88
## 4 Vulptereen 94
## 5 Dug 112
## 6 Xexto 122
## 7 Droid 131.
## 8 Toydarian 137
## 9 Sullustan 160
## 10 Toong 163
## # … with 28 more rows
```
#### Discussion
The `%>%` operator (pronounced “pipe operator”) evaluates the code on its left hand side (LHS) and then passes the result to the the code on its right hand side (RHS), which should be a function call. By default `%>%` will pass the result of the LHS to the first unnamed argument of the function on the RHS.
So `starwars %>% group_by(species)` is the equivalent of `group_by(starwars, species)`, and the above solution is the equivalent of the nested code:
```
arrange(
summarise(
group_by(starwars, species),
avg_height = mean(height, na.rm = TRUE)
),
avg_height
)
```
or the equivalent of:
```
x1 <- starwars
x2 <- group_by(x1, species)
x3 <- summarise(x3, avg_height = mean(height, na.rm = TRUE))
arrange(x3, avg_height)
```
The chunk of functions connected by `%>%` is called a “pipe.” To read a pipe as a sequence of steps, mentally pronounce `%>%` as “then.”
The `%>%` operator is loaded with the dplyr package, which imports it from the magrittr package. Tidyverse functions facilitate using `%>%` by
1. accepting a data frame or tibble as their first argument
2. returning a data frame or tibble as their result
`%>%` is easy to type in the RStudio IDE with the keyboard shortcuts
* **Command \+ Shift \+ M** (Mac OS)
* **Control \+ Shift \+ M** (Windows)
#### Solution
```
starwars %>%
group_by(species) %>%
summarise(avg_height = mean(height, na.rm = TRUE)) %>%
arrange(avg_height)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 38 x 2
## species avg_height
## <chr> <dbl>
## 1 Yoda's species 66
## 2 Aleena 79
## 3 Ewok 88
## 4 Vulptereen 94
## 5 Dug 112
## 6 Xexto 122
## 7 Droid 131.
## 8 Toydarian 137
## 9 Sullustan 160
## 10 Toong 163
## # … with 28 more rows
```
#### Discussion
The `%>%` operator (pronounced “pipe operator”) evaluates the code on its left hand side (LHS) and then passes the result to the the code on its right hand side (RHS), which should be a function call. By default `%>%` will pass the result of the LHS to the first unnamed argument of the function on the RHS.
So `starwars %>% group_by(species)` is the equivalent of `group_by(starwars, species)`, and the above solution is the equivalent of the nested code:
```
arrange(
summarise(
group_by(starwars, species),
avg_height = mean(height, na.rm = TRUE)
),
avg_height
)
```
or the equivalent of:
```
x1 <- starwars
x2 <- group_by(x1, species)
x3 <- summarise(x3, avg_height = mean(height, na.rm = TRUE))
arrange(x3, avg_height)
```
The chunk of functions connected by `%>%` is called a “pipe.” To read a pipe as a sequence of steps, mentally pronounce `%>%` as “then.”
The `%>%` operator is loaded with the dplyr package, which imports it from the magrittr package. Tidyverse functions facilitate using `%>%` by
1. accepting a data frame or tibble as their first argument
2. returning a data frame or tibble as their result
`%>%` is easy to type in the RStudio IDE with the keyboard shortcuts
* **Command \+ Shift \+ M** (Mac OS)
* **Control \+ Shift \+ M** (Windows)
1\.16 Pipe a result to a specific argument
------------------------------------------
You want to use `%>%` to pass the result of the left hand side to an argument that is not the first argument of the function on the right hand side.
#### Solution
```
starwars %>%
lm(mass ~ height, data = .)
```
```
##
## Call:
## lm(formula = mass ~ height, data = .)
##
## Coefficients:
## (Intercept) height
## -13.8103 0.6386
```
#### Discussion
By default `%>%` passes the result of the left hand side to the the first unnamed argument of the function on the right hand side. To override this default, use `.` as a placeholder within the function call on the right hand side. `%>%` will evaluate `.` as the result of the left hand side, instead of passing the result to the first unnamed argument.
The solution code is the equivalent of
```
lm(mass ~ height, data = starwars)
```
#### Solution
```
starwars %>%
lm(mass ~ height, data = .)
```
```
##
## Call:
## lm(formula = mass ~ height, data = .)
##
## Coefficients:
## (Intercept) height
## -13.8103 0.6386
```
#### Discussion
By default `%>%` passes the result of the left hand side to the the first unnamed argument of the function on the right hand side. To override this default, use `.` as a placeholder within the function call on the right hand side. `%>%` will evaluate `.` as the result of the left hand side, instead of passing the result to the first unnamed argument.
The solution code is the equivalent of
```
lm(mass ~ height, data = starwars)
```
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/program.html |
1 Program
=========
---
This chapter includes the following recipes. To manipulate vectors with [purrr](http://purrr.tidyverse.org), see \[Transform Lists and Vectors].
1. [Install a tidyverse package](program.html#install)
2. [Install all of the tidyverse packages](program.html#install-all)
3. [Load a tidyverse package](program.html#load)
4. [Load the core set of tidyverse packages](program.html#load-the-core-set-of-tidyverse-packages)
5. [Update a tidyverse package](program.html#update)
6. [Update all of the tidyverse packages](program.html#update-all)
7. [List all of the tidyverse packages](program.html#list-all-of-the-tidyverse-packages)
8. [Create an object](program.html#create-an-object)
9. [Create a vector](program.html#create-a-vector)
10. [See an object](program.html#see-an-object)
11. [Remove an object](program.html#remove-an-object)
12. [Call a function](program.html#call-a-function)
13. [See a function’s arguments](program.html#see-a-functions-arguments)
14. [Open a function’s help page](program.html#open-a-functions-help-page)
15. [Combine functions into a pipe](program.html#combine-functions-into-a-pipe)
16. [Pipe a result to a specific argument](program.html#pipe-a-result-to-a-specific-argument)
---
What you should know before you begin
-------------------------------------
The [tidyverse](http://www.tidyverse.org) is a collection of R packages that are designed to work well together. There are about 25 packages in the tidyverse. An R package is a bundle of functions, documentation, and data sets. R has over 13,000 packages. These are not installed with R, but are archived [online](https://cran.r-project.org/web/packages/index.html) for when you need them.
To use an R package, you must:
1. Install the package on your local machine with `install.packages()`. You only need to do this once per machine.
2. Load the package into your R session with `library()`. You need to do this each time you start a new R session (if you wish to use the package in that session).
You cannot use the contents of a package until you load the package in your current R session. You should update your packages from time to time to receive the latest improvements from package authors.
Tidyverse functions are designed to be used with the `%>%` operator. `%>%` links R functions together to create a “pipe” of functions that are run in sequence: `%>%` passes the output of one function to the input of the next. `%>%` comes with the dplyr package, which imports it from the magrittr package.
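As a quick added illustration of this two-step workflow (not tied to any one recipe below), the sketch installs a package once and then attaches it at the start of a session:
```
# One-time installation (per machine)
install.packages("dplyr")

# At the start of each R session that uses the package
library(dplyr)
```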
1\.1 Install a tidyverse package
--------------------------------
You’d like to install a package that is in the tidyverse.
#### Solution
```
install.packages("dplyr")
```
#### Discussion
Tidyverse packages can be installed in the normal way with `install.packages()`. See `?install.packages` for installation details.
By default, `install.packages()` will download packages from [https://cran.r\-project.org](https://cran.r-project.org), or one of its mirrors—so be sure you are connected to the internet when you run it.
1\.2 Install all of the tidyverse packages
------------------------------------------
You’d like to install *all* of the packages in the tidyverse with a single command.
#### Solution
```
install.packages("tidyverse")
```
#### Discussion
The `tidyverse` package provides a shortcut for downloading all of the packages in the tidyverse. `tidyverse` purposefully lists every package in the tidyverse as one of its dependencies. This causes R to install all of the packages in the tidyverse when R installs `tidyverse`.
`install.packages("tidyverse")` will install the following packages:
```
## [1] "broom" "cli" "crayon" "dbplyr" "dplyr"
## [6] "forcats" "ggplot2" "haven" "hms" "httr"
## [11] "jsonlite" "lubridate" "magrittr" "modelr" "pillar"
## [16] "purrr" "readr" "readxl" "reprex" "rlang"
## [21] "rstudioapi" "rvest" "stringr" "tibble" "tidyr"
## [26] "xml2" "tidyverse"
```
1\.3 Load a tidyverse package
-----------------------------
You want to load a package that is in the tidyverse, so that you can use its contents. You’ve already [installed](program.html#install) the package on your computer.
#### Solution
```
library("dplyr")
```
#### Discussion
You can load individual tidyverse packages with `library()`. The package will stay loaded until you end your R session or run `detach()` on the package. If you begin a new R session, you will need to reload the package in the new session with `library()`.
`library()` cannot load packages that have not been [installed](program.html#install) on your machine.
You must place quotation marks around a package name when you use `install.packages()`, but the same is not true for `library()`. The commands below will both load the dplyr package if it is installed on your computer.
```
library(dplyr)
library("dplyr")
```
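If you need to unload a package before the session ends, `detach()` (mentioned above) does it. A minimal added sketch:
```
library(dplyr)
# ... use dplyr functions ...
detach("package:dplyr", unload = TRUE)  # unload dplyr from the current session
```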
1\.4 Load the core set of tidyverse packages
--------------------------------------------
You would like to load the most used packages in the tidyverse with a single command. You’ve already [installed](program.html#install-all) these packages on your computer.
#### Solution
```
library("tidyverse")
```
#### Discussion
When you load the `tidyverse` package, R will also load the following packages:
* `ggplot2`
* `dplyr`
* `tidyr`
* `readr`
* `purrr`
* `tibble`
* `stringr`
* `forcats`
These eight packages are considered the “core” of the tidyverse because:
1. They are the most used tidyverse packages.
2. They are often used together as a set (when you use one of the packages, you tend to also use the others).
You can still load each of these packages individually with `library()`.
Notice that `library("tidyverse")` does not load every package installed by `install.packages("tidyverse")`. You must use `library()` to individually load the “non\-core” tidyverse packages.
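For example, readxl is installed by `install.packages("tidyverse")` but is not attached by `library("tidyverse")`, so it needs its own call. A brief added sketch:
```
library(tidyverse)  # attaches the eight core packages
library(readxl)     # non-core packages still need their own library() call
```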
1\.5 Update a tidyverse package
-------------------------------
You want to check that you have the latest version of a package that is in the tidyverse.
#### Solution
```
update.packages("dplyr")
```
#### Discussion
`update.packages()` compares the version number of your local copy of a package to the version number of the newest version available on CRAN. If your local copy is older than the newest version, `update.packages()` will download and install the newest version from CRAN. Otherwise, `update.packages()` will do nothing.
Be sure that you are connected to the internet when you run `update.packages()`.
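If you only want to see which version of a package you currently have, without updating anything, `packageVersion()` (from the utils package, attached by default) reports it; this is an added aside, not part of the recipe:
```
packageVersion("dplyr")  # version number of your locally installed copy
```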
1\.6 Update all of the tidyverse packages
-----------------------------------------
You want to check that you have the latest version of *every* package that is in the tidyverse.
#### Solution
```
tidyverse_update()
```
#### Discussion
`tidyverse_update()` checks whether or not each of your tidyverse packages is [up\-to\-date](program.html#update). If every package is up\-to\-date, `tidyverse_update()` will return the message: `All tidyverse packages up-to-date`. Otherwise, `tidyverse_update()` will return a piece of code that you can copy and run to selectively update only those packages that are out\-of\-date.
1\.7 List all of the tidyverse packages
---------------------------------------
You want to generate a vector that contains the names of every package in the tidyverse.
#### Solution
```
tidyverse_packages()
```
```
## [1] "broom" "cli" "crayon" "dbplyr" "dplyr"
## [6] "forcats" "ggplot2" "haven" "hms" "httr"
## [11] "jsonlite" "lubridate" "magrittr" "modelr" "pillar"
## [16] "purrr" "readr" "readxl" "reprex" "rlang"
## [21] "rstudioapi" "rvest" "stringr" "tibble" "tidyr"
## [26] "xml2" "tidyverse"
```
#### Discussion
`tidyverse_packages()` returns a character vector that contains the names of every package that was in the tidyverse when you installed the `tidyverse` package. These are the packages that were installed onto your machine along with the `tidyverse` package. They are also the packages that [`tidyverse_update()`](program.html#update-all) will check.
[Update](program.html#update) the `tidyverse` package before running `tidyverse_packages()` to receive the most current list.
1\.8 Create an object
---------------------
You want to create an object that stores content to use later.
#### Solution
```
x <- 1
```
#### Discussion
The assignment operator is made by a less than sign, `<`, followed by a minus sign `-`. The result looks like an arrow, `<-`. R will assign the contents on the right hand side of the operator to the name on the left hand side. Afterwards, you can use the name in your code to refer to the contents, e.g. `log(x)`.
Names cannot begin with a number, nor a special character like `^`, `!`, `$`, `@`, `+`, `-`, `/`, or `*`.
If you assign to a name that is already in use, the new content will mask or overwrite the previous object.
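A small added sketch of these points: assign, use the name, then overwrite it (the values are arbitrary examples):
```
x <- 1
log(x)   # returns 0; the name stands in for its contents
x <- 10  # assigning again overwrites the previous contents of x
x        # now returns 10
```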
1\.9 Create a vector
--------------------
You want to create a one\-dimensional, ordered set of values.
#### Solution
```
c(3, 1, 7, 5)
```
```
## [1] 3 1 7 5
```
#### Discussion
The concatenate function, `c()`, combines its arguments into a vector that can be passed to a function or assigned to an object. To name each value, provide names to the arguments of `c()`, e.g.
```
c(mu = 3, chi = 1, psi = 7, pi = 5)
```
```
## mu chi psi pi
## 3 1 7 5
```
1\.10 See an object
-------------------
You want to see the contents of an object.
#### Solution
```
x
```
```
## [1] 1
```
#### Discussion
R displays the contents of an object when you call the bare object name—not surrounded with quotation marks, not followed by parentheses. This works for functions too, in which case R displays the code elements assigned to the function name, e.g. `lm`.
1\.11 Remove an object
----------------------
You want to remove an object that you have created.
#### Solution
```
rm(x)
```
#### Discussion
The remove function, `rm()`, removes an object from R’s memory. `rm()` will not remove an object that is loaded from an R package.
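`rm()` also accepts several names at once, and the common idiom `rm(list = ls())` clears every object you have created in the global environment. An added sketch, not part of the recipe:
```
x <- 1; y <- 2
rm(x, y)         # remove several objects at once
rm(list = ls())  # remove everything you have created in the global environment
```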
1\.12 Call a function
---------------------
You want to call a function, i.e. you want R to execute a function.
#### Solution
```
round(3.141593, digits = 2)
```
```
## [1] 3.14
```
#### Discussion
To run a function, type its name followed by a pair of parentheses. If the function requires arguments (i.e. inputs) to do its job, place the arguments between the parentheses.
It is a best practice to name every argument after the first. This is sometimes relaxed to naming every argument after the second, if the second argument is also obvious and required.
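For instance, both calls below return the same value, but the named version documents itself; the comparison is an added illustration, not part of the recipe:
```
round(3.141593, 2)           # positional: works, but the reader must recall what 2 means
round(3.141593, digits = 2)  # named: self-documenting
```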
1\.13 See a function’s arguments
--------------------------------
You want to see which arguments a function uses/recognizes.
#### Solution
```
args(round)
```
```
## function (x, digits = 0)
## NULL
```
#### Discussion
`args()` displays the argument names of a function. Ignore the `function`, `(`, `)`, and `NULL` in the output. Each argument name will be listed between the parentheses.
If an argument has a default value, the value will appear next to its name, as in `digits = 0`. Arguments that have default values are optional: the function will use the default value when you do not supply a different value for that argument. Arguments without default values must be supplied when you run the function to avoid an error.
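To see the distinction in practice, compare a call that relies on the default with one that omits the required argument (an added illustration):
```
round(3.141593)  # digits falls back to its default, 0, so this returns 3
# round()        # would fail because the required argument x is not supplied
```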
1\.14 Open a function’s help page
---------------------------------
You want to open the help page for a function.
#### Solution
```
?round
```
#### Discussion
To open a function’s help page, run a `?` followed by the bare function name—no parentheses or quotes. A function’s help page is the technical documentation for the function and its arguments. Often the most useful section of a help page is the last, which is an examples section of code that uses the function.
A function’s help page is loaded with the function. It will only be available if the package that contains the function is loaded.
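`help()` is the function form of `?`, and `??` searches the help pages of every installed package when you only know a keyword. A short added sketch:
```
?round         # open the help page for round
help("round")  # equivalent function form
??rounding     # search all installed help pages for the word "rounding"
```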
1\.15 Combine functions into a pipe
-----------------------------------
You want to chain multiple functions together to be run in sequence, with each function operating on the preceding function’s output.
#### Solution
```
starwars %>%
group_by(species) %>%
summarise(avg_height = mean(height, na.rm = TRUE)) %>%
arrange(avg_height)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 38 x 2
## species avg_height
## <chr> <dbl>
## 1 Yoda's species 66
## 2 Aleena 79
## 3 Ewok 88
## 4 Vulptereen 94
## 5 Dug 112
## 6 Xexto 122
## 7 Droid 131.
## 8 Toydarian 137
## 9 Sullustan 160
## 10 Toong 163
## # … with 28 more rows
```
#### Discussion
The `%>%` operator (pronounced “pipe operator”) evaluates the code on its left hand side (LHS) and then passes the result to the code on its right hand side (RHS), which should be a function call. By default `%>%` will pass the result of the LHS to the first unnamed argument of the function on the RHS.
So `starwars %>% group_by(species)` is the equivalent of `group_by(starwars, species)`, and the above solution is the equivalent of the nested code:
```
arrange(
summarise(
group_by(starwars, species),
avg_height = mean(height, na.rm = TRUE)
),
avg_height
)
```
or the equivalent of:
```
x1 <- starwars
x2 <- group_by(x1, species)
x3 <- summarise(x2, avg_height = mean(height, na.rm = TRUE))
arrange(x3, avg_height)
```
The chunk of functions connected by `%>%` is called a “pipe.” To read a pipe as a sequence of steps, mentally pronounce `%>%` as “then.”
The `%>%` operator is loaded with the dplyr package, which imports it from the magrittr package. Tidyverse functions facilitate using `%>%` by
1. accepting a data frame or tibble as their first argument
2. returning a data frame or tibble as their result
`%>%` is easy to type in the RStudio IDE with the keyboard shortcuts
* **Command \+ Shift \+ M** (Mac OS)
* **Control \+ Shift \+ M** (Windows)
1\.16 Pipe a result to a specific argument
------------------------------------------
You want to use `%>%` to pass the result of the left hand side to an argument that is not the first argument of the function on the right hand side.
#### Solution
```
starwars %>%
lm(mass ~ height, data = .)
```
```
##
## Call:
## lm(formula = mass ~ height, data = .)
##
## Coefficients:
## (Intercept) height
## -13.8103 0.6386
```
#### Discussion
By default `%>%` passes the result of the left hand side to the first unnamed argument of the function on the right hand side. To override this default, use `.` as a placeholder within the function call on the right hand side. `%>%` will evaluate `.` as the result of the left hand side, instead of passing the result to the first unnamed argument.
The solution code is the equivalent of
```
lm(mass ~ height, data = starwars)
```
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/import.html |
2 Import
========
---
This chapter includes the following recipes:
1. [Import data quickly with a GUI](import.html#import-data-quickly-with-a-gui)
2. [Read a comma\-separated values (csv) file](import.html#read-a-comma-separated-values-csv-file)
3. [Read a semi\-colon delimited file](import.html#read-a-semi-colon-delimited-file)
4. [Read a tab delimited file](import.html#read-a-tab-delimited-file)
5. [Read a text file with an unusual delimiter](import.html#read-a-text-file-with-an-unusual-delimiter)
6. [Read a fixed\-width file](import.html#read-a-fixed-width-file)
7. [Read a file with no header](import.html#read-a-file-no-header)
8. [Skip lines at the start of a file when reading a file](import.html#skip-lines-at-the-start-of-a-file-when-reading-a-file)
9. [Skip lines at the end of a file when reading a file](import.html#skip-lines-at-the-end-of-a-file-when-reading-a-file)
10. [Replace missing values as you read a file](import.html#replace-missing-values-as-you-read-a-file)
11. [Read a compressed RDS file](import.html#read-a-compressed-rds-file)
12. [Read an Excel spreadsheet](import.html#read-an-excel-spreadsheet)
13. [Read a specific sheet from an Excel spreadsheet](import.html#read-a-specific-sheet-from-an-excel-spreadsheet)
14. [Read a field of cells from an excel spreadsheet](import.html#read-a-field-of-cells-from-an-excel-spreadsheet)
15. [Write to a comma\-separated values (csv) file](import.html#write-to-a-comma-separate-values-csv-file)
16. [Write to a semi\-colon delimited file](import.html#write-to-a-semi-colon-delimited-file)
17. [Write to a tab\-delimited file](import.html#write-to-a-tab-delimited-file)
18. [Write to a text file with arbitrary delimiters](import.html#write-to-a-text-file-with-arbitrary-delimiters)
19. [Write to a compressed RDS file](import.html#write-to-a-compressed-rds-file)
---
What you should know before you begin
-------------------------------------
Before you can manipulate data with R, you need to import the data into R’s memory, or build a connection to the data that R can use to access the data remotely. For example, you can build a connection to data that lives in a database.
How you import your data will depend on the format of the data. The most common way to store small data sets is as a plain text file. Data may also be stored in a proprietary format associated with a specific piece of software, such as SAS, SPSS, or Microsoft Excel. Data used on the internet is often stored as a JSON or XML file. Large data sets may be stored in a database or a distributed storage system.
When you import data into R, R stores the data in your computer’s RAM while you manipulate it. This creates a size limitation: truly big data sets should be stored outside of R in a database or a distributed storage system. You can then create a connection to the system that R can use to access the data without bringing the data into your computer’s RAM.
The readr package contains the most common functions in the tidyverse for importing data. The readr package is loaded when you run `library(tidyverse)`. The tidyverse also includes the following packages for importing specific types of data. These are not loaded with `library(tidyverse)`. You must load them individually when you need them.
* DBI \- connect to databases
* haven \- read SPSS, Stata, or SAS data
* httr \- access data over web APIs
* jsonlite \- read JSON
* readxl \- read Excel spreadsheets
* rvest \- scrape data from the web
* xml2 \- read XML
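For example, reading an Excel file requires attaching readxl on top of the core tidyverse; a brief added sketch (the file name is a hypothetical placeholder):
```
library(tidyverse)  # loads readr and the rest of the core packages
library(readxl)     # readxl is not attached by library(tidyverse)

my_data <- read_excel("spreadsheet.xlsx")  # hypothetical file name
```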
### The working directory
Reading and writing files often involves the use of file paths. If you pass R a partial file path, R will append it to the end of the file path that leads to your *working directory*. In other words, partial file paths are interpreted in relation to your working directory. The working directory can change from session to session. As a general rule, the working directory is the directory where:
* Your .Rmd file lives (if you are running code by knitting the document)
* Your .Rproj file lives (if you are using the RStudio Project system)
* You opened the R interpreter from (if you are using a unix terminal/shell window)
Run `getwd()` to see the file path that leads to your current working directory.
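As an added illustration, a partial path is resolved against the working directory, so the two lines below point at the same file (the paths are hypothetical):
```
getwd()                               # e.g. "/home/you/project"
my_data <- read_csv("data/file.csv")  # resolved as "/home/you/project/data/file.csv"
```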
2\.1 Import data quickly with a GUI
-----------------------------------
You want to import data quickly, and you do not mind using a semi\-reproducible graphical user interface (GUI) to do so. Your data is not so big that it needs to stay in a database or external storage system.
#### Solution
#### Discussion
The RStudio IDE provides an Import Dataset button in the Environment pane, which appears in the top right corner of the IDE by default. You can use this button to import data that is stored in plain text files as well as in Excel, SAS, SPSS, and Stata files.
Click the button to launch a window that includes a file browser (below). Use the browser to select the file to import.
After you’ve selected a file, RStudio will display a preview of how the file will be imported as a data frame. Below the preview, RStudio provides a GUI interface to the common options for importing the type of file you have selected. As you customize the options, RStudio updates the data preview to display the results.
The bottom right\-hand corner of the window displays R code that, if run, will reproduce your importation process programmatically. You should copy and save this code if you wish to document your work in a reproducible workflow.
2\.2 Read a comma\-separated values (csv) file
----------------------------------------------
You want to read a .csv file.
#### Solution
```
# To make a .csv file to read
write_file(x = "a,b,c\n1,2,3\n4,5,NA", path = "file.csv")
my_data <- read_csv("file.csv")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Comma separated value (.csv) files are plain text files arranged so that each line contains a row of data and each cell within a line is separated by a comma. Most data analysis software can export their data as .csv files. So if you are in a pinch you can usually export data from a program as a .csv and then read it into R.
You can also use `read_csv()` to import csv files that are hosted at their own unique URL. This only works if you are connected to the internet, e.g.
```
my_data <- read_csv("https://raw.githubusercontent.com/rstudio-education/tidyverse-cookbook/master/data/mtcars.csv")
```
2\.3 Read a semi\-colon delimited file
--------------------------------------
You want to read a file that resembles a [csv](import.html#read-a-comma-separated-values-csv-file), but uses a *semi\-colon* to separate cells.
#### Solution
```
# To make a .csv file to read
write_file(x = "a;b;c\n1;2;3\n4;5;NA", path = "file2.csv")
my_data <- read_csv2("file2.csv")
```
```
## Using ',' as decimal and '.' as grouping mark. Use read_delim() for more control.
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Semi\-colon delimited files are popular in parts of the world that use commas for decimal places, like Europe. Semi\-colon delimited files are often still given the .csv file extension.
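If your file mixes these conventions in a different way, `read_delim()` plus readr’s `locale()` gives finer control than `read_csv2()`. A sketch under the assumption that the file uses `;` as the delimiter and `,` as the decimal mark:
```
my_data <- read_delim(
  "file2.csv",
  delim = ";",
  locale = locale(decimal_mark = ",")  # treat commas as decimal points
)
```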
2\.4 Read a tab delimited file
------------------------------
You want to read a file that resembles a [csv](import.html#read-a-comma-separated-values-csv-file), but uses a *tab* to separate cells.
#### Solution
```
# To make a .tsv file to read
write_file(x = "a\tb\tc\n1\t2\t3\n4\t5\tNA", path = "file.tsv")
my_data <- read_tsv("file.tsv")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Tab delimited files are text files that place each row of a table in its own line, and separate each cell within a line with a tab. Tab delimited files commonly have the extensions .tab, .tsv, and .txt.
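`read_tsv()` is a convenience wrapper; the same file can be read by naming the tab character explicitly with `read_delim()`. An equivalent added sketch:
```
my_data <- read_delim("file.tsv", delim = "\t")  # same result as read_tsv("file.tsv")
```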
2\.5 Read a text file with an unusual delimiter
-----------------------------------------------
You want to read a file that resembles a [csv](import.html#read-a-comma-separated-values-csv-file), but uses *something other than a comma* to separate cells.
#### Solution
```
# To make a file to read
write_file(x = "a|b|c\n1|2|3\n4|5|NA", path = "file.txt")
my_data <- read_delim("file.txt", delim = "|")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Use `read_delim()` as you would [`read_csv()`](import.html#read-a-comma-separated-values-csv-file). Pass the delimiter that your file uses as a character string to the `delim` argument of `read_delim()`.
2\.6 Read a fixed\-width file
-----------------------------
You want to read a .fwf file, which uses the fixed\-width format to represent a table (each column begins *n* spaces from the left on every line).
#### Solution
```
# To make a .fwf file to read
write_file(x = "a b c\n1 2 3\n4 5 NA", path = "file.fwf")
my_data <- read_table("file.fwf")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Fixed width files arrange data so that each row of a table is in its own line, and each cell begins at a *fixed* number of character spaces from the beginning of the line. For example, in the file below, the cells in the second column each begin 20 spaces from the beginning of the line, no matter how many spaces the contents of the first cell required. At first glance, it is easy to mistake fixed width files for tab delimited files, but fixed width files use individual spaces to separate cells, not tabs.
```
John Smith WA 418-Y11-4111
Mary Hartford CA 319-Z19-4341
Evan Nolan IL 219-532-c301
```
`read_table()` will read fixed\-width files where each column is separated by at least one whitespace on every line. Use `read_fwf()` to read fixed\-width files that have non\-standard formatting.
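If the columns do not line up neatly enough for `read_table()`, you can describe the column positions yourself with `read_fwf()`. A minimal sketch follows; the file name, widths, and column names are assumptions chosen to mirror the contact list above.
```
library(readr)

# Hypothetical file laid out like the contact list above.
# The widths are illustrative guesses: name, state, phone.
contacts <- read_fwf(
  "contacts.txt",
  col_positions = fwf_widths(
    widths = c(20, 10, 12),
    col_names = c("name", "state", "phone")
  )
)
```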
2\.7 Read a file with no header
-------------------------------
You want to read a file that does not include a line of column names at the start of the file.
#### Solution
```
# To make a .csv file to read
write_file("1,2,3\n4,5,NA","file.csv")
my_data <- read_csv("file.csv", col_names = c("a", "b", "c"))
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
If you do not wish to supply column names, set `col_names = FALSE`. The `col_names` argument works for all readr functions that read tabular data.
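For example, a sketch of reading the same headerless file without supplying any names; readr then fills in placeholder names such as `X1`, `X2`, and `X3`:
```
# Let readr generate placeholder column names instead of supplying your own
my_data <- read_csv("file.csv", col_names = FALSE)
```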
2\.8 Skip lines at the start of a file when reading a file
----------------------------------------------------------
You want to read a portion of a file, skipping one or more lines at the *start* of the file. For example, you want to avoid reading the introductory text at the start of a file.
#### Solution
```
# To make a .csv file to read
write_file("a,b,c\n1,2,3\n4,5,NA","file.csv")
my_data <- read_csv("file.csv", skip = 1)
```
```
## Parsed with column specification:
## cols(
## `1` = col_double(),
## `2` = col_double(),
## `3` = col_logical()
## )
```
```
my_data
```
```
## # A tibble: 1 x 3
## `1` `2` `3`
## <dbl> <dbl> <lgl>
## 1 4 5 NA
```
#### Discussion
Set `skip` equal to the number of lines you wish to skip before you begin reading the file. The `skip` argument works for all readr functions that read tabular data, and can be combined with the [`n_max` argument](import.html#skip-lines-at-the-end-of-a-file-when-reading-a-file) to skip more lines at the end of a file.
When in doubt, first try reading the file with `read_lines("file.csv")` to determine how many lines you need to skip.
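For example, this small sketch prints the first few raw lines of the file so you can see where the data actually begins:
```
# Peek at the top of the file before choosing a value for `skip`
read_lines("file.csv", n_max = 5)
```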
2\.9 Skip lines at the end of a file when reading a file
--------------------------------------------------------
You want to read a portion of a file, skipping one or more lines at the *end* of the file. For example, you want to avoid reading in a text comment that appears at the end of a file.
#### Solution
```
# To make a .csv file to read
write_file("a,b,c\n1,2,3\n4,5,NA","file.csv")
my_data <- read_csv("file.csv", n_max = 1)
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 1 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
```
#### Discussion
Set `n_max` equal to the number of *non\-header* lines you wish to read before you stop reading lines. The `n_max` argument works for all readr functions that read tabular data.
When in doubt, first try reading the file with `read_lines("file.csv")` to determine how many lines you need to skip.
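If you know how many trailing lines to drop but not how long the file is, one approach (sketched below) is to count the lines first and subtract the header plus the unwanted tail:
```
# Count every line, then read all data rows except the final one
n_lines <- length(read_lines("file.csv"))
my_data <- read_csv("file.csv", n_max = n_lines - 2) # minus the header and the last line
```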
2\.10 Replace missing values as you read a file
-----------------------------------------------
You want to convert missing values to NA as you read a file, because they have been recorded with a different symbol.
#### Solution
```
# To make a .csv file to read
write_file("a,b,c\n1,2,3\n4,5,.","file.csv")
my_data <- read_csv("file.csv", na = ".")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
R uses the string `NA` to represent missing data. If your file was exported from another language or application, it might use a different convention. R is likely to read in that convention as a character string, failing to understand that it represents missing data. To prevent this, explicitly tell `read_csv()` which strings are used to represent missing data.
The `na` argument works for all readr functions that read tabular data.
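You can also pass `na` a character vector if missing values appear under several different symbols. For example, this sketch treats `.`, `NA`, and empty cells as missing:
```
# Treat ".", "NA", and empty strings as missing values
my_data <- read_csv("file.csv", na = c(".", "NA", ""))
```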
2\.11 Read a compressed RDS file
--------------------------------
You want to read an .RDS file.
#### Solution
```
# To make a .RDS file to read
saveRDS(pressure, file = "file.RDS")
my_data <- readRDS("file.RDS")
my_data
```
```
## temperature pressure
## 1 0 0.0002
## 2 20 0.0012
## 3 40 0.0060
## 4 60 0.0300
## 5 80 0.0900
## 6 100 0.2700
## 7 120 0.7500
## 8 140 1.8500
## 9 160 4.2000
## 10 180 8.8000
## 11 200 17.3000
## 12 220 32.1000
## 13 240 57.0000
## 14 260 96.0000
## 15 280 157.0000
## 16 300 247.0000
## 17 320 376.0000
## 18 340 558.0000
## 19 360 806.0000
```
#### Discussion
`readRDS` is a base R function—you do not need to run `library(tidyverse)` to use it. RDS is a file format native to R for saving compressed content. RDS files are *not* text files and are not human readable in their raw form. Each RDS file contains a single object, which makes it easy to assign its output directly to a single R object. This is not necessarily the case for .RData files, which makes .RDS files safer to use.
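A short sketch of that difference, using illustrative file names:
```
# readRDS() returns the object, so you pick the name at assignment time
saveRDS(pressure, file = "pressure.RDS")
result <- readRDS("pressure.RDS")

# save()/load() instead restore objects under their original names
save(pressure, file = "pressure.RData")
load("pressure.RData") # silently recreates an object named `pressure`
```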
2\.12 Read an Excel spreadsheet
-------------------------------
You want to read an Excel spreadsheet.
#### Solution
```
library(readxl)
# File path to an example excel spreadsheet to import
file_path <- readxl_example("datasets.xlsx")
my_data <- read_excel(file_path)
my_data
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <chr>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
#### Discussion
By default, `read_excel()` reads the *first* sheet in an Excel spreadsheet. See the [next recipe](import.html#read-a-specific-sheet-from-an-excel-spreadsheet) to read in sheets other than the first.
`read_excel()` comes from the readxl package, which must be loaded with `library(readxl)`. Pass `read_excel()` a file path that leads to the Excel file you wish to read; unlike the readr functions, `read_excel()` cannot read a file straight from a URL, so download remote files first. `readxl_example("datasets.xlsx")` returns a file path that leads to an example Excel spreadsheet that is conveniently installed with the readxl package.
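For example, a sketch that downloads a remote spreadsheet before reading it; the URL and file name are placeholders, not a real data source:
```
# read_excel() needs a local file, so download remote spreadsheets first
download.file(
  "https://example.com/data.xlsx", # placeholder URL
  destfile = "data.xlsx",
  mode = "wb" # binary mode matters on Windows
)
my_data <- read_excel("data.xlsx")
```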
2\.13 Read a specific sheet from an Excel spreadsheet
-----------------------------------------------------
You want to read a specific sheet from an Excel spreadsheet.
#### Solution
```
library(readxl)
# File path to an example excel spreadsheet to import
file_path <- readxl_example("datasets.xlsx")
my_data <- read_excel(file_path, sheet = "chickwts")
my_data
```
```
## # A tibble: 71 x 2
## weight feed
## <dbl> <chr>
## 1 179 horsebean
## 2 160 horsebean
## 3 136 horsebean
## 4 227 horsebean
## 5 217 horsebean
## 6 168 horsebean
## 7 108 horsebean
## 8 124 horsebean
## 9 143 horsebean
## 10 140 horsebean
## # … with 61 more rows
```
#### Discussion
To read a specific sheet from a larger Excel spreadsheet, use the `sheet` argument to pass [`read_excel()`](import.html#read-an-excel-spreadsheet) the name of the sheet as a character string. You can also pass `sheet` the number of the sheet to read. For example, `read_excel(file_path, sheet = 2)` would read the second sheet in the file.
If you are unsure of the name of a sheet, use the `excel_sheets()` function to list the names of the spreadsheets within a file, without importing the file, e.g. `excel_sheets(file_path)`.
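Putting those two tips together, a small sketch using the same example workbook:
```
library(readxl)
file_path <- readxl_example("datasets.xlsx")

# List the sheet names without importing anything
excel_sheets(file_path)

# Then read a sheet by position instead of by name
second_sheet <- read_excel(file_path, sheet = 2)
```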
2\.14 Read a field of cells from an excel spreadsheet
-----------------------------------------------------
You want to read a subset of cells from an Excel spreadsheet.
#### Solution
```
library(readxl)
# File path to an example excel spreadsheet to import
file_path <- readxl_example("datasets.xlsx")
my_data <- read_excel(file_path, range = "C1:E4", skip = 3, n_max = 10)
my_data
```
```
## # A tibble: 3 x 3
## Petal.Length Petal.Width Species
## <dbl> <dbl> <chr>
## 1 1.4 0.2 setosa
## 2 1.4 0.2 setosa
## 3 1.3 0.2 setosa
```
#### Discussion
`read_excel()` adopts the [`skip`](import.html#skip-lines-at-the-start-of-a-file-when-reading-a-file) and [`n_max`](import.html#skip-lines-at-the-end-of-a-file-when-reading-a-file) arguments of readr functions to skip rows at the top of the spreadsheet and to control how far down the spreadsheet to read. Use the `range` argument to specify a rectangle of cells to read: two cell references separated by a `:`, such as `"C1:E4"`, select those two cells and every cell between them.
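If you only know the columns or rows you want, the cellranger helpers that readxl re-exports can describe the range for you. A sketch, assuming the same example workbook:
```
library(readxl)
file_path <- readxl_example("datasets.xlsx")

# Whole columns C through E, every row that contains data
cols_only <- read_excel(file_path, range = cell_cols("C:E"))

# A sheet-qualified rectangle: cells A1:B11 of the chickwts sheet
chick <- read_excel(file_path, range = "chickwts!A1:B11")
```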
2\.15 Write to a comma\-separated values (csv) file
---------------------------------------------------
You want to save a tibble or data frame as a .csv file.
#### Solution
```
write_csv(iris, path = "my_file.csv")
```
#### Discussion
`write_csv()` is about twice as fast as base R’s `write.csv()` and never adds rownames to a table. Tidyverse users tend to place important information in its own column, where it can be easily accessed, instead of in the rownames, where it cannot be as easily accessed.
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_csv()`.
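For example, a sketch that creates the target directory before writing into it; the directory name is illustrative:
```
# Create the sub-directory first; write_csv() will not create it for you
dir.create("output", showWarnings = FALSE)
write_csv(iris, path = "output/iris.csv")
```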
2\.16 Write to a semi\-colon delimited file
-------------------------------------------
You want to save a tibble or data frame as a .csv file *that uses semi\-colons to delimit cells*.
#### Solution
```
write_csv2(iris, path = "my_file.csv")
```
#### Discussion
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_csv2()`.
2\.17 Write to a tab\-delimited file
------------------------------------
You want to save a tibble or data frame as a tab delimited file.
#### Solution
```
write_tsv(iris, path = "my_file.tsv")
```
#### Discussion
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_tsv()`.
2\.18 Write to a text file with arbitrary delimiters
----------------------------------------------------
You want to save a tibble or data frame as a plain text file that uses an unusual delimiter.
#### Solution
```
write_delim(iris, path = "my_file.tsv", delim = "|")
```
#### Discussion
`write_delim()` behaves like [`write_csv()`](import.html#write-to-a-comma-separate-values-csv-file), but requires a `delim` argument. Pass `delim` the delimiter you wish to use to separate cells, as a character string.
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_delim()`.
2\.19 Write to a compressed RDS file
------------------------------------
You want to write a tibble or data frame to a .RDS file.
#### Solution
```
saveRDS(iris, file = "my_file.RDS")
```
#### Discussion
`saveRDS()`, along with `readRDS()`, is a base R function, which explains the difference in the argument names as well as the read/write naming pattern compared to readr functions—take note!
R will save your file at the location described by appending the `file` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `saveRDS()`. Use the .RDS file extension in the file name to help make the uncommon file format obvious.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/import.html |
2\.2 Read a comma\-separated values (csv) file
----------------------------------------------
You want to read a .csv file.
#### Solution
```
# To make a .csv file to read
write_file(x = "a,b,c\n1,2,3\n4,5,NA", path = "file.csv")
my_data <- read_csv("file.csv")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Comma separated value (.csv) files are plain text files arranged so that each line contains a row of data and each cell within a line is separated by a comma. Most data analysis software can export their data as .csv files. So if you are in a pinch you can usually export data from a program as a .csv and then read it into R.
You can also use `read_csv()` to import csv files that are hosted at their own unique URL. This only works if you are connected to the internet, e.g.
```
my_data <- read_csv("https://raw.githubusercontent.com/rstudio-education/tidyverse-cookbook/master/data/mtcars.csv")
```
2\.3 Read a semi\-colon delimited file
--------------------------------------
You want to read a file that resembles a [csv](import.html#read-a-comma-separated-values-csv-file), but uses a *semi\-colon* to separate cells.
#### Solution
```
# To make a .csv file to read
write_file(x = "a;b;c\n1;2;3\n4;5;NA", path = "file2.csv")
my_data <- read_csv2("file2.csv")
```
```
## Using ',' as decimal and '.' as grouping mark. Use read_delim() for more control.
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Semi\-colon delimited files are popular in parts of the world that use commas for decimal places, like Europe. Semi\-colon delimited files are often still given the .csv file extension.
2\.4 Read a tab delimited file
------------------------------
You want to read a file that resembles a [csv](import.html#read-a-comma-separated-values-csv-file), but uses a *tab* to separate cells.
#### Solution
```
# To make a .tsv file to read
write_file(x = "a\tb\tc\n1\t2\t3\n4\t5\tNA", path = "file.tsv")
my_data <- read_tsv("file.tsv")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Tab delimited files are text files that place each row of a table in its own line, and separate each cell within a line with a tab. Tab delimited files commonly have the extensions .tab, .tsv, and .txt.
2\.5 Read a text file with an unusual delimiter
-----------------------------------------------
You want to read a file that resembles a [csv](import.html#read-a-comma-separated-values-csv-file), but uses *something other than a comma* to separate cells.
#### Solution
```
# To make a file to read
write_file(x = "a|b|c\n1|2|3\n4|5|NA", path = "file.txt")
my_data <- read_delim("file.txt", delim = "|")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Use `read_delim()` as you would [`read_csv()`](import.html#read-a-comma-separated-values-csv-file). Pass the delimiter that your file uses as a character string to the `delim` argument of `read_delim()`.
2\.6 Read a fixed\-width file
-----------------------------
You want to read a .fwf file, which uses the fixed width format to represent a table (each column begins \\(n\\) spaces from the left for every line).
#### Solution
```
# To make a .fwf file to read
write_file(x = "a b c\n1 2 3\n4 5 NA", path = "file.fwf")
my_data <- read_table("file.fwf")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
Fixed width files arrange data so that each row of a table is in its own line, and each cell begins at a *fixed* number of character spaces from the beginning of the line. For example, in the file below, the cells in the second column each begin 20 spaces from the beginning of the line, no matter how many spaces the contents of the first cell required. At first glance, it is easy to mistake fixed width files for tab delimited files, but fixed width files use individual spaces to separate cells, not tabs.
```
John Smith WA 418-Y11-4111
Mary Hartford CA 319-Z19-4341
Evan Nolan IL 219-532-c301
```
`read_table()` will read fixed\-width files where each column is separated by at least one whitespace on every line. Use `read_fwf()` to read fixed\-width files that have non\-standard formatting.
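For files like the contact list above, `read_fwf()` lets you state where each column starts and ends. A sketch, assuming the list is saved as `contacts.txt`; the widths and column names below are illustrative guesses, not part of the original recipe:
```
# Hypothetical widths: 20 characters for name, 10 for state, 12 for phone
my_data <- read_fwf(
  "contacts.txt",
  col_positions = fwf_widths(c(20, 10, 12), c("name", "state", "phone"))
)
```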
2\.7 Read a file with no header
-------------------------------
You want to read a file that does not include a line of column names at the start of the file.
#### Solution
```
# To make a .csv file to read
write_file("1,2,3\n4,5,NA","file.csv")
my_data <- read_csv("file.csv", col_names = c("a", "b", "c"))
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
If you do not wish to supply column names, set `col_names = FALSE`. The `col_names` argument works for all readr functions that read tabular data.
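For example, a sketch of the `col_names = FALSE` case, where readr invents placeholder names instead:
```
# readr fills in default column names (X1, X2, X3) when col_names = FALSE
my_data <- read_csv("file.csv", col_names = FALSE)
```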
2\.8 Skip lines at the start of a file when reading a file
----------------------------------------------------------
You want to read a portion of a file, skipping one or more lines at the *start* of the file. For example, you want to avoid reading the introductory text at the start of a file.
#### Solution
```
# To make a .csv file to read
write_file("a,b,c\n1,2,3\n4,5,NA","file.csv")
my_data <- read_csv("file.csv", skip = 1)
```
```
## Parsed with column specification:
## cols(
## `1` = col_double(),
## `2` = col_double(),
## `3` = col_logical()
## )
```
```
my_data
```
```
## # A tibble: 1 x 3
## `1` `2` `3`
## <dbl> <dbl> <lgl>
## 1 4 5 NA
```
#### Discussion
Set `skip` equal to the number of lines you wish to skip before you begin reading the file. The `skip` argument works for all readr functions that read tabular data, and can be combined with the [`n_max` argument](import.html#skip-lines-at-the-end-of-a-file-when-reading-a-file) to skip more lines at the end of a file.
When in doubt, first try reading the file with `read_lines("file.csv")` to determine how many lines you need to skip.
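A sketch of that quick inspection (`n_max` simply limits how many raw lines are returned):
```
# Peek at the first few raw lines of the file to decide how many to skip
read_lines("file.csv", n_max = 5)
```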
2\.9 Skip lines at the end of a file when reading a file
--------------------------------------------------------
You want to read a portion of a file, skipping one or more lines at the *end* of the file. For example, you want to avoid reading in a text comment that appears at the end of a file.
#### Solution
```
# To make a .csv file to read
write_file("a,b,c\n1,2,3\n4,5,NA","file.csv")
my_data <- read_csv("file.csv", n_max = 1)
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 1 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
```
#### Discussion
Set `n_max` equal to the number of *non\-header* lines you wish to read before you stop reading lines. The `n_max` argument works for all readr functions that read tabular data.
When in doubt, first try reading the file with `read_lines("file.csv")` to determine how many lines you need to skip.
2\.10 Replace missing values as you read a file
-----------------------------------------------
You want to convert missing values to NA as you read a file, because they have been recorded with a different symbol.
#### Solution
```
# To make a .csv file to read
write_file("a,b,c\n1,2,3\n4,5,.","file.csv")
my_data <- read_csv("file.csv", na = ".")
```
```
## Parsed with column specification:
## cols(
## a = col_double(),
## b = col_double(),
## c = col_double()
## )
```
```
my_data
```
```
## # A tibble: 2 x 3
## a b c
## <dbl> <dbl> <dbl>
## 1 1 2 3
## 2 4 5 NA
```
#### Discussion
R uses the string `NA` to represent missing data. If your file was exported from another language or application, it might use a different convention. R is likely to read that convention in as a character string, failing to understand that it represents missing data. To prevent this, explicitly tell `read_csv()` which strings are used to represent missing data.
The `na` argument works for all readr functions that read tabular data.
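`na` also accepts a character vector, so you can list several missing\-data conventions in one call; a sketch (the conventions shown are hypothetical):
```
# Treat ".", empty cells, and -99 as missing values
my_data <- read_csv("file.csv", na = c(".", "", "-99"))
```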
2\.11 Read a compressed RDS file
--------------------------------
You want to read an .RDS file.
#### Solution
```
# To make a .RDS file to read
saveRDS(pressure, file = "file.RDS")
my_data <- readRDS("file.RDS")
my_data
```
```
## temperature pressure
## 1 0 0.0002
## 2 20 0.0012
## 3 40 0.0060
## 4 60 0.0300
## 5 80 0.0900
## 6 100 0.2700
## 7 120 0.7500
## 8 140 1.8500
## 9 160 4.2000
## 10 180 8.8000
## 11 200 17.3000
## 12 220 32.1000
## 13 240 57.0000
## 14 260 96.0000
## 15 280 157.0000
## 16 300 247.0000
## 17 320 376.0000
## 18 340 558.0000
## 19 360 806.0000
```
#### Discussion
`readRDS` is a base R function—you do not need to run `library(tidyverse)` to use it. RDS is a file format native to R for saving compressed content. RDS files are *not* text files and are not human readable in their raw form. Each RDS file contains a single object, which makes it easy to assign its output directly to a single R object. This is not necessarily the case for .RData files, which makes .RDS files safer to use.
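The contrast with .RData files, sketched below, is why assignment works so cleanly with `readRDS()` (the .RData file name is hypothetical):
```
# readRDS() returns the saved object, so you choose the name:
my_data <- readRDS("file.RDS")
# load() would restore .RData objects under their original names as a side effect:
# load("file.RData")
```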
2\.12 Read an Excel spreadsheet
-------------------------------
You want to read an Excel spreadsheet.
#### Solution
```
library(readxl)
# File path to an example excel spreadsheet to import
file_path <- readxl_example("datasets.xlsx")
my_data <- read_excel(file_path)
my_data
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <chr>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
#### Discussion
By default, `read_excel()` reads the *first* sheet in an Excel spreadsheet. See the [next recipe](import.html#read-a-specific-sheet-from-an-excel-spreadsheet) to read in sheets other than the first.
`read_excel()` comes in the readxl package, which must be loaded separately with `library(readxl)`. Pass `read_excel()` a file path or URL that leads to the Excel file you wish to read. `readxl_example("datasets.xlsx")` returns a file path that leads to an example Excel spreadsheet that is conveniently installed with the readxl package.
2\.13 Read a specific sheet from an Excel spreadsheet
-----------------------------------------------------
You want to read a specific sheet from an Excel spreadsheet.
#### Solution
```
library(readxl)
# File path to an example excel spreadsheet to import
file_path <- readxl_example("datasets.xlsx")
my_data <- read_excel(file_path, sheet = "chickwts")
my_data
```
```
## # A tibble: 71 x 2
## weight feed
## <dbl> <chr>
## 1 179 horsebean
## 2 160 horsebean
## 3 136 horsebean
## 4 227 horsebean
## 5 217 horsebean
## 6 168 horsebean
## 7 108 horsebean
## 8 124 horsebean
## 9 143 horsebean
## 10 140 horsebean
## # … with 61 more rows
```
#### Discussion
To read a specific sheet from a larger Excel spreadsheet, use the `sheet` argument to pass [`read_excel()`](import.html#read-an-excel-spreadsheet) the name of the sheet as a character string. You can also pass `sheet` the number of the sheet to read. For example, `read_excel(file_path, sheet = 2)` would read the second spreadsheet in the file.
If you are unsure of the name of a sheet, use the `excel_sheets()` function to list the names of the spreadsheets within a file, without importing the file, e.g. `excel_sheets(file_path)`.
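For example, a sketch with the example workbook; the sheet names shown are what the readxl example file is expected to contain:
```
excel_sheets(file_path)
## e.g. "iris" "mtcars" "chickwts" "quakes"
```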
2\.14 Read a field of cells from an excel spreadsheet
-----------------------------------------------------
You want to read a subset of cells from an Excel spreadsheet.
#### Solution
```
library(readxl)
# File path to an example excel spreadsheet to import
file_path <- readxl_example("datasets.xlsx")
my_data <- read_excel(file_path, range = "C1:E4", skip = 3, n_max = 10)
my_data
```
```
## # A tibble: 3 x 3
## Petal.Length Petal.Width Species
## <dbl> <dbl> <chr>
## 1 1.4 0.2 setosa
## 2 1.4 0.2 setosa
## 3 1.3 0.2 setosa
```
#### Discussion
`read_excel()` adopts the [`skip`](import.html#skip-lines-at-the-start-of-a-file-when-reading-a-file) and [`n_max`](import.html#skip-lines-at-the-end-of-a-file-when-reading-a-file) arguments of readr functions to skip rows at the top of the spreadsheet and to control how far down the spreadsheet to read. Use the `range` argument to specify a subset of cells to read. Two cell references separated by a `:`, as in `"C1:E4"`, select those two cells and every cell in the rectangle between them.
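readxl also accepts range helpers (re\-exported from the cellranger package) for describing ranges; a sketch, with illustrative ranges:
```
# Whole columns C through E
read_excel(file_path, range = cell_cols("C:E"))
# A rectangle of cells on a named sheet
read_excel(file_path, range = "mtcars!B1:D5")
```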
2\.15 Write to a comma\-separate values (csv) file
--------------------------------------------------
You want to save a tibble or data frame as a .csv file.
#### Solution
```
write_csv(iris, path = "my_file.csv")
```
#### Discussion
`write_csv()` is about twice as fast as base R’s `write.csv()` and never adds rownames to a table. Tidyverse users tend to place important information in its own column, where it can be easily accessed, instead of in the rownames, where it cannot be as easily accessed.
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_csv()`.
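If the directory does not exist yet, create it first; a sketch (the `output` directory name is made up):
```
# Create the directory, then write into it
dir.create("output", showWarnings = FALSE)
write_csv(iris, path = "output/my_file.csv")
```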
2\.16 Write to a semi\-colon delimited file
-------------------------------------------
You want to save a tibble or data frame as a .csv file *that uses semi\-colons to delimit cells*.
#### Solution
```
write_csv2(iris, path = "my_file.csv")
```
#### Discussion
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_csv2()`.
2\.17 Write to a tab\-delimited file
------------------------------------
You want to save a tibble or data frame as a tab delimited file.
#### Solution
```
write_tsv(iris, path = "my_file.tsv")
```
#### Discussion
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_tsv()`.
2\.18 Write to a text file with arbitrary delimiters
----------------------------------------------------
You want to save a tibble or data frame as a plain text file that uses an unusual delimiter.
#### Solution
```
write_delim(iris, path = "my_file.tsv", delim = "|")
```
#### Discussion
`write_delim()` behaves like [`write_csv()`](import.html#write-to-a-comma-separate-values-csv-file), but requires a `delim` argument. Pass `delim` the delimiter you wish to use to separate cells, as a character string.
R will save your file at the location described by appending the `path` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `write_delim()`.
2\.19 Write to a compressed RDS file
------------------------------------
You want to write a tibble or data frame to a .RDS file.
#### Solution
```
saveRDS(iris, file = "my_file.RDS")
```
#### Discussion
`saveRDS()`, along with `readRDS()`, is a base R function, which explains the difference in the argument names as well as the read/write naming pattern compared to readr functions—take note!
R will save your file at the location described by appending the `file` argument to your [working directory](program.html#what-you-should-know-before-you-begin). If your path contains directories, these must exist *before* you run `saveRDS()`. Use the .RDS file extension in the file name to help make the uncommon file format obvious.
What you should know before you begin
-------------------------------------
Before you can manipulate data with R, you need to import the data into R’s memory, or build a connection to the data that R can use to access the data remotely. For example, you can build a connection to data that lives in a database.
How you import your data will depend on the format of the data. The most common way to store small data sets is as a plain text file. Data may also be stored in a proprietary format associated with a specific piece of software, such as SAS, SPSS, or Microsoft Excel. Data used on the internet is often stored as a JSON or XML file. Large data sets may be stored in a database or a distributed storage system.
When you import data into R, R stores the data in your computer’s RAM while you manipulate it. This creates a size limitation: truly big data sets should be stored outside of R in a database or a distributed storage system. You can then create a connection to the system that R can use to access the data without bringing the data into your computer’s RAM.
The readr package contains the most common functions in the tidyverse for importing data. The readr package is loaded when you run `library(tidyverse)`. The tidyverse also includes the following packages for importing specific types of data. These are not loaded with `library(tidyverse)`. You must load them individually when you need them.
* DBI \- connect to databases
* haven \- read SPSS, Stata, or SAS data
* httr \- access data over web APIs
* jsonlite \- read JSON
* readxl \- read Excel spreadsheets
* rvest \- scrape data from the web
* xml2 \- read XML
### The working directory
Reading and writing files often involves the use of file paths. If you pass R a partial file path, R will append it to the end of the file path that leads to your *working directory*. In other words, partial file paths are interpreted in relation to your working directory. The working directory can change from session to session. As a general rule, the working directory is the directory where:
* Your .Rmd file lives (if you are running code by knitting the document)
* Your .Rproj file lives (if you are using the RStudio Project system)
* You opened the R interpreter from (if you are using a unix terminal/shell window)
Run `getwd()` to see the file path that leads to your current working directory.
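A sketch (the path shown is only an example of what you might see):
```
getwd()
## e.g. "/Users/you/my-project"
# setwd("some/other/directory") would change it for the rest of the session
```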
2\.1 Import data quickly with a GUI
-----------------------------------
You want to import data quickly, and you do not mind using a semi\-reproducible graphical user interface (GUI) to do so. Your data is not so big that it needs to stay in a database or external storage system.
#### Solution
Click the Import Dataset button in the RStudio IDE (the original book shows a screenshot of the button here).
#### Discussion
The RStudio IDE provides an Import Dataset button in the Environment pane, which appears in the top right corner of the IDE by default. You can use this button to import data that is stored in plain text files as well as in Excel, SAS, SPSS, and Stata files.
Click the button to launch a window that includes a file browser (below). Use the browser to select the file to import.
After you’ve selected a file, RStudio will display a preview of how the file will be imported as a data frame. Below the preview, RStudio provides a GUI interface to the common options for importing the type of file you have selected. As you customize the options, RStudio updates the data preview to display the results.
The bottom right\-hand corner of the window displays R code that, if run, will reproduce your importation process programmatically. You should copy and save this code if you wish to document your work in a reproducible workflow.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/tidy.html |
3 Tidy
======
---
This chapter includes the following recipes:
1. [Create a tibble manually](tidy.html#create-a-tibble-manually)
2. [Convert a data frame to a tibble](tidy.html#convert-a-data-frame-to-a-tibble)
3. [Convert a tibble to a data frame](tidy.html#convert-a-tibble-to-a-data-frame)
4. [Preview the contents of a tibble](tidy.html#preview-the-contents-of-a-tibble)
5. [Inspect every cell of a tibble](tidy.html#inspect-every-cell-of-a-tibble)
6. [Spread a pair of columns into a field of cells](tidy.html#spread-a-pair-of-columns-into-a-field-of-cells)
7. [Gather a field of cells into a pair of columns](tidy.html#gather-a-field-of-cells-into-a-pair-of-columns)
8. [Separate a column into new columns](tidy.html#separate-a-column-into-new-columns)
9. [Unite multiple columns into a single column](tidy.html#unite-multiple-columns-into-a-single-column)
---
What you should know before you begin
-------------------------------------
Data tidying refers to reshaping your data into a tidy data frame or [tibble](how-to-use-this-book.html#tidy-data). Data tidying is an important first step for your analysis because every tidyverse function will expect your data to be stored as **Tidy Data**.
Tidy data is tabular data organized so that:
1. Each column contains a single variable
2. Each row contains a single observation
Tidy data is not an arbitrary requirement of the tidyverse; it is the ideal data format for doing data science with R. Tidy data makes it easy to extract every value of a variable to build a plot or to compute a summary statistic. Tidy data also makes it easy to compute new variables; when your data is tidy, you can rely on R’s rowwise operations to maintain the integrity of your observations. Moreover, R can directly manipulate tidy data with R’s fast, built\-in vectorised operations, which lets your code run as fast as possible.
The definition of Tidy Data isn’t complete until you define variable and observation, so let’s borrow two definitions from [*R for Data Science*](https://r4ds.had.co.nz/exploratory-data-analysis.html):
1. A **variable** is a quantity, quality, or property that you can measure.
2. An **observation** is a set of measurements made under similar conditions (you usually make all of the measurements in an observation at the same time and on the same object).
As you work with data, you will be surprised to realize that what is a variable (or observation) will depend less on the data itself and more on what you are trying to do with it. With enough mental flexibility, you can consider anything to be a variable. However, some variables will be more useful than others for any specific task. In general, if you can formulate your task as an equation (math or code that contains an equals sign), the most useful variables will be the names in the equation.
3\.1 Create a tibble manually
-----------------------------
You want to create a tibble from scratch by typing in the contents of the tibble.
#### Solution
```
tribble(~number, ~letter, ~greek,
1, "a", "alpha",
2, "b", "beta",
3, "c", "gamma")
```
```
## # A tibble: 3 x 3
## number letter greek
## <dbl> <chr> <chr>
## 1 1 a alpha
## 2 2 b beta
## 3 3 c gamma
```
#### Discussion
`tribble()` creates a tibble and tricks you into typing out a preview of the result. To use `tribble()`, list each column name preceded by a `~`, then list the values of the tribble in a rowwise fashion. If you take care to align your columns, the transposed syntax of `tribble()` becomes a preview of the table.
You can also create a tibble with `tibble()`, whose syntax mirrors `data.frame()`:
```
tibble(number = c(1, 2, 3),
letter = c("a", "b", "c"),
greek = c("alpha", "beta", "gamma"))
```
3\.2 Convert a data frame to a tibble
-------------------------------------
You want to convert a data frame to a tibble.
#### Solution
```
as_tibble(iris)
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
3\.3 Convert a tibble to a data frame
-------------------------------------
You want to convert a tibble to a data frame.
#### Solution
```
as.data.frame(table1)
```
```
## country year cases population
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Be careful to use `as.data.frame()` and not `as_data_frame()`, which is an alias for `as_tibble()`.
3\.4 Preview the contents of a tibble
-------------------------------------
You want to get an idea of what variables and values are stored in a tibble.
#### Solution
```
storms
```
```
## # A tibble: 10,010 x 13
## name year month day hour lat long status category wind pressure
## <chr> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <chr> <ord> <int> <int>
## 1 Amy 1975 6 27 0 27.5 -79 tropi… -1 25 1013
## 2 Amy 1975 6 27 6 28.5 -79 tropi… -1 25 1013
## 3 Amy 1975 6 27 12 29.5 -79 tropi… -1 25 1013
## 4 Amy 1975 6 27 18 30.5 -79 tropi… -1 25 1013
## 5 Amy 1975 6 28 0 31.5 -78.8 tropi… -1 25 1012
## 6 Amy 1975 6 28 6 32.4 -78.7 tropi… -1 25 1012
## 7 Amy 1975 6 28 12 33.3 -78 tropi… -1 25 1011
## 8 Amy 1975 6 28 18 34 -77 tropi… -1 30 1006
## 9 Amy 1975 6 29 0 34.4 -75.8 tropi… 0 35 1004
## 10 Amy 1975 6 29 6 34 -74.8 tropi… 0 40 1002
## # … with 10,000 more rows, and 2 more variables: ts_diameter <dbl>,
## # hu_diameter <dbl>
```
#### Discussion
When you call a tibble directly, R will display enough information to give you a quick sense of the contents of the tibble. This includes:
1. the dimensions of the tibble
2. the column names and types
3. as many cells of the tibble as will fit comfortably in your console window
3\.5 Inspect every cell of a tibble
-----------------------------------
You want to see every value that is stored in a tibble.
#### Solution
```
View(storms)
```
#### Discussion
`View()` (with a capital V) opens the tibble in R’s data viewer, which will let you scroll to every cell in the tibble.
3\.6 Spread a pair of columns into a field of cells
---------------------------------------------------
You want to **pivot**, **convert long data to wide**, or move variable names out of the cells and into the column names. These are different ways of describing the same action.
For example, `table2` contains `type`, which is a column that repeats the variable names `case` and `population`. To make `table2` tidy, you must move `case` and `population` values into their own columns.
```
table2
```
```
## # A tibble: 12 x 4
## country year type count
## <chr> <int> <chr> <int>
## 1 Afghanistan 1999 cases 745
## 2 Afghanistan 1999 population 19987071
## 3 Afghanistan 2000 cases 2666
## 4 Afghanistan 2000 population 20595360
## 5 Brazil 1999 cases 37737
## 6 Brazil 1999 population 172006362
## 7 Brazil 2000 cases 80488
## 8 Brazil 2000 population 174504898
## 9 China 1999 cases 212258
## 10 China 1999 population 1272915272
## 11 China 2000 cases 213766
## 12 China 2000 population 1280428583
```
#### Solution
```
table2 %>%
spread(key = type, value = count)
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
To use `spread()`, assign the column that contains variable names to `key`. Assign the column that contains the values that are associated with those names to `value`. `spread()` will:
1. Make a copy of the original table
2. Remove the `key` and `value` columns from the copy
3. Remove every duplicate row in the data set that remains
4. Insert a new column for each unique variable name in the `key` column
5. Fill the new columns with the values of the `value` column in a way that preserves every relationship between values in the original data set
Since this is easier to see than explain, you may want to study the result above.
Each new column created by `spread()` will inherit the data type of the `value` column. If you would like to convert each new column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
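A sketch of the same reshape with `convert = TRUE`:
```
# Let spread() re-guess the most sensible type for each new column
table2 %>%
  spread(key = type, value = count, convert = TRUE)
```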
3\.7 Gather a field of cells into a pair of columns
---------------------------------------------------
You want to **convert wide data to long**, reshape a **two\-by\-two table**, or move variable values out of the column names and into the cells. These are different ways of describing the same action.
For example, `table4a` is a two\-by\-two table with the column names `1999` and `2000`. These names are values of a `year` variable. The field of cells in `table4a` contains counts of TB cases, which is another variable. To make `table4a` tidy, you need to move year and case values into their own columns.
```
table4a
```
```
## # A tibble: 3 x 3
## country `1999` `2000`
## * <chr> <int> <int>
## 1 Afghanistan 745 2666
## 2 Brazil 37737 80488
## 3 China 212258 213766
```
#### Solution
```
table4a %>%
gather(key = "year", value = "cases", 2:3)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <chr> <int>
## 1 Afghanistan 1999 745
## 2 Brazil 1999 37737
## 3 China 1999 212258
## 4 Afghanistan 2000 2666
## 5 Brazil 2000 80488
## 6 China 2000 213766
```
#### Discussion
`gather()` is the inverse of `spread()`: `gather()` collapses a field of cells that spans several columns into two new columns:
1. A column of former “keys”, which contains the column names of the former field
2. A column of former “values”, which contains the cell values of the former field
To use `gather()`, pick names for the new `key` and `value` columns, and supply them as strings. Then identify the columns to gather into the new key and value columns. `gather()` will:
1. Create a copy of the original table
2. Remove the identified columns from the copy
3. Add a key column with the supplied name
4. Fill the key column with the column names of the removed columns, repeating rows as necessary so that each combination of row and removed column name appears once
5. Add a value column with the supplied name
6. Fill the value column with the values of the removed columns in a way that preserves every relationship between values and column names in the original data set
Since this is easier to see than explain, you may want to study the diagram and result above.
##### Identify columns to gather
You can identify the columns to gather (i.e. remove) by:
1. name
2. index (numbers)
3. inverse index (negative numbers that specify the columns to *retain*; all other columns will be removed)
4. the [`select()` helpers](https://dplyr.tidyverse.org/reference/select.html#useful-functions) that come with the dplyr package
So for example, the following commands will do the same thing as the solution above:
```
table4a %>% gather(key = "year", value = "cases", `1999`, `2000`)
table4a %>% gather(key = "year", value = "cases", -1)
table4a %>% gather(key = "year", value = "cases", one_of(c("1999", "2000")))
```
By default, the new key column will contain character strings. If you would like to convert the new key column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
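For instance, a minimal sketch of the same gather with conversion (assuming tidyr is loaded and `table4a` is available, as above):
```
table4a %>%
  gather(key = "year", value = "cases", 2:3, convert = TRUE)
# convert = TRUE runs type.convert() on the key column, so year becomes an integer
```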
3\.8 Separate a column into new columns
---------------------------------------
You want to split a single column into multiple columns by separating each cell in the column into a row of cells. Each new cell should contain a separate portion of the value in the original cell.
For example, `table3` combines `cases` and `population` values in a single column named `rate`. To tidy `table3`, you need to separate `rate` into two columns: one for the `cases` variable and one for the `population` variable.
```
table3
```
```
## # A tibble: 6 x 3
## country year rate
## * <chr> <int> <chr>
## 1 Afghanistan 1999 745/19987071
## 2 Afghanistan 2000 2666/20595360
## 3 Brazil 1999 37737/172006362
## 4 Brazil 2000 80488/174504898
## 5 China 1999 212258/1272915272
## 6 China 2000 213766/1280428583
```
#### Solution
```
table3 %>%
separate(col = rate, into = c("cases", "population"),
sep = "/", convert = TRUE)
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
To use `separate()`, pass `col` the name of the column to split, and pass `into` a vector of names for the new columns to split `col` into. You should supply one name for each new column that you expect to appear in the result; a mismatch will imply that something went wrong.
`separate()` will:
1. Create a copy of the original data set
2. Add a new column for each value of `into`. The values will become the names of the new columns.
3. Split each cell of `col` into multiple values, based on the locations of a separator character.
4. Place the new values into the new columns in order, one value per column
5. Remove the `col` column. Add the argument `remove = FALSE` to retain the `col` column in the final result.
Since this is easier to see than explain, you may want to study the diagram and result above.
Each new column created by `separate()` will inherit the data type of the `col` column. If you would like to convert each new column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
##### Control where cells are separated
By default, `separate()` will use non\-alpha\-numeric characters as separators. Pass a regular expression to the `sep` argument to specify a different set of separators. Alternatively, pass an integer vector to the `sep` argument to split cells into sequences that each have a specific number of characters:
* `sep = 1` will split each cell between the first and second character.
* `sep = c(1, 3)` will split each cell between the first and second character and then again between the third and fourth character.
* `sep = -1` will split each cell between the last and second to last character.
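For instance, a minimal sketch of splitting by position (assuming tidyr is loaded and `table3` is available, as above; the column names `first_digit` and `rest` are just for illustration):
```
table3 %>%
  separate(col = rate, into = c("first_digit", "rest"), sep = 1)
# sep = 1 splits each rate cell after its first character
```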
##### Separate into multiple rows
`separate_rows()` behaves like `separate()` except that it places each new value into a new row (instead of into a new column).
To use `separate_rows()`, follow the same syntax as `separate()`.
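For instance, a minimal sketch with `separate_rows()` (assuming tidyr is loaded and `table3` is available, as above):
```
table3 %>%
  separate_rows(rate, convert = TRUE)
# each rate cell is split at "/", producing two rows: cases, then population
```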
3\.9 Unite multiple columns into a single column
------------------------------------------------
You want to combine several columns into a single column by uniting their values row by row.
For example, `table5` splits the `year` variable across two columns: `century` and `year`. To make `table5` tidy, you need to unite `century` and `year` into a single column.
```
table5
```
```
## # A tibble: 6 x 4
## country century year rate
## * <chr> <chr> <chr> <chr>
## 1 Afghanistan 19 99 745/19987071
## 2 Afghanistan 20 00 2666/20595360
## 3 Brazil 19 99 37737/172006362
## 4 Brazil 20 00 80488/174504898
## 5 China 19 99 212258/1272915272
## 6 China 20 00 213766/1280428583
```
#### Solution
```
table5 %>%
unite(col = "year", century, year, sep = "")
```
```
## # A tibble: 6 x 3
## country year rate
## <chr> <chr> <chr>
## 1 Afghanistan 1999 745/19987071
## 2 Afghanistan 2000 2666/20595360
## 3 Brazil 1999 37737/172006362
## 4 Brazil 2000 80488/174504898
## 5 China 1999 212258/1272915272
## 6 China 2000 213766/1280428583
```
#### Discussion
To use `unite()`, give the `col` argument a character string to use as the name of the new column to create. Then list the columns to combine. Finally, give the `sep` argument a separator character to use to paste together the values in the cells of each column. `unite()` will:
1. Create a copy of the original data set
2. Paste together the values of the listed columns in a vectorized (i.e. rowwise) fashion. `unite()` will place the value of `sep` between each value during the paste process.
3. Append the results as a new column whose name is the value of `col`
4. Remove the listed columns. To retain the columns in the result, add the argument `remove = FALSE`.
Since this is easier to see than explain, you may want to study the diagram and result above.
If you do not supply a `sep` value, `unite()` will use `_` as a separator character. To avoid a separator character, use `sep = ""`.
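For instance, a minimal sketch showing the default separator and `remove = FALSE` (assuming tidyr is loaded and `table5` is available, as above; the column name `year2` is just for illustration):
```
table5 %>%
  unite(col = "year2", century, year, remove = FALSE)
# with no sep supplied, values are joined as "19_99"; century and year are kept
```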
What you should know before you begin
-------------------------------------
Data tidying refers to reshaping your data into a tidy data frame or [tibble](how-to-use-this-book.html#tidy-data). Data tidying is an important first step for your analysis because every tidyverse function will expect your data to be stored as **Tidy Data**.
Tidy data is tabular data organized so that:
1. Each column contains a single variable
2. Each row contains a single observation
Tidy data is not an arbitrary requirement of the tidyverse; it is the ideal data format for doing data science with R. Tidy data makes it easy to extract every value of a variable to build a plot or to compute a summary statistic. Tidy data also makes it easy to compute new variables; when your data is tidy, you can rely on R’s rowwise operations to maintain the integrity of your observations. Moreover, R can directly manipulate tidy data with its fast, built\-in vectorised operations, which lets your code run as fast as possible.
The definition of Tidy Data isn’t complete until you define variable and observation, so let’s borrow two definitions from [*R for Data Science*](https://r4ds.had.co.nz/exploratory-data-analysis.html):
1. A **variable** is a quantity, quality, or property that you can measure.
2. An **observation** is a set of measurements made under similar conditions (you usually make all of the measurements in an observation at the same time and on the same object).
As you work with data, you will be surprised to realize that what is a variable (or observation) will depend less on the data itself and more on what you are trying to do with it. With enough mental flexibility, you can consider anything to be a variable. However, some variables will be more useful than others for any specific task. In general, if you can formulate your task as an equation (math or code that contains an equals sign), the most useful variables will be the names in the equation.
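As a minimal sketch of why this matters (assuming dplyr is loaded and `table1` is available, as in the recipes below), computing a new variable on tidy data is a single vectorised expression:
```
table1 %>%
  mutate(rate = cases / population * 10000)
# one result per row, so each observation keeps its own cases, population, and rate
```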
3\.1 Create a tibble manually
-----------------------------
You want to create a tibble from scratch by typing in the contents of the tibble.
#### Solution
```
tribble(~number, ~letter, ~greek,
1, "a", "alpha",
2, "b", "beta",
3, "c", "gamma")
```
```
## # A tibble: 3 x 3
## number letter greek
## <dbl> <chr> <chr>
## 1 1 a alpha
## 2 2 b beta
## 3 3 c gamma
```
#### Discussion
`tribble()` creates a tibble and tricks you into typing out a preview of the result. To use `tribble()`, list each column name preceded by a `~`, then list the values of the tribble in a rowwise fashion. If you take care to align your columns, the transposed syntax of `tribble()` becomes a preview of the table.
You can also create a tibble with `tibble()`, whose syntax mirrors `data.frame()`:
```
tibble(number = c(1, 2, 3),
letter = c("a", "b", "c"),
greek = c("alpha", "beta", "gamma"))
```
3\.2 Convert a data frame to a tibble
-------------------------------------
You want to convert a data frame to a tibble.
#### Solution
```
as_tibble(iris)
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
3\.3 Convert a tibble to a data frame
-------------------------------------
You want to convert a tibble to a data frame.
#### Solution
```
as.data.frame(table1)
```
```
## country year cases population
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Be careful to use `as.data.frame()` and not `as_data_frame()`, which is an alias for `as_tibble()`.
3\.4 Preview the contents of a tibble
-------------------------------------
You want to get an idea of what variables and values are stored in a tibble.
#### Solution
```
storms
```
```
## # A tibble: 10,010 x 13
## name year month day hour lat long status category wind pressure
## <chr> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <chr> <ord> <int> <int>
## 1 Amy 1975 6 27 0 27.5 -79 tropi… -1 25 1013
## 2 Amy 1975 6 27 6 28.5 -79 tropi… -1 25 1013
## 3 Amy 1975 6 27 12 29.5 -79 tropi… -1 25 1013
## 4 Amy 1975 6 27 18 30.5 -79 tropi… -1 25 1013
## 5 Amy 1975 6 28 0 31.5 -78.8 tropi… -1 25 1012
## 6 Amy 1975 6 28 6 32.4 -78.7 tropi… -1 25 1012
## 7 Amy 1975 6 28 12 33.3 -78 tropi… -1 25 1011
## 8 Amy 1975 6 28 18 34 -77 tropi… -1 30 1006
## 9 Amy 1975 6 29 0 34.4 -75.8 tropi… 0 35 1004
## 10 Amy 1975 6 29 6 34 -74.8 tropi… 0 40 1002
## # … with 10,000 more rows, and 2 more variables: ts_diameter <dbl>,
## # hu_diameter <dbl>
```
#### Discussion
When you call a tibble directly, R will display enough information to give you a quick sense of the contents of the tibble. This includes:
1. the dimensions of the tibble
2. the column names and types
3. as many cells of the tibble as will fit comfortably in your console window
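If the default preview is too short, a minimal sketch for printing more of the tibble (assuming the tibble print method, whose `n` and `width` arguments control the preview):
```
print(storms, n = 20, width = Inf)
# show 20 rows and all columns instead of the default preview
```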
3\.5 Inspect every cell of a tibble
-----------------------------------
You want to see every value that is stored in a tibble.
#### Solution
```
View(storms)
```
#### Discussion
`View()` (with a capital V) opens the tibble in R’s data viewer, which will let you scroll to every cell in the tibble.
| R Programming |
What you should know before you begin
-------------------------------------
Data tidying refers to reshaping your data into a tidy data frame or [tibble](how-to-use-this-book.html#tidy-data). Data tidying is an important first step for your analysis because every tidyverse function will expect your data to be stored as **Tidy Data**.
Tidy data is tabular data organized so that:
1. Each column contains a single variable
2. Each row contains a single observation
Tidy data is not an arbitrary requirement of the tidyverse; it is the ideal data format for doing data science with R. Tidy data makes it easy to extract every value of a variable to build a plot or to compute a summary statistic. Tidy data also makes it easy to compute new variables; when your data is tidy, you can rely on R’s rowwise operations to maintain the integrity of your observations. Moreover, R can directly manipulate tidy data with R’s fast, built\-in vectorised observations, which lets your code run as fast as possible.
The definition of Tidy Data isn’t complete until you define variable and observation, so let’s borrow two definitions from [*R for Data Science*](https://r4ds.had.co.nz/exploratory-data-analysis.html):
1. A **variable** is a quantity, quality, or property that you can measure.
2. An **observation** is a set of measurements made under similar conditions (you usually make all of the measurements in an observation at the same time and on the same object).
As you work with data, you will be surprised to realize that what is a variable (or observation) will depend less on the data itself and more on what you are trying to do with it. With enough mental flexibility, you can consider anything to be a variable. However, some variables will be more useful than others for any specific task. In general, if you can formulate your task as an equation (math or code that contains an equals sign), the most useful variables will be the names in the equation.
3\.1 Create a tibble manually
-----------------------------
You want to create a tibble from scratch by typing in the contents of the tibble.
#### Solution
```
tribble(~number, ~letter, ~greek,
1, "a", "alpha",
2, "b", "beta",
3, "c", "gamma")
```
```
## # A tibble: 3 x 3
## number letter greek
## <dbl> <chr> <chr>
## 1 1 a alpha
## 2 2 b beta
## 3 3 c gamma
```
#### Discussion
`tribble()` creates a tibble and tricks you into typing out a preview of the result. To use `tribble()`, list each column name preceded by a `~`, then list the values of the tribble in a rowwise fashion. If you take care to align your columns, the transposed syntax of `tribble()` becomes a preview of the table.
You can also create a tibble with `tibble()`, whose syntax mirrors `data.frame()`:
```
tibble(number = c(1, 2, 3),
letter = c("a", "b", "c"),
greek = c("alpha", "beta", "gamma"))
```
3\.2 Convert a data frame to a tibble
-------------------------------------
You want to convert a data frame to a tibble.
#### Solution
```
as_tibble(iris)
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
3\.3 Convert a tibble to a data frame
-------------------------------------
You want to convert a tibble to a data frame.
#### Solution
```
as.data.frame(table1)
```
```
## country year cases population
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Be careful to use `as.data.frame()` and not `as_data_frame()`, which is an alias for `as_tibble()`.
3\.4 Preview the contents of a tibble
-------------------------------------
You want to get an idea of what variables and values are stored in a tibble.
#### Solution
```
storms
```
```
## # A tibble: 10,010 x 13
## name year month day hour lat long status category wind pressure
## <chr> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <chr> <ord> <int> <int>
## 1 Amy 1975 6 27 0 27.5 -79 tropi… -1 25 1013
## 2 Amy 1975 6 27 6 28.5 -79 tropi… -1 25 1013
## 3 Amy 1975 6 27 12 29.5 -79 tropi… -1 25 1013
## 4 Amy 1975 6 27 18 30.5 -79 tropi… -1 25 1013
## 5 Amy 1975 6 28 0 31.5 -78.8 tropi… -1 25 1012
## 6 Amy 1975 6 28 6 32.4 -78.7 tropi… -1 25 1012
## 7 Amy 1975 6 28 12 33.3 -78 tropi… -1 25 1011
## 8 Amy 1975 6 28 18 34 -77 tropi… -1 30 1006
## 9 Amy 1975 6 29 0 34.4 -75.8 tropi… 0 35 1004
## 10 Amy 1975 6 29 6 34 -74.8 tropi… 0 40 1002
## # … with 10,000 more rows, and 2 more variables: ts_diameter <dbl>,
## # hu_diameter <dbl>
```
#### Discussion
When you call a tibble directly, R will display enough information to give you a quick sense of the contents of the tibble. This includes:
1. the dimensions of the tibble
2. the column names and types
3. as many cells of the tibble as will fit comfortably in your console window
3\.5 Inspect every cell of a tibble
-----------------------------------
You want to see every value that is stored in a tibble.
#### Solution
```
View(storms)
```
#### Discussion
`View()` (with a capital V) opens the tibble in R’s data viewer, which will let you scroll to every cell in the tibble.
3\.6 Spread a pair of columns into a field of cells
---------------------------------------------------
You want to **pivot**, **convert long data to wide**, or move variable names out of the cells and into the column names. These are different ways of describing the same action.
For example, `table2` contains `type`, which is a column that repeats the variable names `case` and `population`. To make `table2` tidy, you must move `case` and `population` values into their own columns.
```
table2
```
```
## # A tibble: 12 x 4
## country year type count
## <chr> <int> <chr> <int>
## 1 Afghanistan 1999 cases 745
## 2 Afghanistan 1999 population 19987071
## 3 Afghanistan 2000 cases 2666
## 4 Afghanistan 2000 population 20595360
## 5 Brazil 1999 cases 37737
## 6 Brazil 1999 population 172006362
## 7 Brazil 2000 cases 80488
## 8 Brazil 2000 population 174504898
## 9 China 1999 cases 212258
## 10 China 1999 population 1272915272
## 11 China 2000 cases 213766
## 12 China 2000 population 1280428583
```
#### Solution
```
table2 %>%
spread(key = type, value = count)
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
To use `spread()`, assign the column that contains variable names to `key`. Assign the column that contains the values that are associated with those names to `value`. `spread()` will:
1. Make a copy of the original table
2. Remove the `key` and `value` columns from the copy
3. Remove every duplicate row in the data set that remains
4. Insert a new column for each unique variable name in the `key` column
5. Fill the new columns with the values of the `value` column in a way that preserves every relationship between values in the original data set
Since this is easier to see than explain, you may want to study the diagram and result above.
Each new column created by `spread()` will inherit the data type of the `value` column. If you would to convert each new column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
3\.7 Gather a field of cells into a pair of columns
---------------------------------------------------
You want to **convert wide data to long**, reshape a **two\-by\-two table**, or move variable values out of the column names and into the cells. These are different ways of describing the same action.
For example, `table4a` is a two\-by\-two table with the column names `1999` and `2000`. These names are values of a `year` variable. The field of cells in `table4a` contains counts of TB cases, which is another variable. To make `table4a` tidy, you need to move year and case values into their own columns.
```
table4a
```
```
## # A tibble: 3 x 3
## country `1999` `2000`
## * <chr> <int> <int>
## 1 Afghanistan 745 2666
## 2 Brazil 37737 80488
## 3 China 212258 213766
```
#### Solution
```
table4a %>%
gather(key = "year", value = "cases", 2:3)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <chr> <int>
## 1 Afghanistan 1999 745
## 2 Brazil 1999 37737
## 3 China 1999 212258
## 4 Afghanistan 2000 2666
## 5 Brazil 2000 80488
## 6 China 2000 213766
```
#### Discussion
`gather()` is the inverse of `spread()`: `gather()` collapses a field of cells that spans several columns into two new columns:
1. A column of former “keys”, which contains the column names of the former field
2. A column of former “values”, which contains the cell values of the former field
To use `gather()`, pick names for the new `key` and `value` columns, and supply them as strings. Then identify the columns to gather into the new key and value columns. `gather()` will:
1. Create a copy of the original table
2. Remove the identified columns from the copy
3. Add a key column with the supplied name
4. Fill the key column with the column names of the removed columns, repeating rows as necessary so that each combination of row and removed column name appears once
5. Add a value column with the supplied name
6. Fill the value column with the values of the removed columns in a way that preserves every relationship between values and column names in the original data set
Since this is easier to see than explain, you may want to study the diagram and result above.
##### Identify columns to gather
You can identify the columns to gather (i.e. remove) by:
1. name
2. index (numbers)
3. inverse index (negative numbers that specifiy the columns to *retain*, all other columns will be removed.)
4. the [`select()` helpers](https://dplyr.tidyverse.org/reference/select.html#useful-functions) that come in the dplyr package
So for example, the following commands will do the same thing as the solution above:
```
table4a %>% gather(key = "year", value = "cases", `"1999", "2000")
table4a %>% gather(key = "year", value = "cases", -1)
table4a %>% gather(key = "year", value = "cases", one_of(c("1999", "2000")))
```
By default, the new key column will contain character strings. If you would like to convert the new key column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
3\.8 Separate a column into new columns
---------------------------------------
You want to split a single column into multiple columns by separating each cell in the column into a row of cells. Each new cell should contain a separate portion of the value in the original cell.
For example, `table3` combines `cases` and `population` values in a single column named `rate`. To tidy `table3`, you need to separate `rate` into two columns: one for the `cases` variable and one for the `population` variable.
```
table3
```
```
## # A tibble: 6 x 3
## country year rate
## * <chr> <int> <chr>
## 1 Afghanistan 1999 745/19987071
## 2 Afghanistan 2000 2666/20595360
## 3 Brazil 1999 37737/172006362
## 4 Brazil 2000 80488/174504898
## 5 China 1999 212258/1272915272
## 6 China 2000 213766/1280428583
```
#### Solution
```
table3 %>%
separate(col = rate, into = c("cases", "population"),
sep = "/", convert = TRUE)
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
To use `separate()`, pass `col` the name of the column to split, and pass `into` a vector of names for the new columns to split `col` into. You should supply one name for each new column that you expect to appear in the result; a mismatch will imply that something went wrong.
`separate()` will:
1. Create a copy of the original data set
2. Add a new column for each value of `into`. The values will become the names of the new columns.
3. Split each cell of `col` into multiple values, based on the locations of a separator character.
4. Place the new values into the new columns in order, one value per column
5. Remove the `col` column. Add the argument `remove = FALSE` to retain the `col` column in the final result.
Since this is easier to see than explain, you may want to study the diagram and result above.
Each new column created by `separate()` will inherit the data type of the `col` column. If you would like to convert each new column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
##### Control where cells are separated
By default, `separate()` will use non\-alpha\-numeric characters as a separators. Pass a regular expression to the `sep` argument to specify a different set of separators. Alternatively, pass an integer vector to the `sep` argument to split cells into sequences that each have a specific number of characters:
* `sep = 1` will split each cell between the first and second character.
* `sep = c(1, 3)` will split each cell between the first and second character and then again between the third and fourth character.
* `sep = -1` will split each cell between the last and second to last character.
##### Separate into multiple rows
`separate_rows()` behaves like `separate()` except that it places each new value into a new row (instead of into a new column).
To use `separate_rows()`, follow the same syntax as `separate()`.
3\.9 Unite multiple columns into a single column
------------------------------------------------
You want to combine several columns into a single column by uniting their values across rows.
For example, `table5` splits the `year` variable across two columns: `century` and `year`. To make `table5` tidy, you need to unite `century` and `year` into a single column.
```
table5
```
```
## # A tibble: 6 x 4
## country century year rate
## * <chr> <chr> <chr> <chr>
## 1 Afghanistan 19 99 745/19987071
## 2 Afghanistan 20 00 2666/20595360
## 3 Brazil 19 99 37737/172006362
## 4 Brazil 20 00 80488/174504898
## 5 China 19 99 212258/1272915272
## 6 China 20 00 213766/1280428583
```
#### Solution
```
table5 %>%
unite(col = "year", century, year, sep = "")
```
```
## # A tibble: 6 x 3
## country year rate
## <chr> <chr> <chr>
## 1 Afghanistan 1999 745/19987071
## 2 Afghanistan 2000 2666/20595360
## 3 Brazil 1999 37737/172006362
## 4 Brazil 2000 80488/174504898
## 5 China 1999 212258/1272915272
## 6 China 2000 213766/1280428583
```
#### Discussion
To use `unite()`, give the `col` argument a character string to use as the name of the new column to create. Then list the columns to combine. Finally, give the `sep` argument a separator character to use to paste together the values in the cells of each column. `unite()` will:
1. Create a copy of the original data set
2. Paste together the values of the listed columns in a vectorized (i.e. rowwise) fashion. `unite()` will place the value of `sep` between each value during the paste process.
3. Append the results as a new column whose name is the value of `col`
4. Remove the listed columns. To retain the columns in the result, add the argument `remove = FALSE`.
Since this is easier to see than explain, you may want to study the diagram and result above.
If you do not suppy a `sep` value, `unite()` will use `_` as a separator character. To avoid a separator character, use `sep = ""`.
What you should know before you begin
-------------------------------------
Data tidying refers to reshaping your data into a tidy data frame or [tibble](how-to-use-this-book.html#tidy-data). Data tidying is an important first step for your analysis because every tidyverse function will expect your data to be stored as **Tidy Data**.
Tidy data is tabular data organized so that:
1. Each column contains a single variable
2. Each row contains a single observation
Tidy data is not an arbitrary requirement of the tidyverse; it is the ideal data format for doing data science with R. Tidy data makes it easy to extract every value of a variable to build a plot or to compute a summary statistic. Tidy data also makes it easy to compute new variables; when your data is tidy, you can rely on R’s rowwise operations to maintain the integrity of your observations. Moreover, R can directly manipulate tidy data with R’s fast, built\-in vectorised observations, which lets your code run as fast as possible.
The definition of Tidy Data isn’t complete until you define variable and observation, so let’s borrow two definitions from [*R for Data Science*](https://r4ds.had.co.nz/exploratory-data-analysis.html):
1. A **variable** is a quantity, quality, or property that you can measure.
2. An **observation** is a set of measurements made under similar conditions (you usually make all of the measurements in an observation at the same time and on the same object).
As you work with data, you will be surprised to realize that what is a variable (or observation) will depend less on the data itself and more on what you are trying to do with it. With enough mental flexibility, you can consider anything to be a variable. However, some variables will be more useful than others for any specific task. In general, if you can formulate your task as an equation (math or code that contains an equals sign), the most useful variables will be the names in the equation.
3\.1 Create a tibble manually
-----------------------------
You want to create a tibble from scratch by typing in the contents of the tibble.
#### Solution
```
tribble(~number, ~letter, ~greek,
1, "a", "alpha",
2, "b", "beta",
3, "c", "gamma")
```
```
## # A tibble: 3 x 3
## number letter greek
## <dbl> <chr> <chr>
## 1 1 a alpha
## 2 2 b beta
## 3 3 c gamma
```
#### Discussion
`tribble()` creates a tibble and tricks you into typing out a preview of the result. To use `tribble()`, list each column name preceded by a `~`, then list the values of the tribble in a rowwise fashion. If you take care to align your columns, the transposed syntax of `tribble()` becomes a preview of the table.
You can also create a tibble with `tibble()`, whose syntax mirrors `data.frame()`:
```
tibble(number = c(1, 2, 3),
letter = c("a", "b", "c"),
greek = c("alpha", "beta", "gamma"))
```
#### Solution
```
tribble(~number, ~letter, ~greek,
1, "a", "alpha",
2, "b", "beta",
3, "c", "gamma")
```
```
## # A tibble: 3 x 3
## number letter greek
## <dbl> <chr> <chr>
## 1 1 a alpha
## 2 2 b beta
## 3 3 c gamma
```
#### Discussion
`tribble()` creates a tibble and tricks you into typing out a preview of the result. To use `tribble()`, list each column name preceded by a `~`, then list the values of the tribble in a rowwise fashion. If you take care to align your columns, the transposed syntax of `tribble()` becomes a preview of the table.
You can also create a tibble with `tibble()`, whose syntax mirrors `data.frame()`:
```
tibble(number = c(1, 2, 3),
letter = c("a", "b", "c"),
greek = c("alpha", "beta", "gamma"))
```
3\.2 Convert a data frame to a tibble
-------------------------------------
You want to convert a data frame to a tibble.
#### Solution
```
as_tibble(iris)
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
#### Solution
```
as_tibble(iris)
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
3\.3 Convert a tibble to a data frame
-------------------------------------
You want to convert a tibble to a data frame.
#### Solution
```
as.data.frame(table1)
```
```
## country year cases population
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Be careful to use `as.data.frame()` and not `as_data_frame()`, which is an alias for `as_tibble()`.
#### Solution
```
as.data.frame(table1)
```
```
## country year cases population
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Be careful to use `as.data.frame()` and not `as_data_frame()`, which is an alias for `as_tibble()`.
3\.4 Preview the contents of a tibble
-------------------------------------
You want to get an idea of what variables and values are stored in a tibble.
#### Solution
```
storms
```
```
## # A tibble: 10,010 x 13
## name year month day hour lat long status category wind pressure
## <chr> <dbl> <dbl> <int> <dbl> <dbl> <dbl> <chr> <ord> <int> <int>
## 1 Amy 1975 6 27 0 27.5 -79 tropi… -1 25 1013
## 2 Amy 1975 6 27 6 28.5 -79 tropi… -1 25 1013
## 3 Amy 1975 6 27 12 29.5 -79 tropi… -1 25 1013
## 4 Amy 1975 6 27 18 30.5 -79 tropi… -1 25 1013
## 5 Amy 1975 6 28 0 31.5 -78.8 tropi… -1 25 1012
## 6 Amy 1975 6 28 6 32.4 -78.7 tropi… -1 25 1012
## 7 Amy 1975 6 28 12 33.3 -78 tropi… -1 25 1011
## 8 Amy 1975 6 28 18 34 -77 tropi… -1 30 1006
## 9 Amy 1975 6 29 0 34.4 -75.8 tropi… 0 35 1004
## 10 Amy 1975 6 29 6 34 -74.8 tropi… 0 40 1002
## # … with 10,000 more rows, and 2 more variables: ts_diameter <dbl>,
## # hu_diameter <dbl>
```
#### Discussion
When you call a tibble directly, R will display enough information to give you a quick sense of the contents of the tibble. This includes:
1. the dimensions of the tibble
2. the column names and types
3. as many cells of the tibble as will fit comfortably in your console window
3\.5 Inspect every cell of a tibble
-----------------------------------
You want to see every value that is stored in a tibble.
#### Solution
```
View(storms)
```
#### Discussion
`View()` (with a capital V) opens the tibble in R’s data viewer, which will let you scroll to every cell in the tibble.
3\.6 Spread a pair of columns into a field of cells
---------------------------------------------------
You want to **pivot**, **convert long data to wide**, or move variable names out of the cells and into the column names. These are different ways of describing the same action.
For example, `table2` contains `type`, which is a column that repeats the variable names `case` and `population`. To make `table2` tidy, you must move `case` and `population` values into their own columns.
```
table2
```
```
## # A tibble: 12 x 4
## country year type count
## <chr> <int> <chr> <int>
## 1 Afghanistan 1999 cases 745
## 2 Afghanistan 1999 population 19987071
## 3 Afghanistan 2000 cases 2666
## 4 Afghanistan 2000 population 20595360
## 5 Brazil 1999 cases 37737
## 6 Brazil 1999 population 172006362
## 7 Brazil 2000 cases 80488
## 8 Brazil 2000 population 174504898
## 9 China 1999 cases 212258
## 10 China 1999 population 1272915272
## 11 China 2000 cases 213766
## 12 China 2000 population 1280428583
```
#### Solution
```
table2 %>%
spread(key = type, value = count)
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
To use `spread()`, assign the column that contains variable names to `key`. Assign the column that contains the values that are associated with those names to `value`. `spread()` will:
1. Make a copy of the original table
2. Remove the `key` and `value` columns from the copy
3. Remove every duplicate row in the data set that remains
4. Insert a new column for each unique variable name in the `key` column
5. Fill the new columns with the values of the `value` column in a way that preserves every relationship between values in the original data set
Since this is easier to see than explain, you may want to study the diagram and result above.
Each new column created by `spread()` will inherit the data type of the `value` column. If you would like to convert each new column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
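For example, a minimal sketch reusing the solution above (with `table2` the `count` values are already integers, so the types do not change, but the pattern matters when the value column is stored as character strings):
```
# convert = TRUE re-parses each new column into the most sensible
# type for its final contents
table2 %>%
  spread(key = type, value = count, convert = TRUE)
```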
3\.7 Gather a field of cells into a pair of columns
---------------------------------------------------
You want to **convert wide data to long**, reshape a **two\-by\-two table**, or move variable values out of the column names and into the cells. These are different ways of describing the same action.
For example, `table4a` is a two\-by\-two table with the column names `1999` and `2000`. These names are values of a `year` variable. The field of cells in `table4a` contains counts of TB cases, which is another variable. To make `table4a` tidy, you need to move year and case values into their own columns.
```
table4a
```
```
## # A tibble: 3 x 3
## country `1999` `2000`
## * <chr> <int> <int>
## 1 Afghanistan 745 2666
## 2 Brazil 37737 80488
## 3 China 212258 213766
```
#### Solution
```
table4a %>%
gather(key = "year", value = "cases", 2:3)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <chr> <int>
## 1 Afghanistan 1999 745
## 2 Brazil 1999 37737
## 3 China 1999 212258
## 4 Afghanistan 2000 2666
## 5 Brazil 2000 80488
## 6 China 2000 213766
```
#### Discussion
`gather()` is the inverse of `spread()`: `gather()` collapses a field of cells that spans several columns into two new columns:
1. A column of former “keys”, which contains the column names of the former field
2. A column of former “values”, which contains the cell values of the former field
To use `gather()`, pick names for the new `key` and `value` columns, and supply them as strings. Then identify the columns to gather into the new key and value columns. `gather()` will:
1. Create a copy of the original table
2. Remove the identified columns from the copy
3. Add a key column with the supplied name
4. Fill the key column with the column names of the removed columns, repeating rows as necessary so that each combination of row and removed column name appears once
5. Add a value column with the supplied name
6. Fill the value column with the values of the removed columns in a way that preserves every relationship between values and column names in the original data set
Since this is easier to see than explain, you may want to study the diagram and result above.
##### Identify columns to gather
You can identify the columns to gather (i.e. remove) by:
1. name
2. index (numbers)
3. inverse index (negative numbers that specify the columns to *retain*; all other columns will be removed)
4. the [`select()` helpers](https://dplyr.tidyverse.org/reference/select.html#useful-functions) that come in the dplyr package
So for example, the following commands will do the same thing as the solution above:
```
table4a %>% gather(key = "year", value = "cases", `1999`, `2000`)
table4a %>% gather(key = "year", value = "cases", -1)
table4a %>% gather(key = "year", value = "cases", one_of(c("1999", "2000")))
```
By default, the new key column will contain character strings. If you would like to convert the new key column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
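As a small sketch of that variation on the solution above, `convert = TRUE` re-parses the new key column so `"1999"` and `"2000"` become integers rather than character strings:
```
table4a %>%
  gather(key = "year", value = "cases", 2:3, convert = TRUE)
```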
3\.8 Separate a column into new columns
---------------------------------------
You want to split a single column into multiple columns by separating each cell in the column into a row of cells. Each new cell should contain a separate portion of the value in the original cell.
For example, `table3` combines `cases` and `population` values in a single column named `rate`. To tidy `table3`, you need to separate `rate` into two columns: one for the `cases` variable and one for the `population` variable.
```
table3
```
```
## # A tibble: 6 x 3
## country year rate
## * <chr> <int> <chr>
## 1 Afghanistan 1999 745/19987071
## 2 Afghanistan 2000 2666/20595360
## 3 Brazil 1999 37737/172006362
## 4 Brazil 2000 80488/174504898
## 5 China 1999 212258/1272915272
## 6 China 2000 213766/1280428583
```
#### Solution
```
table3 %>%
separate(col = rate, into = c("cases", "population"),
sep = "/", convert = TRUE)
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
To use `separate()`, pass `col` the name of the column to split, and pass `into` a vector of names for the new columns to split `col` into. You should supply one name for each new column that you expect to appear in the result; a mismatch will imply that something went wrong.
`separate()` will:
1. Create a copy of the original data set
2. Add a new column for each value of `into`. The values will become the names of the new columns.
3. Split each cell of `col` into multiple values, based on the locations of a separator character.
4. Place the new values into the new columns in order, one value per column
5. Remove the `col` column. Add the argument `remove = FALSE` to retain the `col` column in the final result.
Since this is easier to see than explain, you may want to study the diagram and result above.
Each new column created by `separate()` will inherit the data type of the `col` column. If you would like to convert each new column to the most sensible data type given its final contents, add the argument `convert = TRUE`.
##### Control where cells are separated
By default, `separate()` will use non\-alpha\-numeric characters as separators. Pass a regular expression to the `sep` argument to specify a different set of separators. Alternatively, pass an integer vector to the `sep` argument to split cells into sequences that each have a specific number of characters (see the sketch after this list):
* `sep = 1` will split each cell between the first and second character.
* `sep = c(1, 3)` will split each cell between the first and second character and then again between the third and fourth character.
* `sep = -1` will split each cell between the last and second to last character.
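Here is a minimal sketch of a positional split; `dates` is a made\-up tibble used only for illustration:
```
# A hypothetical column of "yyyymmdd" strings
dates <- tibble(date = c("19990701", "20000315"))

# sep = c(4, 6) splits each cell after the 4th and the 6th character
dates %>%
  separate(col = date, into = c("year", "month", "day"),
           sep = c(4, 6), convert = TRUE)
```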
##### Separate into multiple rows
`separate_rows()` behaves like `separate()` except that it places each new value into a new row (instead of into a new column).
To use `separate_rows()`, follow the same syntax as `separate()`.
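For example, a sketch with `table3`: each `rate` value is split on `/`, and the two pieces are placed in new rows instead of new columns (the `country` and `year` values are repeated as needed).
```
table3 %>%
  separate_rows(rate, sep = "/", convert = TRUE)
```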
3\.9 Unite multiple columns into a single column
------------------------------------------------
You want to combine several columns into a single column by uniting their values across rows.
For example, `table5` splits the `year` variable across two columns: `century` and `year`. To make `table5` tidy, you need to unite `century` and `year` into a single column.
```
table5
```
```
## # A tibble: 6 x 4
## country century year rate
## * <chr> <chr> <chr> <chr>
## 1 Afghanistan 19 99 745/19987071
## 2 Afghanistan 20 00 2666/20595360
## 3 Brazil 19 99 37737/172006362
## 4 Brazil 20 00 80488/174504898
## 5 China 19 99 212258/1272915272
## 6 China 20 00 213766/1280428583
```
#### Solution
```
table5 %>%
unite(col = "year", century, year, sep = "")
```
```
## # A tibble: 6 x 3
## country year rate
## <chr> <chr> <chr>
## 1 Afghanistan 1999 745/19987071
## 2 Afghanistan 2000 2666/20595360
## 3 Brazil 1999 37737/172006362
## 4 Brazil 2000 80488/174504898
## 5 China 1999 212258/1272915272
## 6 China 2000 213766/1280428583
```
#### Discussion
To use `unite()`, give the `col` argument a character string to use as the name of the new column to create. Then list the columns to combine. Finally, give the `sep` argument a separator character to use to paste together the values in the cells of each column. `unite()` will:
1. Create a copy of the original data set
2. Paste together the values of the listed columns in a vectorized (i.e. rowwise) fashion. `unite()` will place the value of `sep` between each value during the paste process.
3. Append the results as a new column whose name is the value of `col`
4. Remove the listed columns. To retain the columns in the result, add the argument `remove = FALSE`.
Since this is easier to see than explain, you may want to study the diagram and result above.
If you do not supply a `sep` value, `unite()` will use `_` as a separator character. To avoid a separator character, use `sep = ""`.
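For instance, dropping the `sep` argument from the solution above pastes the pieces together with the default underscore, giving values like `19_99` rather than `1999`:
```
# Default separator is "_"
table5 %>%
  unite(col = "year", century, year)
```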
4 Transform Tables
==================
---
This chapter includes the following recipes:
1. [Arrange rows by value in ascending order](transform-tables.html#arrange-rows-by-value-in-ascending-order)
2. [Arrange rows by value in descending order](transform-tables.html#arrange-rows-by-value-in-descending-order)
3. [Filter rows with a logical test](transform-tables.html#filter-rows-with-a-logical-test)
4. [Select columns by name](transform-tables.html#select-columns-by-name)
5. [Drop columns by name](transform-tables.html#drop-columns-by-name)
6. [Select a range of columns](transform-tables.html#select-a-range-of-columns)
7. [Select columns by integer position](transform-tables.html#select-columns-by-integer-position)
8. [Select columns by start of name](transform-tables.html#select-columns-by-start-of-name)
9. [Select columns by end of name](transform-tables.html#select-columns-by-end-of-name)
10. [Select columns by string in name](transform-tables.html#select-columns-by-string-in-name)
11. [Reorder columns](transform-tables.html#reorder-columns)
12. [Reorder columns without naming each](transform-tables.html#reorder-columns-without-naming-each)
13. [Rename columns](transform-tables.html#rename-columns)
14. [Return the contents of a column as a vector](transform-tables.html#return-the-contents-of-a-column-as-a-vector)
15. [Mutate data (Add new variables)](transform-tables.html#mutate-data-add-new-variables)
16. [Summarise data](transform-tables.html#summarise-data)
17. [Group data](transform-tables.html#group-data)
18. [Summarise data by groups](transform-tables.html#summarise-data-by-groups)
19. [Nest a data frame](transform-tables.html#nest-a-data-frame)
20. [Extract a table from a nested data frame](transform-tables.html#extract-a-table-from-a-nested-data-frame)
21. [Unnest a data frame](transform-tables.html#unnest-a-data-frame)
22. [Combine transform recipes](transform-tables.html#combine-transform-recipes)
23. [Join data sets by common column(s)](transform-tables.html#joins)
24. [Find rows that have a match in another data set](transform-tables.html#find-rows-that-have-a-match-in-another-data-set)
25. [Find rows that do not have a match in another data set](transform-tables.html#find-rows-that-do-not-have-a-match-in-another-data-set)
---
What you should know before you begin
-------------------------------------
The dplyr package provides the most important tidyverse functions for manipulating tables. These functions share some defaults that make it easy to transform tables:
1. dplyr functions always return a transformed *copy* of your table. They won’t change your original table unless you tell them to (by saving over the name of the original table). That’s good news, because you should always retain a clean copy of your original data in case something goes wrong.
2. You can refer to columns by name inside of a dplyr function. There’s no need for `$` syntax or `""`. Every dplyr function requires you to supply a data frame, and it will recognize the columns in that data frame, e.g.
`summarise(mpg, h = mean(hwy), c = mean(cty))`
This only becomes a problem if you’d like to use an object that has the same name as one of the columns in the data frame. In this case, place `!!` before the object’s name to unquote it; dplyr will then skip the columns when looking up the object, e.g.
`hwy <- 1:10`
`summarise(mpg, h = mean(!!hwy), c = mean(cty))`
Transforming a table sometimes requires more than one recipe. Why? Because tables are made of multiple data structures that work together:
1. The table itself is a data frame or tibble.
2. The columns of the table are vectors.
3. Some columns may be list\-columns, which are lists that contain vectors.
Each tidyverse function tends to focus on a single type of data structure; it is part of the tidyverse philosophy that each function should do one thing and do it well.
So to transform a table, begin with a recipe that transforms the structure of the table. You’ll find those recipes in this chapter. Then complete it with a recipe that transforms the actual data values in your table. The [Combine transform recipes](transform-tables.html#combine-transform-recipes) recipe will show you how.
4\.1 Arrange rows by value in ascending order
---------------------------------------------
You want to sort the rows of a data frame in **ascending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(displ)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 5 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 6 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 7 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 10 honda civic 1.8 2008 4 manual… f 26 34 r subco…
## # … with 224 more rows
```
#### Discussion
`arrange()` sorts the rows according to the values of the specified column, with the lowest values appearing near the top of the data frame. If you provide additional column names, `arrange()` will use the additional columns in order as tiebreakers to sort within rows that share the same value of the first column.
```
mpg %>%
arrange(displ, cty)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 5 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 6 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 7 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 volkswagen passat 1.8 1999 4 auto(l… f 18 29 p midsi…
## 10 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## # … with 224 more rows
```
4\.2 Arrange rows by value in descending order
----------------------------------------------
You want to sort the rows of a data frame in **descending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(desc(displ))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet c1500 su… 5.7 1999 8 auto(… r 13 17 r suv
## 10 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## # … with 224 more rows
```
#### Discussion
Place `desc()` around a column name to cause `arrange()` to sort by descending values of that column. You can use `desc()` for tie\-breaker columns as well (compare line nine below to the table above).
```
mpg %>%
arrange(desc(displ), desc(cty))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## 10 chevrolet corvette 5.7 1999 8 auto(… r 15 23 p 2sea…
## # … with 224 more rows
```
4\.3 Filter rows with a logical test
------------------------------------
You want to filter your table to just the rows that meet a specific condition.
#### Solution
```
mpg %>%
filter(model == "jetta")
```
```
## # A tibble: 9 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 volkswagen jetta 1.9 1999 4 manual(m5) f 33 44 d compa…
## 2 volkswagen jetta 2 1999 4 manual(m5) f 21 29 r compa…
## 3 volkswagen jetta 2 1999 4 auto(l4) f 19 26 r compa…
## 4 volkswagen jetta 2 2008 4 auto(s6) f 22 29 p compa…
## 5 volkswagen jetta 2 2008 4 manual(m6) f 21 29 p compa…
## 6 volkswagen jetta 2.5 2008 5 auto(s6) f 21 29 r compa…
## 7 volkswagen jetta 2.5 2008 5 manual(m5) f 21 29 r compa…
## 8 volkswagen jetta 2.8 1999 6 auto(l4) f 16 23 r compa…
## 9 volkswagen jetta 2.8 1999 6 manual(m5) f 17 24 r compa…
```
#### Discussion
`filter()` takes a logical test and returns the rows for which the logical test returns `TRUE`. `filter()` will match column names that appear within the logical test to columns in your data frame.
If you provide multiple logical tests, `filter()` will combine them with an AND operator (`&`):
```
mpg %>%
filter(model == "jetta", year == 1999)
```
Use R’s boolean operators, like `|`and `!`, to create other combinations of logical tests to pass to filter. See the help pages for `?Comparison` and `?Logic` to learn more about writing logical tests in R.
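As a quick sketch, `|` (or) and `!` (not) work the same way inside `filter()`; the model and manufacturer values below come from `mpg`:
```
# OR: jettas and new beetles
mpg %>%
  filter(model == "jetta" | model == "new beetle")

# NOT: every manufacturer except volkswagen
mpg %>%
  filter(!(manufacturer == "volkswagen"))
```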
4\.4 Select columns by name
---------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to return.
#### Solution
```
table1 %>%
select(country, year, cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
`select()` returns a new data frame that includes each column passed to `select()`. Repeat a name to include the column twice.
4\.5 Drop columns by name
-------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to drop.
#### Solution
```
table1 %>%
select(-c(population, year))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
#### Discussion
If you use a `-` before a column name, `select()` will return every column in the data frame except that column. To drop more than one column at a time, group the columns into a vector preceded by `-`.
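A minimal sketch of the single\-column case:
```
# Drop just one column by negating its name
table1 %>%
  select(-population)
```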
4\.6 Select a range of columns
------------------------------
You want to return two columns from a data frame as well as every column that appears between them.
#### Solution
```
table1 %>%
select(country:cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
If you combine two column names with a `:`, `select()` will return both columns and every column that appears between them in the data frame.
4\.7 Select columns by integer position
---------------------------------------
You want to return a “subset” of columns from your data frame by listing the position of each column to return.
#### Solution
```
table1 %>%
select(1, 2, 4)
```
```
## # A tibble: 6 x 3
## country year population
## <chr> <int> <int>
## 1 Afghanistan 1999 19987071
## 2 Afghanistan 2000 20595360
## 3 Brazil 1999 172006362
## 4 Brazil 2000 174504898
## 5 China 1999 1272915272
## 6 China 2000 1280428583
```
#### Discussion
`select()` interprets the whole number *n* as the *n*th column in the data set. You can combine numbers with `-` and `:` inside of `select()` as well.
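For example, a small sketch of both combinations (not shown in the original recipe):
```
# Keep the first two columns
table1 %>% select(1:2)

# Drop the third and fourth columns
table1 %>% select(-(3:4))
```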
4\.8 Select columns by start of name
------------------------------------
You want to return every column in your data that begins with a specific string.
#### Solution
```
table1 %>%
select(starts_with("c"))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
4\.9 Select columns by end of name
----------------------------------
You want to return every column in your data that ends with a specific string.
#### Solution
```
table1 %>%
select(ends_with("tion"))
```
```
## # A tibble: 6 x 1
## population
## <int>
## 1 19987071
## 2 20595360
## 3 172006362
## 4 174504898
## 5 1272915272
## 6 1280428583
```
4\.10 Select columns by string in name
--------------------------------------
You want to return every column in your data whose name contains a specific string or regular expression.
#### Solution
```
table1 %>%
select(matches("o.*u"))
```
```
## # A tibble: 6 x 2
## country population
## <chr> <int>
## 1 Afghanistan 19987071
## 2 Afghanistan 20595360
## 3 Brazil 172006362
## 4 Brazil 174504898
## 5 China 1272915272
## 6 China 1280428583
```
#### Discussion
`o.*u` is a regular expression that matches an `o` followed by a `u` with any number of characters in between. `country` and `population` are returned because the names `country` and `population` each contain an `o` followed (at any distance) by a `u`.
See the help page for `?regex` to learn more about regular expressions in R.
4\.11 Reorder columns
---------------------
You want to return all of the columns in the original data frame in a new order.
#### Solution
```
table1 %>%
select(country, year, population, cases)
```
```
## # A tibble: 6 x 4
## country year population cases
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 19987071 745
## 2 Afghanistan 2000 20595360 2666
## 3 Brazil 1999 172006362 37737
## 4 Brazil 2000 174504898 80488
## 5 China 1999 1272915272 212258
## 6 China 2000 1280428583 213766
```
#### Discussion
Use `select()` to select all of the columns. List the column names in the new order.
4\.12 Reorder columns without naming each
-----------------------------------------
You want to reorder some of the columns in the original data frame, but you don’t care about the order for other columns, and you may have too many columns to name them each individually.
#### Solution
```
table1 %>%
select(country, year, everything())
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Use `everything()` within `select()` to select all of the columns in the order they are named: all columns are kept, and no columns are duplicated. Using `everything()` preserves the original ordering of the remaining (unnamed) columns.
4\.13 Rename columns
--------------------
You want to rename one or more columns in your data frame, retaining the rest.
#### Solution
```
table1 %>%
rename(state = country, date = year)
```
```
## # A tibble: 6 x 4
## state date cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
For each column to be renamed, type a new name for the column and set it equal to the old name for the column.
4\.14 Return the contents of a column as a vector
-------------------------------------------------
You want to return the contents of a single column as a vector, not as a data frame with one column.
#### Solution
```
table1 %>%
pull(cases)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
#### Discussion
`pull()` comes in the dplyr package. It does the equivalent of `pluck()` in the purrr package; however, `pull()` is designed to work specifically with data frames. `pluck()` is designed to work with all types of lists.
You can also pull a column by integer position:
```
table1 %>%
pull(3)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
4\.15 Mutate data (Add new variables)
-------------------------------------
You want to compute one or more new variables and add them to your table as columns.
#### Solution
```
table1 %>%
mutate(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 6
## country year cases population rate percent
## <chr> <int> <int> <int> <dbl> <dbl>
## 1 Afghanistan 1999 745 19987071 0.0000373 0.00373
## 2 Afghanistan 2000 2666 20595360 0.000129 0.0129
## 3 Brazil 1999 37737 172006362 0.000219 0.0219
## 4 Brazil 2000 80488 174504898 0.000461 0.0461
## 5 China 1999 212258 1272915272 0.000167 0.0167
## 6 China 2000 213766 1280428583 0.000167 0.0167
```
#### Discussion
To use `mutate()`, pass it a series of names followed by R expressions. `mutate()` will return a copy of your table that contains one column for each name that you pass to `mutate()`. The name of the column will be the name that you passed to `mutate()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a vector of the same length as the other columns in the data frame,[1](#fn1) because `mutate()` will add the vector as a new column.
In other words, `mutate()` is intended to be used with *vectorized functions*, which are functions that take a vector of values as input and return a new vector of values as output (e.g. `abs()`, `round()`, and all of R’s math operations).
`mutate()` will build the columns in the order that you define them. As a result, you may use a new column in the column definitions that follow it.
##### Dropping the original data
Use `transmute()` to return only the new columns that `mutate()` would create.
```
table1 %>%
transmute(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 2
## rate percent
## <dbl> <dbl>
## 1 0.0000373 0.00373
## 2 0.000129 0.0129
## 3 0.000219 0.0219
## 4 0.000461 0.0461
## 5 0.000167 0.0167
## 6 0.000167 0.0167
```
4\.16 Summarise data
--------------------
You want to compute summary statistics for the data in your data frame.
#### Solution
```
table1 %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## # A tibble: 1 x 2
## total_cases max_rate
## <int> <dbl>
## 1 547660 0.000461
```
#### Discussion
To use `summarise()`, pass it a series of names followed by R expressions. `summarise()` will return a new tibble that contains one column for each name that you pass to `summarise()`. The name of the column will be the name that you passed to `summarise()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a single value because `summarise()` will always return a 1 x n tibble.[2](#fn2)
In other words, `summarise()` is intended to be used with *summary functions*, which are functions that take a vector of values as input and return a single value as output (e.g. `sum()`, `max()`, `mean()`). In normal use, `summarise()` will pass each function a column (i.e. vector) of values and expect a single value in return.
`summarize()` is an alias for `summarise()`.
4\.17 Group data
----------------
You want to assign the rows of your data to subgroups based on their shared values or their shared combinations of values.
You want to do this as the first step in a multi\-step analysis, because grouping data doesn’t do anything noticeable until you pass the grouped data to a tidyverse function.
#### Solution
```
table1 %>%
group_by(country)
```
```
## # A tibble: 6 x 4
## # Groups: country [3]
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
`group_by()` converts your data into a grouped tibble, which is a tibble subclass that indicates in its attributes[3](#fn3) which rows belong to which group.
To group rows by the values of a single column, pass `group_by()` a single column name. To group rows by the unique *combination* of values across multiple columns, pass `group_by()` the names of two or more columns.
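For example, grouping `table1` by both columns creates one group per country\-year combination (six groups in this case):
```
table1 %>%
  group_by(country, year)
```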
##### Group\-wise operations
Where appropriate, tidyverse functions recognize grouped tibbles. Tidyverse functions:
1. treat each group as a distinct data set
2. execute their code separately on each group
3. combine the results into a new data frame that contains the same grouping characteristics. `summarise()` is a slight exception, see below.
4\.18 Summarise data by groups
------------------------------
You want to compute summary statistics for different subgroups of data in your grouped data frame.
#### Solution
```
table1 %>%
group_by(country) %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 3
## country total_cases max_rate
## <chr> <int> <dbl>
## 1 Afghanistan 3411 0.000129
## 2 Brazil 118225 0.000461
## 3 China 426024 0.000167
```
#### Discussion
Group\-wise summaries are the most common use of grouped data. When you apply `summarise()` to grouped data, `summarise()` will:
1. treat each group as a distinct data set
2. compute separate statistics for each group
3. combine the results into a new tibble
Since this is easier to see than explain, you may want to study the diagram and result above.
`summarise()` gives grouped data special treatment in two ways:
1. `summarise()` will retain the column(s) that were used to group the data in its result. This makes the output of grouped summaries interpretable.
2. `summarise()` will shorten the grouping criteria of its result by one column name (the last column name). Compare this to other tidyverse functions which give their result the *same* grouping criteria as their input.
For example, if the input of `summarise()` is grouped by `country` and `year`, the output of `summarise()` will only be grouped by `country`. Because of this, you can call `summarise()` repeatedly to view progressively higher level summaries:
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## # A tibble: 6 x 3
## # Groups: country [3]
## country year total_cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 2
## country total_cases
## <chr> <int>
## 1 Afghanistan 3411
## 2 Brazil 118225
## 3 China 426024
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 1 x 1
## total_cases
## <int>
## 1 547660
```
4\.19 Nest a data frame
-----------------------
You want to move portions of your data frame into their own tables, and then store those tables in cells in your original data frame.
This lets you manipulate the collection of tables with `filter()`, `select()`, `arrange()`, and so on, as you would normally manipulate a collection of values.
#### Solution
```
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
```
## # A tibble: 3 x 2
## # Groups: Species [3]
## Species Measurements
## <fct> <list>
## 1 setosa <tibble [50 × 4]>
## 2 versicolor <tibble [50 × 4]>
## 3 virginica <tibble [50 × 4]>
```
#### Discussion
`nest()` comes in the tidyr package. You can use it to nest portions of your data frame in two ways:
1. Pass `nest()` a *grouped data frame* made with `dplyr::group_by()` (as above). `nest()` will create a separate table for each group. The table will contain every row in the group and every column that is not part of the grouping criteria.
2. Pass `nest()` an ungrouped data frame and then specify which columns to nest. `nest()` will perform an implicit grouping on the combination of values that appear across the remaining columns, and then create a separate table for each implied grouping.
You can specify columns with the same syntax and helpers that you would use with dplyr’s `select()` function. So, for example, the two calls below will produce the same result as the solution above.
```
iris %>%
nest(Sepal.Width,
Sepal.Length,
Petal.Width,
Petal.Length,
.key = "Measurements") %>%
as_tibble()
iris %>%
nest(-Species, .key = "Measurements") %>%
as_tibble()
```
`nest()` preserves class, which means that `nest()` will return a data frame if its input is a data frame and a tibble if its input is a tibble. In each case, `nest()` will add the subtables to the result as a list\-column. Since list\-columns are much easier to view in a tibble than a data frame, I recommend that you convert the result of `nest()` to a tibble when necessary.
Use the `.key` argument to provide a name for the new list\-column.
4\.20 Extract a table from a nested data frame
----------------------------------------------
You want to extract a single table nested within a data frame.
For example, you want to extract the setosa table from the nested data set created in the solution above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
nested_iris %>%
filter(Species == "setosa") %>%
unnest()
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 50 x 5
## # Groups: Species [1]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 40 more rows
```
#### Discussion
Since the table is in a cell of the data frame, it is possible to extract the table by extracting the contents of the cell (as below). Be sure to use double brackets to extract the content itself, and not a 1 x 1 data frame that contains the cell.
```
nested_iris[[1, 2]]
```
However, this approach can be suboptimal for nested data. Often a nested table relies on information that is stored alongside it for meaning. For example, in the above example, the table relies on the Species variable to indicate that it describes setosa flowers.
You can extract a nested table along with its related information by first subsetting to just the relevant information and then unnesting the result.
4\.21 Unnest a data frame
-------------------------
You want to transform a nested data frame into a flat data frame by re\-integrating the nested tables into the surrounding data frame.
For example, you want to unnest the `nested_iris` data frame created in the recipe above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
unnest(nested_iris)
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 150 x 5
## # Groups: Species [3]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 140 more rows
```
#### Discussion
`unnest()` converts a list\-column into a regular column or columns, repeating the surrounding rows as necessary. `unnest()` can handle list columns that contain atomic vectors and data frames, but cannot handle list columns that contain objects that would not naturally fit into a data frame, such as model objects.
By default, `unnest()` will unnest every list\-column in a data frame. To unnest only a subset of list\-columns, pass the names of the list\-columns to `unnest()` using dplyr `select()` syntax or helpers. `unnest()` will ignore unnamed list columns, excluding them from the result to return a flat data frame. `unnest()` comes in the tidyr package.
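For example, naming the list\-column (as the warning above suggests) makes the call explicit:
```
nested_iris %>%
  unnest(cols = c(Measurements))
```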
4\.22 Combine transform recipes
-------------------------------
You want to transform the structure of a table, and you want to use the data within the table to do it. This is the sort of thing you do every time you call `summarise()` or `mutate()`.
#### Solution
See the discussion.
#### Discussion
When you transform a data frame, you often work with several functions:
1. A function that transforms the *table* itself, adding columns to its structure, or building a whole new table to hold results.
2. A function that transforms the values held in the table. Since values are always held in vectors (here column vectors), this function transforms a *vector* to create a new vector that can then be added to the empty column or table created by function 1\.
3. A function that mediates between 1 and 2 if the values are embedded in a list\-column, which is a *list*.
The most concise way to learn how to combine these functions is to learn about the functions in isolation, and to then have the functions call each other as necessary.
##### An example
`smiths` contains measurements that describe two fictional people: John and Mary Smith.
```
smiths
```
```
## # A tibble: 2 x 5
## subject time age weight height
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87
## 2 Mary Smith 1 NA NA 1.54
```
To round the `height` value for each person, you would need two functions:
1. `round()` which can round the values of the data vector stored in `smiths$height`.
```
round(smiths$height)
```
```
## [1] 2 2
```
2. `mutate()` which can add the results to a copy of the `smiths` table.
```
smiths %>%
mutate(height_int = round(height))
```
```
## # A tibble: 2 x 6
## subject time age weight height height_int
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87 2
## 2 Mary Smith 1 NA NA 1.54 2
```
`round()` works with data vectors. `mutate()` works with tables. Together they create the table you want.
##### An example that uses list columns
`sepals` is a tibble that I made for this example.[4](#fn4) It contains sepal length measurements for three species of flowers. The measurements are stored in a list\-column named `lengths`.
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
sepals
```
```
## # A tibble: 3 x 2
## Species lengths
## <fct> <list>
## 1 setosa <dbl [50]>
## 2 versicolor <dbl [50]>
## 3 virginica <dbl [50]>
```
Each cell in `lengths` contains a data vector of 50 sepal lengths.
```
# For example, the first cell of lengths
sepals[[1, 2]]
```
```
## [[1]]
## [1] 5.1 4.9 4.7 4.6 5.0 5.4 4.6 5.0 4.4 4.9 5.4 4.8 4.8 4.3 5.8 5.7 5.4 5.1 5.7
## [20] 5.1 5.4 5.1 4.6 5.1 4.8 5.0 5.0 5.2 5.2 4.7 4.8 5.4 5.2 5.5 4.9 5.0 5.5 4.9
## [39] 4.4 5.1 5.0 4.5 4.4 5.0 5.1 4.8 5.1 4.6 5.3 5.0
```
To add the average sepal length for each species to the table, you would need three functions:
1. `mean()` which can compute the average of a data vector in `lengths`
```
mean(sepals[[1, 2]])
```
```
## Warning in mean.default(sepals[[1, 2]]): argument is not numeric or logical:
## returning NA
```
```
## [1] NA
```
2. `map_dbl()` which can apply `mean()` to each cell of `lengths`, which is a list\-column.
```
map_dbl(sepals$lengths, mean)
```
```
## [1] 5.006 5.936 6.588
```
3. `mutate()` which can add the results to a copy of the `sepals` table.
```
sepals %>%
mutate(avg_length = map_dbl(lengths, mean))
```
```
## # A tibble: 3 x 3
## Species lengths avg_length
## <fct> <list> <dbl>
## 1 setosa <dbl [50]> 5.01
## 2 versicolor <dbl [50]> 5.94
## 3 virginica <dbl [50]> 6.59
```
`mean()` works with data vectors. `map_dbl()` works with list\-columns. `mutate()` works with tables. Together they create the table you want.
4\.23 Join data sets by common column(s)
----------------------------------------
You want to combine two data frames into a single data frame, such that observations in the first data frame are matched to the corresponding observations in the second data frame, even if those observations do not appear in the same order.
Your data is structured in such a way that you can match observations by the values of one or more ID columns that appear in both data frames.
For example, you would like to combine `band_members` and `band_instruments` into a single data frame based on the values of the `name` column. The new data frame will correctly list who plays what.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
left_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
#### Discussion
There are four ways to join content from one data frame to another. Each uses a different function name, but the same arguments and syntax. Of these, `left_join()` is the most common.
1. `left_join()` drops any row in the *second* data set that does not match a row in the first data set. (It retains every row in the first data set, which appears on the left when you type the function call, hence the name.)
2. `right_join()` drops any row in the *first* data set that does not match a row in the *second* data set.
```
band_members %>%
right_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
## 3 Keith <NA> guitar
```
3. `inner_join()` drops any row in *either* data set that does not have a match in both data sets, i.e. `inner_join()` does not retain any incomplete rows in the final result.
```
band_members %>%
inner_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
```
4. `full_join()` retains every row from both data sets; it is the only join guaranteed to retain all of the original data.
```
band_members %>%
full_join(band_instruments, by = "name")
```
```
## # A tibble: 4 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
## 4 Keith <NA> guitar
```
##### Mutating joins
`left_join()`, `right_join()`, `inner_join()`, and `full_join()` are collectively called *mutating joins* because they add additional columns to a copy of a data set, as does `mutate()`.
##### Specifying column(s) to join on
By default, the mutating join functions will join on the set of columns whose names appear in both data frames. The join functions will match each row in the first data frame to the row in the second data frame that has the same combination of values across the commonly named columns.
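For example, if you omit `by` entirely, dplyr will join on every commonly named column and print a message telling you which column(s) it used. This quick sketch produces the same result as the solution above:
```
band_members %>%
  left_join(band_instruments)   # dplyr will report which column(s) it joined on
```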
To override the default, add a `by` argument to your join function. Pass it the name(s) of the column(s) to join on as a character vector. These names should appear in both data sets. R will join together rows that contain the same combination of values in these columns, ignoring the values in other columns, even if those columns share a name with a column in the other data frame.
```
table1 %>%
left_join(table3, by = c("country", "year"))
```
```
## # A tibble: 6 x 5
## country year cases population rate
## <chr> <int> <int> <int> <chr>
## 1 Afghanistan 1999 745 19987071 745/19987071
## 2 Afghanistan 2000 2666 20595360 2666/20595360
## 3 Brazil 1999 37737 172006362 37737/172006362
## 4 Brazil 2000 80488 174504898 80488/174504898
## 5 China 1999 212258 1272915272 212258/1272915272
## 6 China 2000 213766 1280428583 213766/1280428583
```
##### Joining when ID names do not match
Often an ID variable will appear with a different name in each data frame. For example, the `name` variable appears as `artist` in `band_instruments2`.
```
band_instruments2
```
```
## # A tibble: 3 x 2
## artist plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
To join by two columns that have different names, pass `by` a named character vector: each element of the vector should be a pair of names, e.g.
```
c(name1 = "name2", name3 = "name4")
```
For each element,
1. Write the name of the column that appears in the first data frame
2. Write an equals sign
3. Write the name of the matching column that appears in the second data set.
Only the second name needs to be surrounded with quotation marks. The join will match the corresponding columns across data frames.
```
band_members %>%
left_join(band_instruments2, by = c(name = "artist"))
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
R will use the column name(s) from the first data set in the result.
##### Suffixes
Joins will append the suffixes `.x` and `.y` to any columns that have the same name in both data sets *but are not used to join on*. The columns from the first data set are suffixed with `.x`, the columns from the second with `.y`.
```
table4a %>%
left_join(table4b, by = "country")
```
```
## # A tibble: 3 x 5
## country `1999.x` `2000.x` `1999.y` `2000.y`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
To use different suffixes, supply a character vector of length two as a `suffix` argument.
```
table4a %>%
left_join(table4b, by = "country", suffix = c("_cases", "_pop"))
```
```
## # A tibble: 3 x 5
## country `1999_cases` `2000_cases` `1999_pop` `2000_pop`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
4\.24 Find rows that have a match in another data set
-----------------------------------------------------
You want to find the rows in one data frame that have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
semi_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 2
## name band
## <chr> <chr>
## 1 John Beatles
## 2 Paul Beatles
```
#### Discussion
`semi_join()` returns only the rows of the first data frame that *have* a match in the second data frame. A match is a row that would be combined with the first row by a [mutating join](transform-tables.html#joins). This makes `semi_join()` a useful way to preview which rows will be retained by a mutating join.
`semi_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match).
##### Filtering joins
`semi_join()` and `anti_join()` (see below) are called *filtering joins* because they filter a data frame to only those rows that meet specific criteria, as does `filter()`.
Unlike mutating joins, filtering joins do not add columns from the second data frame to the first. Instead, they use the second data frame to identify rows to return from the first.
When you need to filter on a complicated set of conditions, a filtering join can be more effective than `filter()`: use `tribble()` to build a data frame of the conditions you want to keep, then filter against it with a filtering join, as in the sketch below.
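For example, here is a sketch that uses `tribble()` to spell out the rows to keep (the `criteria` data frame is made up for illustration):
```
criteria <- tibble::tribble(
  ~name,  ~band,
  "John", "Beatles",
  "Paul", "Beatles"
)

# keep only the rows of band_members that match a row in criteria
band_members %>%
  semi_join(criteria, by = c("name", "band"))
```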
4\.25 Find rows that do not have a match in another data set
------------------------------------------------------------
You want to find the rows in one data frame that *do not* have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that do not have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
anti_join(band_instruments, by = "name")
```
```
## # A tibble: 1 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
```
#### Discussion
`anti_join()` returns only the rows of the first data frame that *do not have* a match in the second data frame. A match is a row that would be combined with the first row by a [mutating join](transform-tables.html#joins). This makes `anti_join()` a useful way to debug a mutating join.
`anti_join()` provides a useful way to check for typos that could interfere with a mutating join; these rows will not have a match in the second data frame (assuming that the typo does not also appear in the second data frame).
`anti_join()` also highlights entries that are coded in different ways across data frames, such as `"North_Carolina"` and `"North Carolina"`.
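For example, here is a hypothetical sketch: `instruments_typo` is an invented copy of `band_instruments` with one misspelled name, which `anti_join()` surfaces immediately:
```
instruments_typo <- tibble::tribble(
  ~name,   ~plays,
  "John",  "guitar",
  "Paul",  "bass",
  "Kieth", "guitar"   # deliberately misspelled
)

# the misspelled "Kieth" has no match in band_members, so it is returned
instruments_typo %>%
  anti_join(band_members, by = "name")
```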
`anti_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match). Along with `semi_join()`, `anti_join()` is one of the two [Filtering joins](transform-tables.html#filtering-joins).
What you should know before you begin
-------------------------------------
The dplyr package provides the most important tidyverse functions for manipulating tables. These functions share some defaults that make it easy to transform tables:
1. dplyr functions always return a transformed *copy* of your table. They won’t change your original table unless you tell them to (by saving over the name of the original table). That’s good news, because you should always retain a clean copy of your original data in case something goes wrong.
2. You can refer to columns by name inside of a dplyr function. There’s no need for `$` syntax or `""`. Every dplyr function requires you to supply a data frame, and it will recognize the columns in that data frame, e.g.
`summarise(mpg, h = mean(hwy), c = mean(cty))`
This only becomes a problem if you’d like to use an object that has the same name as one of the columns in the data frame. In this case, place `!!` before the object’s name to unquote it; dplyr will then skip the columns when looking up the object, e.g.
`hwy <- 1:10`
`summarise(mpg, h = mean(!!hwy), c = mean(cty))`
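Here is a minimal, self-contained version of that example (it assumes dplyr and ggplot2, which supplies `mpg`, are installed):
```
library(dplyr)
library(ggplot2)   # the mpg data set lives here

hwy <- 1:10        # an object whose name collides with the hwy column

mpg %>%
  summarise(h = mean(!!hwy),   # !!hwy uses the object above, i.e. mean(1:10)
            c = mean(cty))     # cty is still looked up as a column of mpg
```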
Transforming a table sometimes requires more than one recipe. Why? Because tables are made of multiple data structures that work together:
1. The table itself is a data frame or tibble.
2. The columns of the table are vectors.
3. Some columns may be list\-columns, which are lists that contain vectors.
Each tidyverse function tends to focus on a single type of data structure; it is part of the tidyverse philosophy that each function should do one thing and do it well.
So to transform a table, begin with a recipe that transforms the structure of the table. You’ll find those recipes in this chapter. Then complete it with a recipe that transforms the actual data values in your table. The [Combine transform recipes](transform-tables.html#combine-transform-recipes) recipe will show you how.
4\.1 Arrange rows by value in ascending order
---------------------------------------------
You want to sort the rows of a data frame in **ascending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(displ)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 5 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 6 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 7 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 10 honda civic 1.8 2008 4 manual… f 26 34 r subco…
## # … with 224 more rows
```
#### Discussion
`arrange()` sorts the rows according to the values of the specified column, with the lowest values appearing near the top of the data frame. If you provide additional column names, `arrange()` will use the additional columns in order as tiebreakers to sort within rows that share the same value of the first column.
```
mpg %>%
arrange(displ, cty)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 5 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 6 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 7 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 volkswagen passat 1.8 1999 4 auto(l… f 18 29 p midsi…
## 10 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## # … with 224 more rows
```
4\.2 Arrange rows by value in descending order
----------------------------------------------
You want to sort the rows of a data frame in **descending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(desc(displ))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet c1500 su… 5.7 1999 8 auto(… r 13 17 r suv
## 10 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## # … with 224 more rows
```
#### Discussion
Place `desc()` around a column name to cause `arrange()` to sort by descending values of that column. You can use `desc()` for tie\-breaker columns as well (compare line nine below to the table above).
```
mpg %>%
arrange(desc(displ), desc(cty))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## 10 chevrolet corvette 5.7 1999 8 auto(… r 15 23 p 2sea…
## # … with 224 more rows
```
4\.3 Filter rows with a logical test
------------------------------------
You want to filter your table to just the rows that meet a specific condition.
#### Solution
```
mpg %>%
filter(model == "jetta")
```
```
## # A tibble: 9 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 volkswagen jetta 1.9 1999 4 manual(m5) f 33 44 d compa…
## 2 volkswagen jetta 2 1999 4 manual(m5) f 21 29 r compa…
## 3 volkswagen jetta 2 1999 4 auto(l4) f 19 26 r compa…
## 4 volkswagen jetta 2 2008 4 auto(s6) f 22 29 p compa…
## 5 volkswagen jetta 2 2008 4 manual(m6) f 21 29 p compa…
## 6 volkswagen jetta 2.5 2008 5 auto(s6) f 21 29 r compa…
## 7 volkswagen jetta 2.5 2008 5 manual(m5) f 21 29 r compa…
## 8 volkswagen jetta 2.8 1999 6 auto(l4) f 16 23 r compa…
## 9 volkswagen jetta 2.8 1999 6 manual(m5) f 17 24 r compa…
```
#### Discussion
`filter()` takes a logical test and returns the rows for which the logical test returns `TRUE`. `filter()` will match column names that appear within the logical test to columns in your data frame.
If you provide multiple logical tests, `filter()` will combine them with an AND operator (`&`):
```
mpg %>%
filter(model == "jetta", year == 1999)
```
Use R’s boolean operators, like `|` and `!`, to create other combinations of logical tests to pass to `filter()`. See the help pages for `?Comparison` and `?Logic` to learn more about writing logical tests in R.
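For example, a quick sketch that combines tests with OR (`|`):
```
mpg %>%
  filter(model == "jetta" | model == "passat")
```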
4\.4 Select columns by name
---------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to return.
#### Solution
```
table1 %>%
select(country, year, cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
`select()` returns a new data frame that includes each column passed to `select()`. Repeat a name to include the column twice.
4\.5 Drop columns by name
-------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to drop.
#### Solution
```
table1 %>%
select(-c(population, year))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
#### Discussion
If you use a `-` before a column name, `select()` will return every column in the data frame except that column. To drop more than one column at a time, group the columns into a vector preceded by `-`.
4\.6 Select a range of columns
------------------------------
You want to return two columns from a data frame as well as every column that appears between them.
#### Solution
```
table1 %>%
select(country:cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
If you combine two column names with a `:`, `select()` will return both columns and every column that appears between them in the data frame.
4\.7 Select columns by integer position
---------------------------------------
You want to return a “subset” of columns from your data frame by listing the position of each column to return.
#### Solution
```
table1 %>%
select(1, 2, 4)
```
```
## # A tibble: 6 x 3
## country year population
## <chr> <int> <int>
## 1 Afghanistan 1999 19987071
## 2 Afghanistan 2000 20595360
## 3 Brazil 1999 172006362
## 4 Brazil 2000 174504898
## 5 China 1999 1272915272
## 6 China 2000 1280428583
```
#### Discussion
`select()` interprets the whole number *n* as the *n*th column in the data set. You can combine numbers with `-` and `:` inside of `select()` as well.
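For example, a quick sketch (the column positions in `table1` are country, year, cases, population):
```
table1 %>%
  select(1:3)    # the first three columns

table1 %>%
  select(-2)     # every column except the second (year)
```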
4\.8 Select columns by start of name
------------------------------------
You want to return every column in your data that begins with a specific string.
#### Solution
```
table1 %>%
select(starts_with("c"))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
4\.9 Select columns by end of name
----------------------------------
You want to return every column in your data that ends with a specific string.
#### Solution
```
table1 %>%
select(ends_with("tion"))
```
```
## # A tibble: 6 x 1
## population
## <int>
## 1 19987071
## 2 20595360
## 3 172006362
## 4 174504898
## 5 1272915272
## 6 1280428583
```
4\.10 Select columns by string in name
--------------------------------------
You want to return every column in your data whose name contains a specific string or regular expression.
#### Solution
```
table1 %>%
select(matches("o.*u"))
```
```
## # A tibble: 6 x 2
## country population
## <chr> <int>
## 1 Afghanistan 19987071
## 2 Afghanistan 20595360
## 3 Brazil 172006362
## 4 Brazil 174504898
## 5 China 1272915272
## 6 China 1280428583
```
#### Discussion
`o.*u` is a regular expression that matches an `o` followed by a `u` with any number of characters in between. `country` and `population` are returned because the names `country` and `population` each contain an `o` followed (at any distance) by a `u`.
See the help page for `?regex` to learn more about regular expressions in R.
4\.11 Reorder columns
---------------------
You want to return all of the columns in the original data frame in a new order.
#### Solution
```
table1 %>%
select(country, year, population, cases)
```
```
## # A tibble: 6 x 4
## country year population cases
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 19987071 745
## 2 Afghanistan 2000 20595360 2666
## 3 Brazil 1999 172006362 37737
## 4 Brazil 2000 174504898 80488
## 5 China 1999 1272915272 212258
## 6 China 2000 1280428583 213766
```
#### Discussion
Use `select()` to select all of the columns. List the column names in the new order.
4\.12 Reorder columns without naming each
-----------------------------------------
You want to reorder some of the columns in the original data frame, but you don’t care about the order for other columns, and you may have too many columns to name them each individually.
#### Solution
```
table1 %>%
select(country, year, everything())
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Use `everything()` within `select()` to select all of the columns in the order they are named: all columns are kept, and no columns are duplicated. Using `everything()` preserves the original ordering of the remaining (unnamed) columns.
4\.13 Rename columns
--------------------
You want to rename one or more columns in your data frame, retaining the rest.
#### Solution
```
table1 %>%
rename(state = country, date = year)
```
```
## # A tibble: 6 x 4
## state date cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
For each column to be renamed, type a new name for the column and set it equal to the old name for the column.
4\.14 Return the contents of a column as a vector
-------------------------------------------------
You want to return the contents of a single column as a vector, not as a data frame with one column.
#### Solution
```
table1 %>%
pull(cases)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
#### Discussion
`pull()` comes in the dplyr package. It does the equivalent of `pluck()` in the purrr package; however, `pull()` is designed to work specifically with data frames. `pluck()` is designed to work with all types of lists.
You can also pull a column by integer position:
```
table1 %>%
pull(3)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
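For comparison, a sketch of the purrr equivalent (assuming purrr is loaded):
```
library(purrr)

table1 %>%
  pluck("cases")
```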
4\.15 Mutate data (Add new variables)
-------------------------------------
You want to compute one or more new variables and add them to your table as columns.
#### Solution
```
table1 %>%
mutate(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 6
## country year cases population rate percent
## <chr> <int> <int> <int> <dbl> <dbl>
## 1 Afghanistan 1999 745 19987071 0.0000373 0.00373
## 2 Afghanistan 2000 2666 20595360 0.000129 0.0129
## 3 Brazil 1999 37737 172006362 0.000219 0.0219
## 4 Brazil 2000 80488 174504898 0.000461 0.0461
## 5 China 1999 212258 1272915272 0.000167 0.0167
## 6 China 2000 213766 1280428583 0.000167 0.0167
```
#### Discussion
To use `mutate()`, pass it a series of names followed by R expressions. `mutate()` will return a copy of your table that contains one column for each name that you pass to `mutate()`. The name of the column will be the name that you passed to `mutate()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a vector of the same length as the other columns in the data frame,[1](#fn1) because `mutate()` will add the vector as a new column.
In other words, `mutate()` is intended to be used with *vectorized functions*, which are functions that take a vector of values as input and return a new vector of values as output (e.g. `abs()`, `round()`, and all of R’s math operations).
`mutate()` will build the columns in the order that you define them. As a result, you may use a new column in the column definitions that follow it.
##### Dropping the original data
Use `transmute()` to return only the new columns that `mutate()` would create.
```
table1 %>%
transmute(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 2
## rate percent
## <dbl> <dbl>
## 1 0.0000373 0.00373
## 2 0.000129 0.0129
## 3 0.000219 0.0219
## 4 0.000461 0.0461
## 5 0.000167 0.0167
## 6 0.000167 0.0167
```
4\.16 Summarise data
--------------------
You want to compute summary statistics for the data in your data frame.
#### Solution
```
table1 %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## # A tibble: 1 x 2
## total_cases max_rate
## <int> <dbl>
## 1 547660 0.000461
```
#### Discussion
To use `summarise()`, pass it a series of names followed by R expressions. `summarise()` will return a new tibble that contains one column for each name that you pass to `summarise()`. The name of the column will be the name that you passed to `summarise()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a single value because `summarise()` will always return a 1 x n tibble.[2](#fn2)
In other words, `summarise()` is intended to be used with *summary functions*, which are functions that take a vector of values as input and return a single value as output (e.g. `sum()`, `max()`, `mean()`). In normal use, `summarise()` will pass each function a column (i.e. vector) of values and expect a single value in return.
`summarize()` is an alias for `summarise()`.
4\.17 Group data
----------------
You want to assign the rows of your data to subgroups based on their shared values or their shared combinations of values.
You want to do this as the first step in a multi\-step analysis, because grouping data doesn’t do anything noticeable until you pass the grouped data to a tidyverse function.
#### Solution
```
table1 %>%
group_by(country)
```
```
## # A tibble: 6 x 4
## # Groups: country [3]
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
`group_by()` converts your data into a grouped tibble, which is a tibble subclass that indicates in its attributes[3](#fn3) which rows belong to which group.
To group rows by the values of a single column, pass `group_by()` a single column name. To group rows by the unique *combination* of values across multiple columns, pass `group_by()` the names of two or more columns.
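For example, this sketch groups `table1` by every country-year combination:
```
table1 %>%
  group_by(country, year)
```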
##### Group\-wise operations
Where appropriate, tidyverse functions recognize grouped tibbles. Tidyverse functions:
1. treat each group as a distinct data set
2. execute their code separately on each group
3. combine the results into a new data frame that contains the same grouping characteristics. (`summarise()` is a slight exception; see below.)
4\.18 Summarise data by groups
------------------------------
You want to compute summary statistics for different subgroups of data in your grouped data frame.
#### Solution
```
table1 %>%
group_by(country) %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 3
## country total_cases max_rate
## <chr> <int> <dbl>
## 1 Afghanistan 3411 0.000129
## 2 Brazil 118225 0.000461
## 3 China 426024 0.000167
```
#### Discussion
Group\-wise summaries are the most common use of grouped data. When you apply `summarise()` to grouped data, `summarise()` will:
1. treat each group as a distinct data set
2. compute separate statistics for each group
3. combine the results into a new tibble
Since this is easier to see than explain, you may want to study the result above.
`summarise()` gives grouped data special treatment in two ways:
1. `summarise()` will retain the column(s) that were used to group the data in its result. This makes the output of grouped summaries interpretable.
2. `summarise()` will shorten the grouping criteria of its result by one column name (the last column name). Compare this to other tidyverse functions which give their result the *same* grouping criteria as their input.
For example, if the input of `summarise()` is grouped by `country` and `year`, the output of `summarise()` will only be grouped by `country`. Because of this, you can call `summarise()` repeatedly to view progressively higher level summaries:
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## # A tibble: 6 x 3
## # Groups: country [3]
## country year total_cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 2
## country total_cases
## <chr> <int>
## 1 Afghanistan 3411
## 2 Brazil 118225
## 3 China 426024
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 1 x 1
## total_cases
## <int>
## 1 547660
```
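In recent versions of dplyr (1.0.0 and later, if I recall correctly), you can silence these messages and control the grouping of the result explicitly with the `.groups` argument:
```
table1 %>%
  group_by(country, year) %>%
  summarise(total_cases = sum(cases), .groups = "drop")
```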
4\.19 Nest a data frame
-----------------------
You want to move portions of your data frame into their own tables, and then store those tables in cells in your original data frame.
This lets you manipulate the collection of tables with `filter()`, `select()`, `arrange()`, and so on, as you would normally manipulate a collection of values.
#### Solution
```
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
```
## # A tibble: 3 x 2
## # Groups: Species [3]
## Species Measurements
## <fct> <list>
## 1 setosa <tibble [50 × 4]>
## 2 versicolor <tibble [50 × 4]>
## 3 virginica <tibble [50 × 4]>
```
#### Discussion
`nest()` comes in the tidyr package. You can use it to nest portions of your data frame in two ways:
1. Pass `nest()` a *grouped data frame* made with `dplyr::group_by()` (as above). `nest()` will create a separate table for each group. The table will contain every row in the group and every column that is not part of the grouping criteria.
2. Pass `nest()` an ungrouped data frame and then specify which columns to nest. `nest()` will perform an implicit grouping on the combination of values that appear across the remaining columns, and then create a separate table for each implied grouping.
You can specify columns with the same syntax and helpers that you would use with dplyr’s `select()` function. So, for example, the two calls below will produce the same result as the solution above.
```
iris %>%
nest(Sepal.Width,
Sepal.Length,
Petal.Width,
Petal.Length,
.key = "Measurements") %>%
as_tibble()
iris %>%
nest(-Species, .key = "Measurements") %>%
as_tibble()
```
`nest()` preserves class, which means that `nest()` will return a data frame if its input is a data frame and a tibble if its input is a tibble. In each case, `nest()` will add the subtables to the result as a list\-column. Since list\-columns are much easier to view in a tibble than in a data frame, I recommend that you convert the result of `nest()` to a tibble when necessary.
Use the `.key` argument to provide a name for the new list\-column.
4\.20 Extract a table from a nested data frame
----------------------------------------------
You want to extract a single table nested within a data frame.
For example, you want to extract the setosa table from the nested data set created in the solution above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
nested_iris %>%
filter(Species == "setosa") %>%
unnest()
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 50 x 5
## # Groups: Species [1]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 40 more rows
```
#### Discussion
Since the table is in a cell of the data frame, it is possible to extract the table by extracting the contents of the cell (as below). Be sure to use double brackets to extract the content itself, and not a 1 x 1 data frame that contains the cell.
```
nested_iris[[1, 2]]
```
However, this approach can be suboptimal for nested data. Often a nested table relies on information that is stored alongside it for meaning. For example, in the above example, the table relies on the Species variable to indicate that it describes setosa flowers.
You can extract a nested table along with its related information by first subsetting to just the relevant information and then unnesting the result.
4\.21 Unnest a data frame
-------------------------
You want to transform a nested data frame into a flat data frame by re\-integrating the nested tables into the surrounding data frame.
For example, you want to unnest the `nested_iris` data frame created in the recipe above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
unnest(nested_iris)
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 150 x 5
## # Groups: Species [3]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 140 more rows
```
#### Discussion
`unnest()` converts a list\-column into a regular column or columns, repeating the surrounding rows as necessary. `unnest()` can handle list columns that contain atomic vectors and data frames, but cannot handle list columns that contain objects that would not naturally fit into a data frame, such as model objects.
By default, `unnest()` will unnest every list\-column in a data frame. To unnest only a subset of list\-columns, pass the names of the list\-columns to `unnest()` using dplyr `select()` syntax or helpers. `unnest()` will ignore unnamed list columns, excluding them from the result to return a flat data frame. `unnest()` comes in the tidyr package.
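For example, the warnings above suggest naming the list-column explicitly; a sketch with the newer `cols` interface:
```
nested_iris %>%
  unnest(cols = c(Measurements))
```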
4\.22 Combine transform recipes
-------------------------------
You want to transform the structure of a table, and you want to use the data within the table to do it. This is the sort of thing you do every time you call `summarise()` or `mutate()`.
#### Solution
See the discussion.
#### Discussion
When you transform a data frame, you often work with several functions:
1. A function that transforms the *table* itself, adding columns to its structure, or building a whole new table to hold results.
2. A function that transforms the values held in the table. Since values are always held in vectors (here column vectors), this function transforms a *vector* to create a new vector that can then be added to the empty column or table created by function 1\.
3. A function that mediates between 1 and 2 if the values are embedded in a list\-column, which is a *list*.
The most concise way to learn how to combine these functions is to learn about the functions in isolation, and to then have the functions call each other as necessary.
##### An example
`smiths` contains measurements that describe two fictional people: John and Mary Smith.
```
smiths
```
```
## # A tibble: 2 x 5
## subject time age weight height
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87
## 2 Mary Smith 1 NA NA 1.54
```
To round the `height` value for each person, you would need two functions:
1. `round()` which can round the values of the data vector stored in `smiths$height`.
```
round(smiths$height)
```
```
## [1] 2 2
```
2. `mutate()` which can add the results to a copy of the `smiths` table.
```
smiths %>%
mutate(height_int = round(height))
```
```
## # A tibble: 2 x 6
## subject time age weight height height_int
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87 2
## 2 Mary Smith 1 NA NA 1.54 2
```
`round()` works with data vectors. `mutate()` works with tables. Together they create the table you want.
##### An example that uses list columns
`sepals` is a tibble that I made for this example.[4](#fn4) It contains sepal length measurements for three species of flowers. The measurements are stored in a list\-column named `lengths`.
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
sepals
```
```
## # A tibble: 3 x 2
## Species lengths
## <fct> <list>
## 1 setosa <dbl [50]>
## 2 versicolor <dbl [50]>
## 3 virginica <dbl [50]>
```
Each cell in `lengths` contains a data vector of 50 sepal lengths.
```
# For example, the first cell of lengths
sepals[[1, 2]]
```
```
## [[1]]
## [1] 5.1 4.9 4.7 4.6 5.0 5.4 4.6 5.0 4.4 4.9 5.4 4.8 4.8 4.3 5.8 5.7 5.4 5.1 5.7
## [20] 5.1 5.4 5.1 4.6 5.1 4.8 5.0 5.0 5.2 5.2 4.7 4.8 5.4 5.2 5.5 4.9 5.0 5.5 4.9
## [39] 4.4 5.1 5.0 4.5 4.4 5.0 5.1 4.8 5.1 4.6 5.3 5.0
```
To add the average sepal length for each species to the table, you would need three functions:
1. `mean()` which can compute the average of a data vector in `lengths`
```
mean(sepals[[1, 2]])
```
```
## Warning in mean.default(sepals[[1, 2]]): argument is not numeric or logical:
## returning NA
```
```
## [1] NA
```
2. `map_dbl()` which can apply `mean()` to each cell of `lengths`, which is a list\-column.
```
map_dbl(sepals$lengths, mean)
```
```
## [1] 5.006 5.936 6.588
```
3. `mutate()` which can add the results to a copy of the `sepals` table.
```
sepals %>%
mutate(avg_length = map_dbl(lengths, mean))
```
```
## # A tibble: 3 x 3
## Species lengths avg_length
## <fct> <list> <dbl>
## 1 setosa <dbl [50]> 5.01
## 2 versicolor <dbl [50]> 5.94
## 3 virginica <dbl [50]> 6.59
```
`mean()` works with data vectors. `map_dbl()` works with list\-columns. `mutate()` works with tables. Together they create the table you want.
4\.23 Join data sets by common column(s)
----------------------------------------
You want to combine two data frames into a single data frame, such that observations in the first data frame are matched to the corresponding observations in the second data frame, even if those observations do not appear in the same order.
Your data is structured in such a way that you can match observations by the values of one or more ID columns that appear in both data frames.
For example, you would like to combine `band_members` and `band_instruments` into a single data frame based on the values of the `name` column. The new data frame will correctly list who plays what.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
left_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
#### Discussion
There are four ways to join content from one data frame to another. Each uses a different function name, but the same arguments and syntax. Of these, `left_join()` is the most common.
1. `left_join()` drops any row in the *second* data set that does not match a row in the first data set. (It retains every row in the first data set, which appears on the left when you type the function call, hence the name).
2. `right_join()` drops any row in the *first* data set that does not match a row in the second data set.
```
band_members %>%
right_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
## 3 Keith <NA> guitar
```
3. `inner_join()` drops any row in *either* data set that does not have a match in both data sets, i.e. `inner_join()` does not retain any incomplete rows in the final result.
```
band_members %>%
inner_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
```
4. `full_join()` retains every row from both data sets; it is the only join guaranteed to retain all of the original data.
```
band_members %>%
full_join(band_instruments, by = "name")
```
```
## # A tibble: 4 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
## 4 Keith <NA> guitar
```
##### Mutating joins
`left_join()`, `right_join()`, `inner_join()`, and `full_join()` are collectively called *mutating joins* because they add additional columns to a copy of a data set, as does `mutate()`.
##### Specifying column(s) to join on
By default, the mutating join functions will join on the set of columns whose names appear in both data frames. The join functions will match each row in the first data frame to the row in the second data frame that has the same combination of values across the commonly named columns.
To override the default, add a `by` argument to your join function. Pass it the name(s) of the column(s) to join on as a character vector. These names should appear in both data sets. R will join together rows that contain the same combination of values in these columns, ignoring the values in other columns, even if those columns share a name with a column in the other data frame.
```
table1 %>%
left_join(table3, by = c("country", "year"))
```
```
## # A tibble: 6 x 5
## country year cases population rate
## <chr> <int> <int> <int> <chr>
## 1 Afghanistan 1999 745 19987071 745/19987071
## 2 Afghanistan 2000 2666 20595360 2666/20595360
## 3 Brazil 1999 37737 172006362 37737/172006362
## 4 Brazil 2000 80488 174504898 80488/174504898
## 5 China 1999 212258 1272915272 212258/1272915272
## 6 China 2000 213766 1280428583 213766/1280428583
```
##### Joining when ID names do not match
Often an ID variable will appear with a different name in each data frame. For example, the `name` variable appears as `artist` in `band_instruments2`.
```
band_instruments2
```
```
## # A tibble: 3 x 2
## artist plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
To join by two columns that have different names, pass `by` a named character vector: each element of the vector should be a pair of names, e.g.
```
c(name1 = "name2", name4 = "name4")
```
For each element,
1. Write the name of the column that appears in the first data frame
2. Write an equals sign
3. Write the name of the matching column that appears in the second data set.
Only the second name needs to be surrounded with quotation marks. The join will match the corresponding columns across data frames.
```
band_members %>%
left_join(band_instruments2, by = c(name = "artist"))
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
R will use the column name(s) from the first data set in the result.
##### Suffixes
Joins will append the suffixes `.x` and `.y` to any columns that have the same name in both data sets *but are not used to join on*. The columns from the first data set are suffixed with `.x`, the columns from the second with `.y`.
```
table4a %>%
left_join(table4b, by = "country")
```
```
## # A tibble: 3 x 5
## country `1999.x` `2000.x` `1999.y` `2000.y`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
To use different suffixes, supply a character vector of length two as a `suffix` argument.
```
table4a %>%
left_join(table4b, by = "country", suffix = c("_cases", "_pop"))
```
```
## # A tibble: 3 x 5
## country `1999_cases` `2000_cases` `1999_pop` `2000_pop`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
4\.24 Find rows that have a match in another data set
-----------------------------------------------------
You want to find the rows in one data frame that have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
semi_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 2
## name band
## <chr> <chr>
## 1 John Beatles
## 2 Paul Beatles
```
#### Discussion
`semi_join()` returns only the rows of the first data frame that *have* a match in the second data frame. A match is a row that would be combined with the first row by a [mutating join](transform-tables.html#joins). This makes `semi_join()` a useful way to preview which rows will be retained by a mutating join.
`semi_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match).
##### Filtering joins
`semi_join()` and `anti_join()` (see below) are called *filtering joins* because they filter a data frame to only those rows that meet specific criteria, as does `filter()`.
Unlike mutating joins, filtering joins do not add columns from the second data frame to the first. Instead, they use the second data frame to identify rows to return from the first.
When you need to filter on a complicated set of conditions, filtering joins can be more effective than `filter()`: use `tribble()` to create a data frame to filter against with a filtering join.
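For example, here is a sketch of that pattern. The `criteria` table below is made up for illustration; any combination of ID columns would work the same way.

```
# Build a small table of acceptable combinations with tribble(),
# then keep only the rows of band_members that match it
criteria <- tribble(
  ~name,  ~band,
  "John", "Beatles",
  "Paul", "Beatles"
)

band_members %>%
  semi_join(criteria, by = c("name", "band"))
```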
4\.25 Find rows that do not have a match in another data set
------------------------------------------------------------
You want to find the rows in one data frame that *do not* have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that do not have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
anti_join(band_instruments, by = "name")
```
```
## # A tibble: 1 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
```
#### Discussion
`anti_join()` returns only the rows of the first data frame that *do not have* a match in the second data frame. A match is a row that would be combined with the first row by a [mutating join](transform-tables.html#joins). This makes `anti_join()` a useful way to debug a mutating join.
`anti_join()` provides a useful way to check for typos that could interfere with a mutating join; these rows will not have a match in the second data frame (assuming that the typo does not also appear in the second data frame).
`anti_join()` also highlights entries that are coded in different ways across data frames, such as `"North_Carolina"` and `"North Carolina"`.
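A hypothetical sketch of that use: the two tables below are invented, and `anti_join()` surfaces the spelling that fails to match.

```
states_a <- tibble(state = c("North_Carolina", "Virginia"))
states_b <- tibble(state = c("North Carolina", "Virginia"))

# Returns the row of states_a whose spelling has no match in states_b
states_a %>%
  anti_join(states_b, by = "state")
```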
`anti_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match). Along with `semi_join()`, `anti_join()` is one of the two [Filtering joins](transform-tables.html#filtering-joins).
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/transform-tables.html |
4 Transform Tables
==================
---
This chapter includes the following recipes:
1. [Arrange rows by value in ascending order](transform-tables.html#arrange-rows-by-value-in-ascending-order)
2. [Arrange rows by value in descending order](transform-tables.html#arrange-rows-by-value-in-descending-order)
3. [Filter rows with a logical test](transform-tables.html#filter-rows-with-a-logical-test)
4. [Select columns by name](transform-tables.html#select-columns-by-name)
5. [Drop columns by name](transform-tables.html#drop-columns-by-name)
6. [Select a range of columns](transform-tables.html#select-a-range-of-columns)
7. [Select columns by integer position](transform-tables.html#select-columns-by-integer-position)
8. [Select columns by start of name](transform-tables.html#select-columns-by-start-of-name)
9. [Select columns by end of name](transform-tables.html#select-columns-by-end-of-name)
10. [Select columns by string in name](transform-tables.html#select-columns-by-string-in-name)
11. [Reorder columns](transform-tables.html#reorder-columns)
12. [Reorder columns without naming each](transform-tables.html#reorder-columns-without-naming-each)
13. [Rename columns](transform-tables.html#rename-columns)
14. [Return the contents of a column as a vector](transform-tables.html#return-the-contents-of-a-column-as-a-vector)
15. [Mutate data (Add new variables)](transform-tables.html#mutate-data-add-new-variables)
16. [Summarise data](transform-tables.html#summarise-data)
17. [Group data](transform-tables.html#group-data)
18. [Summarise data by groups](transform-tables.html#summarise-data-by-groups)
19. [Nest a data frame](transform-tables.html#nest-a-data-frame)
20. [Extract a table from a nested data frame](transform-tables.html#extract-a-table-from-a-nested-data-frame)
21. [Unnest a data frame](transform-tables.html#unnest-a-data-frame)
22. [Combine transform recipes](transform-tables.html#combine-transform-recipes)
23. [Join data sets by common column(s)](transform-tables.html#joins)
24. [Find rows that have a match in another data set](transform-tables.html#find-rows-that-have-a-match-in-another-data-set)
25. [Find rows that do not have a match in another data set](transform-tables.html#find-rows-that-do-not-have-a-match-in-another-data-set)
---
What you should know before you begin
-------------------------------------
The dplyr package provides the most important tidyverse functions for manipulating tables. These functions share some defaults that make it easy to transform tables:
1. dplyr functions always return a transformed *copy* of your table. They won’t change your original table unless you tell them to (by saving over the name of the original table). That’s good news, because you should always retain a clean copy of your original data in case something goes wrong.
2. You can refer to columns by name inside of a dplyr function. There’s no need for `$` syntax or `""`. Every dplyr function requires you to supply a data frame, and it will recognize the columns in that data frame, e.g.
`summarise(mpg, h = mean(hwy), c = mean(cty))`
This only becomes a problem if you’d like to use an object that has the same name as one of the columns in the data frame. In this case, place `!!` before the object’s name to unquote it; dplyr will then skip the columns when looking up the object, e.g.
`hwy <- 1:10`
`summarise(mpg, h = mean(!!hwy), c = mean(cty))`
Transforming a table sometimes requires more than one recipe. Why? Because tables are made of multiple data structures that work together:
1. The table itself is a data frame or tibble.
2. The columns of the table are vectors.
3. Some columns may be list\-columns, which are lists that contain vectors.
Each tidyverse function tends to focus on a single type of data structure; it is part of the tidyverse philosophy that each function should do one thing and do it well.
So to transform a table, begin with a recipe that transforms the structure of the table. You’ll find those recipes in this chapter. Then complete it with a recipe that transforms the actual data values in your table. The [Combine transform recipes](transform-tables.html#combine-transform-recipes) recipe will show you how.
4\.1 Arrange rows by value in ascending order
---------------------------------------------
You want to sort the rows of a data frame in **ascending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(displ)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 5 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 6 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 7 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 10 honda civic 1.8 2008 4 manual… f 26 34 r subco…
## # … with 224 more rows
```
#### Discussion
`arrange()` sorts the rows according to the values of the specified column, with the lowest values appearing near the top of the data frame. If you provide additional column names, `arrange()` will use the additional columns in order as tiebreakers to sort within rows that share the same value of the first column.
```
mpg %>%
arrange(displ, cty)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 5 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 6 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 7 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 volkswagen passat 1.8 1999 4 auto(l… f 18 29 p midsi…
## 10 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## # … with 224 more rows
```
4\.2 Arrange rows by value in descending order
----------------------------------------------
You want to sort the rows of a data frame in **descending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(desc(displ))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet c1500 su… 5.7 1999 8 auto(… r 13 17 r suv
## 10 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## # … with 224 more rows
```
#### Discussion
Place `desc()` around a column name to cause `arrange()` to sort by descending values of that column. You can use `desc()` for tie\-breaker columns as well (compare line nine below to the table above).
```
mpg %>%
arrange(desc(displ), desc(cty))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## 10 chevrolet corvette 5.7 1999 8 auto(… r 15 23 p 2sea…
## # … with 224 more rows
```
4\.3 Filter rows with a logical test
------------------------------------
You want to filter your table to just the rows that meet a specific condition.
#### Solution
```
mpg %>%
filter(model == "jetta")
```
```
## # A tibble: 9 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 volkswagen jetta 1.9 1999 4 manual(m5) f 33 44 d compa…
## 2 volkswagen jetta 2 1999 4 manual(m5) f 21 29 r compa…
## 3 volkswagen jetta 2 1999 4 auto(l4) f 19 26 r compa…
## 4 volkswagen jetta 2 2008 4 auto(s6) f 22 29 p compa…
## 5 volkswagen jetta 2 2008 4 manual(m6) f 21 29 p compa…
## 6 volkswagen jetta 2.5 2008 5 auto(s6) f 21 29 r compa…
## 7 volkswagen jetta 2.5 2008 5 manual(m5) f 21 29 r compa…
## 8 volkswagen jetta 2.8 1999 6 auto(l4) f 16 23 r compa…
## 9 volkswagen jetta 2.8 1999 6 manual(m5) f 17 24 r compa…
```
#### Discussion
`filter()` takes a logical test and returns the rows for which the logical test returns `TRUE`. `filter()` will match column names that appear within the logical test to columns in your data frame.
If you provide multiple logical tests, `filter()` will combine them with an AND operator (`&`):
```
mpg %>%
filter(model == "jetta", year == 1999)
```
Use R’s boolean operators, like `|` and `!`, to create other combinations of logical tests to pass to `filter()`. See the help pages for `?Comparison` and `?Logic` to learn more about writing logical tests in R.
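For example, two sketches that combine logical tests on the `mpg` columns:

```
# Rows that pass either test (OR)
mpg %>%
  filter(model == "jetta" | hwy > 40)

# Jettas that are not from 1999 (AND combined with a negation)
mpg %>%
  filter(model == "jetta", !(year == 1999))
```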
4\.4 Select columns by name
---------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to return.
#### Solution
```
table1 %>%
select(country, year, cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
`select()` returns a new data frame that includes each column passed to `select()`. Repeat a name to include the column twice.
4\.5 Drop columns by name
-------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to drop.
#### Solution
```
table1 %>%
select(-c(population, year))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
#### Discussion
If you use a `-` before a column name, `select()` will return every column in the data frame except that column. To drop more than one column at a time, group the columns into a vector preceded by `-`.
4\.6 Select a range of columns
------------------------------
You want to return two columns from a data frame as well as every column that appears between them.
#### Solution
```
table1 %>%
select(country:cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
If you combine two column names with a `:`, `select()` will return both columns and every column that appears between them in the data frame.
4\.7 Select columns by integer position
---------------------------------------
You want to return a “subset” of columns from your data frame by listing the position of each column to return.
#### Solution
```
table1 %>%
select(1, 2, 4)
```
```
## # A tibble: 6 x 3
## country year population
## <chr> <int> <int>
## 1 Afghanistan 1999 19987071
## 2 Afghanistan 2000 20595360
## 3 Brazil 1999 172006362
## 4 Brazil 2000 174504898
## 5 China 1999 1272915272
## 6 China 2000 1280428583
```
#### Discussion
`select()` interprets the whole number *n* as the *n*th column in the data set. You can combine numbers with `-` and `:` inside of `select()` as well.
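For example, both calls below are sketches of that idea:

```
# The first two columns
table1 %>%
  select(1:2)

# Every column except the third and fourth
table1 %>%
  select(-(3:4))
```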
4\.8 Select columns by start of name
------------------------------------
You want to return every column in your data that begins with a specific string.
#### Solution
```
table1 %>%
select(starts_with("c"))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
4\.9 Select columns by end of name
----------------------------------
You want to return every column in your data that ends with a specific string.
#### Solution
```
table1 %>%
select(ends_with("tion"))
```
```
## # A tibble: 6 x 1
## population
## <int>
## 1 19987071
## 2 20595360
## 3 172006362
## 4 174504898
## 5 1272915272
## 6 1280428583
```
4\.10 Select columns by string in name
--------------------------------------
You want to return every column in your data whose name contains a specific string or regular expression.
#### Solution
```
table1 %>%
select(matches("o.*u"))
```
```
## # A tibble: 6 x 2
## country population
## <chr> <int>
## 1 Afghanistan 19987071
## 2 Afghanistan 20595360
## 3 Brazil 172006362
## 4 Brazil 174504898
## 5 China 1272915272
## 6 China 1280428583
```
#### Discussion
`o.*u` is a regular expression that matches an `o` followed by a `u` with any number of characters in between. `country` and `population` are returned because the names `country` and `population` each contain an `o` followed (at any distance) by a `u`.
See the help page for `?regex` to learn more about regular expressions in R.
4\.11 Reorder columns
---------------------
You want to return all of the columns in the original data frame in a new order.
#### Solution
```
table1 %>%
select(country, year, population, cases)
```
```
## # A tibble: 6 x 4
## country year population cases
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 19987071 745
## 2 Afghanistan 2000 20595360 2666
## 3 Brazil 1999 172006362 37737
## 4 Brazil 2000 174504898 80488
## 5 China 1999 1272915272 212258
## 6 China 2000 1280428583 213766
```
#### Discussion
Use `select()` to select all of the columns. List the column names in the new order.
4\.12 Reorder columns without naming each
-----------------------------------------
You want to reorder some of the columns in the original data frame, but you don’t care about the order for other columns, and you may have too many columns to name them each individually.
#### Solution
```
table1 %>%
select(country, year, everything())
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Use `everything()` within `select()` to select all of the columns in the order they are named: all columns are kept, and no columns are duplicated. Using `everything()` preserves the original ordering of the remaining (unnamed) columns.
4\.13 Rename columns
--------------------
You want to rename one or more columns in your data frame, retaining the rest.
#### Solution
```
table1 %>%
rename(state = country, date = year)
```
```
## # A tibble: 6 x 4
## state date cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
For each column to be renamed, type a new name for the column and set it equal to the old name for the column.
4\.14 Return the contents of a column as a vector
-------------------------------------------------
You want to return the contents of a single column as a vector, not as a data frame with one column.
#### Solution
```
table1 %>%
pull(cases)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
#### Discussion
`pull()` comes in the dplyr package. It does the equivalent of `pluck()` in the purrr package; however, `pull()` is designed to work specifically with data frames. `pluck()` is designed to work with all types of lists.
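For comparison, a rough `pluck()` equivalent of the solution above (purrr is loaded with the tidyverse):

```
table1 %>%
  pluck("cases") # returns the same vector as pull(cases)
```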
You can also pull a column by integer position:
```
table1 %>%
pull(3)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
4\.15 Mutate data (Add new variables)
-------------------------------------
You want to compute one or more new variables and add them to your table as columns.
#### Solution
```
table1 %>%
mutate(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 6
## country year cases population rate percent
## <chr> <int> <int> <int> <dbl> <dbl>
## 1 Afghanistan 1999 745 19987071 0.0000373 0.00373
## 2 Afghanistan 2000 2666 20595360 0.000129 0.0129
## 3 Brazil 1999 37737 172006362 0.000219 0.0219
## 4 Brazil 2000 80488 174504898 0.000461 0.0461
## 5 China 1999 212258 1272915272 0.000167 0.0167
## 6 China 2000 213766 1280428583 0.000167 0.0167
```
#### Discussion
To use `mutate()`, pass it a series of names followed by R expressions. `mutate()` will return a copy of your table that contains one column for each name that you pass to `mutate()`. The name of the column will be the name that you passed to `mutate()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a vector of the same length as the other columns in the data frame,[1](#fn1) because `mutate()` will add the vector as a new column.
In other words, `mutate()` is intended to be used with *vectorized functions*, which are functions that take a vector of values as input and return a new vector of values as output (e.g. `abs()`, `round()`, and all of R’s math operations).
`mutate()` will build the columns in the order that you define them. As a result, you may use a new column in the column definitions that follow it.
##### Dropping the original data
Use `transmute()` to return only the new columns that `mutate()` would create.
```
table1 %>%
transmute(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 2
## rate percent
## <dbl> <dbl>
## 1 0.0000373 0.00373
## 2 0.000129 0.0129
## 3 0.000219 0.0219
## 4 0.000461 0.0461
## 5 0.000167 0.0167
## 6 0.000167 0.0167
```
4\.16 Summarise data
--------------------
You want to compute summary statistics for the data in your data frame.
#### Solution
```
table1 %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## # A tibble: 1 x 2
## total_cases max_rate
## <int> <dbl>
## 1 547660 0.000461
```
#### Discussion
To use `summarise()`, pass it a series of names followed by R expressions. `summarise()` will return a new tibble that contains one column for each name that you pass to `summarise()`. The name of the column will be the name that you passed to `summarise()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a single value because `summarise()` will always return a 1 x n tibble.[2](#fn2)
In other words, `summarise()` is intended to be used with *summary functions*, which are functions that take a vector of values as input and return a single value as output (e.g. `sum()`, `max()`, `mean()`). In normal use, `summarise()` will pass each function a column (i.e. vector) of values and expect a single value in return.
`summarize()` is an alias for `summarise()`.
4\.17 Group data
----------------
You want to assign the rows of your data to subgroups based on their shared values or their shared combinations of values.
You want to do this as the first step in a multi\-step analysis, because grouping data doesn’t do anything noticeable until you pass the grouped data to a tidyverse function.
#### Solution
```
table1 %>%
group_by(country)
```
```
## # A tibble: 6 x 4
## # Groups: country [3]
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
`group_by()` converts your data into a grouped tibble, which is a tibble subclass that indicates in its attributes[3](#fn3) which rows belong to which group.
To group rows by the values of a single column, pass `group_by()` a single column name. To group rows by the unique *combination* of values across multiple columns, pass `group_by()` the names of two or more columns.
##### Group\-wise operations
Where appropriate, tidyverse functions recognize grouped tibbles. Tidyverse functions:
1. treat each group as a distinct data set
2. execute their code separately on each group
3. combine the results into a new data frame that contains the same grouping characteristics. `summarise()` is a slight exception, see below.
4\.18 Summarise data by groups
------------------------------
You want to compute summary statistics for different subgroups of data in your grouped data frame.
#### Solution
```
table1 %>%
group_by(country) %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 3
## country total_cases max_rate
## <chr> <int> <dbl>
## 1 Afghanistan 3411 0.000129
## 2 Brazil 118225 0.000461
## 3 China 426024 0.000167
```
#### Discussion
Group\-wise summaries are the most common use of grouped data. When you apply `summarise()` to grouped data, `summarise()` will:
1. treat each group as a distinct data set
2. compute separate statistics for each group
3. combine the results into a new tibble
Since this is easier to see than explain, you may want to study the diagram and result above.
`summarise()` gives grouped data special treatment in two ways:
1. `summarise()` will retain the column(s) that were used to group the data in its result. This makes the output of grouped summaries interpretable.
2. `summarise()` will shorten the grouping criteria of its result by one column name (the last column name). Compare this to other tidyverse functions which give their result the *same* grouping criteria as their input.
For example, if the input of `summarise()` is grouped by `country` and `year`, the output of `summarise()` will only be grouped by `country`. Because of this, you can call `summarise()` repeatedly to view progressively higher level summaries:
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## # A tibble: 6 x 3
## # Groups: country [3]
## country year total_cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 2
## country total_cases
## <chr> <int>
## 1 Afghanistan 3411
## 2 Brazil 118225
## 3 China 426024
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 1 x 1
## total_cases
## <int>
## 1 547660
```
4\.19 Nest a data frame
-----------------------
You want to move portions of your data frame into their own tables, and then store those tables in cells in your original data frame.
This lets you manipulate the collection of tables with `filter()`, `select()`, `arrange()`, and so on, as you would normally manipulate a collection of values.
#### Solution
```
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
```
## # A tibble: 3 x 2
## # Groups: Species [3]
## Species Measurements
## <fct> <list>
## 1 setosa <tibble [50 × 4]>
## 2 versicolor <tibble [50 × 4]>
## 3 virginica <tibble [50 × 4]>
```
#### Discussion
`nest()` comes in the tidyr package. You can use it to nest portions of your data frame in two ways:
1. Pass `nest()` a *grouped data frame* made with `dplyr::group_by()` (as above). `nest()` will create a separate table for each group. The table will contain every row in the group and every column that is not part of the grouping criteria.
2. Pass `nest()` an ungrouped data frame and then specify which columns to nest. `nest()` will perform an implicit grouping on the combination of values that appear across the remaining columns, and then create a separate table for each implied grouping.
You can specify columns with the same syntax and helpers that you would use with dplyr’s `select()` function. So, for example, the two calls below will produce the same result as the solution above.
```
iris %>%
nest(Sepal.Width,
Sepal.Length,
Petal.Width,
Petal.Length,
.key = "Measurements") %>%
as_tibble()
iris %>%
nest(-Species, .key = "Measurements") %>%
as_tibble()
```
`nest()` preserves class, which means that `nest()` will return a data frame if its input is a data frame and a tibble if its input is a tibble. In each case, `nest()` will add the subtables to the result as a list\-column. Since list\-columns are much easier to view in a tibble than in a data frame, I recommend that you convert the result of `nest()` to a tibble when necessary.
Use the `.key` argument to provide a name for the new list\-column.
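As the deprecation warning above notes, newer versions of tidyr (1.0 and later) drop the `.key` argument in favor of naming the list\-column directly in the call. A minimal sketch of the equivalent calls under that newer interface (the column name `Measurements` is simply the name used above):
```
# Name the new list-column directly (tidyr >= 1.0)
iris %>%
  nest(Measurements = c(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width))

# Or nest everything except Species
iris %>%
  nest(Measurements = -Species)
```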
4\.20 Extract a table from a nested data frame
----------------------------------------------
You want to extract a single table nested within a data frame.
For example, you want to extract the setosa table from the nested data set created in the solution above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
nested_iris %>%
filter(Species == "setosa") %>%
unnest()
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 50 x 5
## # Groups: Species [1]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 40 more rows
```
#### Discussion
Since the table is in a cell of the data frame, it is possible to extract the table by extracting the contents of the cell (as below). Be sure to use double brackets to extract the content itself, and not a 1 x 1 data frame that contains the cell.
```
nested_iris[[1, 2]]
```
However, this approach can be suboptimal for nested data. Often a nested table relies on information that is stored alongside it for meaning. For example, the table above relies on the Species variable to indicate that it describes setosa flowers.
You can extract a nested table along with its related information by first subsetting to just the relevant information and then unnesting the result.
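The `cols` warning above comes from newer tidyr, which asks you to name the list\-column being unnested explicitly. A minimal sketch of the same solution with `cols` supplied (assuming tidyr 1.0 or later):
```
nested_iris %>%
  filter(Species == "setosa") %>%
  unnest(cols = c(Measurements))
```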
4\.21 Unnest a data frame
-------------------------
You want to transform a nested data frame into a flat data frame by re\-integrating the nested tables into the surrounding data frame.
For example, you want to unnest the `nested_iris` data frame created in the recipe above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
unnest(nested_iris)
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 150 x 5
## # Groups: Species [3]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 140 more rows
```
#### Discussion
`unnest()` converts a list\-column into a regular column or columns, repeating the surrounding rows as necessary. `unnest()` can handle list columns that contain atomic vectors and data frames, but cannot handle list columns that contain objects that would not naturally fit into a data frame, such as model objects.
By default, `unnest()` will unnest every list\-column in a data frame. To unnest only a subset of list\-columns, pass the names of the list\-columns to `unnest()` using dplyr `select()` syntax or helpers. `unnest()` will ignore unnamed list columns, excluding them from the result to return a flat data frame. `unnest()` comes in the tidyr package.
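Here too, supplying `cols` explicitly satisfies newer tidyr and documents which list\-column is being flattened. A small sketch, which also shows that `select()`\-style helpers work inside `cols` (assuming tidyr 1.0 or later):
```
# Flatten only the Measurements list-column
nested_iris %>%
  unnest(cols = starts_with("Meas"))
```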
4\.22 Combine transform recipes
-------------------------------
You want to transform the structure of a table, and you want to use the data within the table to do it. This is the sort of thing you do every time you call `summarise()` or `mutate()`.
#### Solution
See the discussion.
#### Discussion
When you transform a data frame, you often work with several functions:
1. A function that transforms the *table* itself, adding columns to its structure, or building a whole new table to hold results.
2. A function that transforms the values held in the table. Since values are always held in vectors (here column vectors), this function transforms a *vector* to create a new vector that can then be added to the empty column or table created by function 1\.
3. A function that mediates between 1 and 2 if the values are embedded in a list\-column, which is a *list*.
The most concise way to learn how to combine these functions is to learn about the functions in isolation, and to then have the functions call each other as necessary.
##### An example
`smiths` contains measurements that describe two fictional people: John and Mary Smith.
```
smiths
```
```
## # A tibble: 2 x 5
## subject time age weight height
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87
## 2 Mary Smith 1 NA NA 1.54
```
To round the `height` value for each person, you would need two functions:
1. `round()` which can round the values of the data vector stored in `smiths$height`.
```
round(smiths$height)
```
```
## [1] 2 2
```
2. `mutate()` which can add the results to a copy of the `smiths` table.
```
smiths %>%
mutate(height_int = round(height))
```
```
## # A tibble: 2 x 6
## subject time age weight height height_int
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87 2
## 2 Mary Smith 1 NA NA 1.54 2
```
`round()` works with data vectors. `mutate()` works with tables. Together they create the table you want.
##### An example that uses list columns
`sepals` is a tibble that I made for this example.[4](#fn4) It contains sepal length measurements for three species of flowers. The measurements are stored in a list\-column named `lengths`.
```
sepals
```
```
## # A tibble: 3 x 2
## Species lengths
## <fct> <list>
## 1 setosa <dbl [50]>
## 2 versicolor <dbl [50]>
## 3 virginica <dbl [50]>
```
Each cell in `lengths` contains a data vector of 50 sepal lengths.
```
# For example, the first cell of lengths
sepals[[1, 2]]
```
```
## [[1]]
## [1] 5.1 4.9 4.7 4.6 5.0 5.4 4.6 5.0 4.4 4.9 5.4 4.8 4.8 4.3 5.8 5.7 5.4 5.1 5.7
## [20] 5.1 5.4 5.1 4.6 5.1 4.8 5.0 5.0 5.2 5.2 4.7 4.8 5.4 5.2 5.5 4.9 5.0 5.5 4.9
## [39] 4.4 5.1 5.0 4.5 4.4 5.0 5.1 4.8 5.1 4.6 5.3 5.0
```
To add the average sepal length for each species to the table, you would need three functions:
1. `mean()` which can compute the average of a data vector in `lengths`. (Note that extracting the cell with `[[`, as below, returns a length\-one list that wraps the vector, which is why `mean()` warns and returns `NA` here; `map_dbl()` in the next step applies `mean()` directly to each vector inside the list\-column.)
```
mean(sepals[[1, 2]])
```
```
## Warning in mean.default(sepals[[1, 2]]): argument is not numeric or logical:
## returning NA
```
```
## [1] NA
```
2. `map_dbl()` which can apply `mean()` to each cell of `lengths`, which is a list\-column.
```
map_dbl(sepals$lengths, mean)
```
```
## [1] 5.006 5.936 6.588
```
3. `mutate()` which can add the results to a copy of the `sepals` table.
```
sepals %>%
mutate(avg_length = map_dbl(lengths, mean))
```
```
## # A tibble: 3 x 3
## Species lengths avg_length
## <fct> <list> <dbl>
## 1 setosa <dbl [50]> 5.01
## 2 versicolor <dbl [50]> 5.94
## 3 virginica <dbl [50]> 6.59
```
`mean()` works with data vectors. `map_dbl()` works with list\-columns. `mutate()` works with tables. Together they create the table you want.
4\.23 Join data sets by common column(s)
----------------------------------------
You want to combine two data frames into a single data frame, such that observations in the first data frame are matched to the corresponding observations in the second data frame, even if those observations do not appear in the same order.
Your data is structured in such a way that you can match observations by the values of one or more ID columns that appear in both data frames.
For example, you would like to combine `band_members` and `band_instruments` into a single data frame based on the values of the `name` column. The new data frame will correctly list who plays what.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
left_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
#### Discussion
There are four ways to join content from one data frame to another. Each uses a different function name, but the same arguments and syntax. Of these, `left_join()` is the most common.
1. `left_join()` drops any row in the *second* data set that does not match a row in the first data set. (It retains every row in the first data set, which appears on the left when you type the function call, hence the name).
2. `right_join()` drops any row in the *first* data set that does not match a row in the second data set.
```
band_members %>%
right_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
## 3 Keith <NA> guitar
```
3. `inner_join()` drops any row in *either* data set that does not have a match in both data sets, i.e. `inner_join()` does not retain any incomplete rows in the final result.
```
band_members %>%
inner_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
```
4. `full_join()` retains every row from both data sets; it is the only join guaranteed to retain all of the original data.
```
band_members %>%
full_join(band_instruments, by = "name")
```
```
## # A tibble: 4 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
## 4 Keith <NA> guitar
```
##### Mutating joins
`left_join()`, `right_join()`, `inner_join()`, and `full_join()` are collectively called *mutating joins* because they add additional columns to a copy of a data set, as does `mutate()`.
##### Specifying column(s) to join on
By default, the mutating join functions will join on the set of columns whose names appear in both data frames. The join functions will match each row in the first data frame to the row in the second data frame that has the same combination of values across the commonly named columns.
To override the default, add a `by` argument to your join function. Pass it the name(s) of the column(s) to join on as a character vector. These names should appear in both data sets. R will join together rows that contain the same combination of values in these columns, ignoring the values in other columns, even if those columns share a name with a column in the other data frame.
```
table1 %>%
left_join(table3, by = c("country", "year"))
```
```
## # A tibble: 6 x 5
## country year cases population rate
## <chr> <int> <int> <int> <chr>
## 1 Afghanistan 1999 745 19987071 745/19987071
## 2 Afghanistan 2000 2666 20595360 2666/20595360
## 3 Brazil 1999 37737 172006362 37737/172006362
## 4 Brazil 2000 80488 174504898 80488/174504898
## 5 China 1999 212258 1272915272 212258/1272915272
## 6 China 2000 213766 1280428583 213766/1280428583
```
##### Joining when ID names do not match
Often an ID variable will appear with a different name in each data frame. For example, the `name` variable appears as `artist` in `band_instruments2`.
```
band_instruments2
```
```
## # A tibble: 3 x 2
## artist plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
To join by two columns that have different names, pass `by` a named character vector: each element of the vector should be a pair of names, e.g.
```
c(name1 = "name2", name3 = "name4")
```
For each element,
1. Write the name of the column that appears in the first data frame
2. Write an equals sign
3. Write the name of the matching column that appears in the second data set.
Only the second name needs to be surrounded with quotation marks. The join will match the corresponding columns across data frames.
```
band_members %>%
left_join(band_instruments2, by = c(name = "artist"))
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
R will use the column name(s) from the first data set in the result.
##### Suffixes
Joins will append the suffixes `.x` and `.y` to any columns that have the same name in both data sets *but are not used to join on*. The columns from the first data set are suffixed with `.x`, the columns from the second with `.y`.
```
table4a %>%
left_join(table4b, by = "country")
```
```
## # A tibble: 3 x 5
## country `1999.x` `2000.x` `1999.y` `2000.y`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
To use different suffixes, supply a character vector of length two as a `suffix` argument.
```
table4a %>%
left_join(table4b, by = "country", suffix = c("_cases", "_pop"))
```
```
## # A tibble: 3 x 5
## country `1999_cases` `2000_cases` `1999_pop` `2000_pop`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
4\.24 Find rows that have a match in another data set
-----------------------------------------------------
You want to find the rows in one data frame that have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
semi_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 2
## name band
## <chr> <chr>
## 1 John Beatles
## 2 Paul Beatles
```
#### Discussion
`semi_join()` returns only the rows of the first data frame that *have* a match in the second data frame. A match is a row in the second data frame that a [mutating join](transform-tables.html#joins) would combine with the row in question. This makes `semi_join()` a useful way to preview which rows will be retained by a mutating join.
`semi_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match).
##### Filtering joins
`semi_join()` and `anti_join()` (see below) are called *filtering joins* because they filter a data frame to only those rows that meet specific criteria, as does `filter()`.
Unlike mutating joins, filtering joins do not add columns from the second data frame to the first. Instead, they use the second data frame to identify rows to return from the first.
When you need to filter on a complicated set of conditions, a filtering join can be more effective than `filter()`: use `tribble()` to create a data frame of the combinations to keep, and then filter against it with a filtering join.
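For instance, a minimal sketch of that pattern; the `criteria` table and its values below are made up purely for illustration:
```
# A made-up table of the row combinations to keep
criteria <- tribble(
  ~band,     ~plays,
  "Beatles", "guitar",
  "Beatles", "bass"
)

band_members %>%
  left_join(band_instruments, by = "name") %>%
  semi_join(criteria, by = c("band", "plays"))
```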
4\.25 Find rows that do not have a match in another data set
------------------------------------------------------------
You want to find the rows in one data frame that *do not* have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that do not have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
anti_join(band_instruments, by = "name")
```
```
## # A tibble: 1 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
```
#### Discussion
`anti_join()` returns only the rows of the first data frame that *do not have* a match in the second data frame. A match is a row in the second data frame that a [mutating join](transform-tables.html#joins) would combine with the row in question. This makes `anti_join()` a useful way to debug a mutating join.
`anti_join()` provides a useful way to check for typos that could interfere with a mutating join; these rows will not have a match in the second data frame (assuming that the typo does not also appear in the second data frame).
`anti_join()` also highlights entries that are coded in different ways across data frames, such as `"North_Carolina"` and `"North Carolina"`.
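A tiny, purely hypothetical sketch of that kind of mismatch check; the `states_a` and `states_b` tables below are invented for illustration:
```
# Rows whose spelling of state has no match in the other table
states_a <- tibble(state = c("North Carolina", "Virginia"))
states_b <- tibble(state = c("North_Carolina", "Virginia"))

states_a %>%
  anti_join(states_b, by = "state")
```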
`anti_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match). Along with `semi_join()`, `anti_join()` is one of the two [Filtering joins](transform-tables.html#filtering-joins).
What you should know before you begin
-------------------------------------
The dplyr package provides the most important tidyverse functions for manipulating tables. These functions share some defaults that make it easy to transform tables:
1. dplyr functions always return a transformed *copy* of your table. They won’t change your original table unless you tell them to (by saving over the name of the original table). That’s good news, because you should always retain a clean copy of your original data in case something goes wrong.
2. You can refer to columns by name inside of a dplyr function. There’s no need for `$` syntax or `""`. Every dplyr function requires you to supply a data frame, and it will recognize the columns in that data frame, e.g.
`summarise(mpg, h = mean(hwy), c = mean(cty))`
This only becomes a problem if you’d like to use an object that has the same name as one of the columns in the data frame. In this case, place `!!` before the object’s name to unquote it; dplyr will then skip the data frame’s columns when looking up the object, e.g.
`hwy <- 1:10`
`summarise(mpg, h = mean(!!hwy), c = mean(cty))`
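In recent dplyr releases the same disambiguation is often written with the `.env` pronoun instead of `!!`; a small sketch, assuming dplyr 1.0 or later:
```
# .env$hwy refers to the external object, not the hwy column in mpg
hwy <- 1:10
summarise(mpg, h = mean(.env$hwy), c = mean(cty))
```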
Transforming a table sometimes requires more than one recipe. Why? Because tables are made of multiple data structures that work together:
1. The table itself is a data frame or tibble.
2. The columns of the table are vectors.
3. Some columns may be list\-columns, which are lists that contain vectors.
Each tidyverse function tends to focus on a single type of data structure; it is part of the tidyverse philosophy that each function should do one thing and do it well.
So to transform a table, begin with a recipe that transforms the structure of the table. You’ll find those recipes in this chapter. Then complete it with a recipe that transforms the actual data values in your table. The [Combine transform recipes](transform-tables.html#combine-transform-recipes) recipe will show you how.
4\.1 Arrange rows by value in ascending order
---------------------------------------------
You want to sort the rows of a data frame in **ascending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(displ)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 5 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 6 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 7 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 10 honda civic 1.8 2008 4 manual… f 26 34 r subco…
## # … with 224 more rows
```
#### Discussion
`arrange()` sorts the rows according to the values of the specified column, with the lowest values appearing near the top of the data frame. If you provide additional column names, `arrange()` will use the additional columns in order as tiebreakers to sort within rows that share the same value of the first column.
```
mpg %>%
arrange(displ, cty)
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 honda civic 1.6 1999 4 manual… f 23 29 p subco…
## 2 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 3 honda civic 1.6 1999 4 auto(l… f 24 32 r subco…
## 4 honda civic 1.6 1999 4 manual… f 25 32 r subco…
## 5 honda civic 1.6 1999 4 manual… f 28 33 r subco…
## 6 audi a4 qua… 1.8 1999 4 auto(l… 4 16 25 p compa…
## 7 audi a4 1.8 1999 4 auto(l… f 18 29 p compa…
## 8 audi a4 qua… 1.8 1999 4 manual… 4 18 26 p compa…
## 9 volkswagen passat 1.8 1999 4 auto(l… f 18 29 p midsi…
## 10 audi a4 1.8 1999 4 manual… f 21 29 p compa…
## # … with 224 more rows
```
4\.2 Arrange rows by value in descending order
----------------------------------------------
You want to sort the rows of a data frame in **descending** order by the values in one or more columns.
#### Solution
```
mpg %>%
arrange(desc(displ))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet c1500 su… 5.7 1999 8 auto(… r 13 17 r suv
## 10 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## # … with 224 more rows
```
#### Discussion
Place `desc()` around a column name to cause `arrange()` to sort by descending values of that column. You can use `desc()` for tie\-breaker columns as well (compare line nine below to the table above).
```
mpg %>%
arrange(desc(displ), desc(cty))
```
```
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 chevrolet corvette 7 2008 8 manua… r 15 24 p 2sea…
## 2 chevrolet k1500 ta… 6.5 1999 8 auto(… 4 14 17 d suv
## 3 chevrolet corvette 6.2 2008 8 manua… r 16 26 p 2sea…
## 4 chevrolet corvette 6.2 2008 8 auto(… r 15 25 p 2sea…
## 5 jeep grand ch… 6.1 2008 8 auto(… 4 11 14 p suv
## 6 chevrolet c1500 su… 6 2008 8 auto(… r 12 17 r suv
## 7 dodge durango … 5.9 1999 8 auto(… 4 11 15 r suv
## 8 dodge ram 1500… 5.9 1999 8 auto(… 4 11 15 r pick…
## 9 chevrolet corvette 5.7 1999 8 manua… r 16 26 p 2sea…
## 10 chevrolet corvette 5.7 1999 8 auto(… r 15 23 p 2sea…
## # … with 224 more rows
```
4\.3 Filter rows with a logical test
------------------------------------
You want to filter your table to just the rows that meet a specific condition.
#### Solution
```
mpg %>%
filter(model == "jetta")
```
```
## # A tibble: 9 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 volkswagen jetta 1.9 1999 4 manual(m5) f 33 44 d compa…
## 2 volkswagen jetta 2 1999 4 manual(m5) f 21 29 r compa…
## 3 volkswagen jetta 2 1999 4 auto(l4) f 19 26 r compa…
## 4 volkswagen jetta 2 2008 4 auto(s6) f 22 29 p compa…
## 5 volkswagen jetta 2 2008 4 manual(m6) f 21 29 p compa…
## 6 volkswagen jetta 2.5 2008 5 auto(s6) f 21 29 r compa…
## 7 volkswagen jetta 2.5 2008 5 manual(m5) f 21 29 r compa…
## 8 volkswagen jetta 2.8 1999 6 auto(l4) f 16 23 r compa…
## 9 volkswagen jetta 2.8 1999 6 manual(m5) f 17 24 r compa…
```
#### Discussion
`filter()` takes a logical test and returns the rows for which the logical test returns `TRUE`. `filter()` will match column names that appear within the logical test to columns in your data frame.
If you provide multiple logical tests, `filter()` will combine them with an AND operator (`&`):
```
mpg %>%
filter(model == "jetta", year == 1999)
```
Use R’s Boolean operators, like `|` and `!`, to create other combinations of logical tests to pass to `filter()`. See the help pages for `?Comparison` and `?Logic` to learn more about writing logical tests in R.
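For example, a quick sketch that combines tests with OR:
```
# Keep rows where the model is a jetta OR the car is from 2008
mpg %>%
  filter(model == "jetta" | year == 2008)
```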
4\.4 Select columns by name
---------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to return.
#### Solution
```
table1 %>%
select(country, year, cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
`select()` returns a new data frame that includes each column passed to `select()`. Repeat a name to include the column twice.
4\.5 Drop columns by name
-------------------------
You want to return a “subset” of columns from your data frame by listing the name of each column to drop.
#### Solution
```
table1 %>%
select(-c(population, year))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
#### Discussion
If you use a `-` before a column name, `select()` will return every column in the data frame except that column. To drop more than one column at a time, group the columns into a vector preceded by `-`.
4\.6 Select a range of columns
------------------------------
You want to return two columns from a data frame as well as every column that appears between them.
#### Solution
```
table1 %>%
select(country:cases)
```
```
## # A tibble: 6 x 3
## country year cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
#### Discussion
If you combine two column names with a `:`, `select()` will return both columns and every column that appears between them in the data frame.
4\.7 Select columns by integer position
---------------------------------------
You want to return a “subset” of columns from your data frame by listing the position of each column to return.
#### Solution
```
table1 %>%
select(1, 2, 4)
```
```
## # A tibble: 6 x 3
## country year population
## <chr> <int> <int>
## 1 Afghanistan 1999 19987071
## 2 Afghanistan 2000 20595360
## 3 Brazil 1999 172006362
## 4 Brazil 2000 174504898
## 5 China 1999 1272915272
## 6 China 2000 1280428583
```
#### Discussion
`select()` interprets the whole number *n* as the *n*th column in the data set. You can combine numbers with `-` and `:` inside of `select()` as well.
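For example, a quick sketch of both forms:
```
# Drop the third column by position, or select a contiguous range of positions
table1 %>% select(-3)
table1 %>% select(1:3)
```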
4\.8 Select columns by start of name
------------------------------------
You want to return every column in your data that begins with a specific string.
#### Solution
```
table1 %>%
select(starts_with("c"))
```
```
## # A tibble: 6 x 2
## country cases
## <chr> <int>
## 1 Afghanistan 745
## 2 Afghanistan 2666
## 3 Brazil 37737
## 4 Brazil 80488
## 5 China 212258
## 6 China 213766
```
4\.9 Select columns by end of name
----------------------------------
You want to return every column in your data that ends with a specific string.
#### Solution
```
table1 %>%
select(ends_with("tion"))
```
```
## # A tibble: 6 x 1
## population
## <int>
## 1 19987071
## 2 20595360
## 3 172006362
## 4 174504898
## 5 1272915272
## 6 1280428583
```
4\.10 Select columns by string in name
--------------------------------------
You want to return every column in your data whose name contains a specific string or regular expression.
#### Solution
```
table1 %>%
select(matches("o.*u"))
```
```
## # A tibble: 6 x 2
## country population
## <chr> <int>
## 1 Afghanistan 19987071
## 2 Afghanistan 20595360
## 3 Brazil 172006362
## 4 Brazil 174504898
## 5 China 1272915272
## 6 China 1280428583
```
#### Discussion
`o.*u` is a regular expression that matches an `o` followed by a `u` with any number of characters in between. `country` and `population` are returned because the names `country` and `population` each contain an `o` followed (at any distance) by a `u`.
See the help page for `?regex` to learn more about regular expressions in R.
4\.11 Reorder columns
---------------------
You want to return all of the columns in the original data frame in a new order.
#### Solution
```
table1 %>%
select(country, year, population, cases)
```
```
## # A tibble: 6 x 4
## country year population cases
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 19987071 745
## 2 Afghanistan 2000 20595360 2666
## 3 Brazil 1999 172006362 37737
## 4 Brazil 2000 174504898 80488
## 5 China 1999 1272915272 212258
## 6 China 2000 1280428583 213766
```
#### Discussion
Use `select()` to select all of the columns. List the column names in the new order.
4\.12 Reorder columns without naming each
-----------------------------------------
You want to reorder some of the columns in the original data frame, but you don’t care about the order for other columns, and you may have too many columns to name them each individually.
#### Solution
```
table1 %>%
select(country, year, everything())
```
```
## # A tibble: 6 x 4
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
Use `everything()` within `select()` to select all of the columns in the order they are named: all columns are kept, and no columns are duplicated. `everything()` preserves the original order of the remaining (unnamed) columns.
4\.13 Rename columns
--------------------
You want to rename one or more columns in your data frame, retaining the rest.
#### Solution
```
table1 %>%
rename(state = country, date = year)
```
```
## # A tibble: 6 x 4
## state date cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
For each column to be renamed, type a new name for the column and set it equal to the old name for the column.
4\.14 Return the contents of a column as a vector
-------------------------------------------------
You want to return the contents of a single column as a vector, not as a data frame with one column.
#### Solution
```
table1 %>%
pull(cases)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
#### Discussion
`pull()` comes in the dplyr package. It does the equivalent of `pluck()` in the purrr package; however, `pull()` is designed to work specifically with data frames. `pluck()` is designed to work with all types of lists.
You can also pull a column by integer position:
```
table1 %>%
pull(3)
```
```
## [1] 745 2666 37737 80488 212258 213766
```
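`pull()` also accepts negative positions, counted from the right; a small sketch:
```
# The last column of table1
table1 %>%
  pull(-1)
```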
4\.15 Mutate data (Add new variables)
-------------------------------------
You want to compute one or more new variables and add them to your table as columns.
#### Solution
```
table1 %>%
mutate(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 6
## country year cases population rate percent
## <chr> <int> <int> <int> <dbl> <dbl>
## 1 Afghanistan 1999 745 19987071 0.0000373 0.00373
## 2 Afghanistan 2000 2666 20595360 0.000129 0.0129
## 3 Brazil 1999 37737 172006362 0.000219 0.0219
## 4 Brazil 2000 80488 174504898 0.000461 0.0461
## 5 China 1999 212258 1272915272 0.000167 0.0167
## 6 China 2000 213766 1280428583 0.000167 0.0167
```
#### Discussion
To use `mutate()`, pass it a series of names followed by R expressions. `mutate()` will return a copy of your table that contains one column for each name that you pass to `mutate()`. The name of the column will be the name that you passed to `mutate()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a vector of the same length as the other columns in the data frame,[1](#fn1) because `mutate()` will add the vector as a new column.
In other words, `mutate()` is intended to be used with *vectorized functions*, which are functions that take a vector of values as input and return a new vector of values as output (e.g `abs()`, `round()`, and all of R’s math operations).
`mutate()` will build the columns in the order that you define them. As a result, you may use a new column in the column definitions that follow it.
##### Dropping the original data
Use `transmute()` to return only the new columns that `mutate()` would create.
```
table1 %>%
transmute(rate = cases/population, percent = rate * 100)
```
```
## # A tibble: 6 x 2
## rate percent
## <dbl> <dbl>
## 1 0.0000373 0.00373
## 2 0.000129 0.0129
## 3 0.000219 0.0219
## 4 0.000461 0.0461
## 5 0.000167 0.0167
## 6 0.000167 0.0167
```
4\.16 Summarise data
--------------------
You want to compute summary statistics for the data in your data frame.
#### Solution
```
table1 %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## # A tibble: 1 x 2
## total_cases max_rate
## <int> <dbl>
## 1 547660 0.000461
```
#### Discussion
To use `summarise()`, pass it a series of names followed by R expressions. `summarise()` will return a new tibble that contains one column for each name that you pass to `summarise()`. The name of the column will be the name that you passed to `summarise()`; the contents of the column will be the result of the R expression that you assigned to the name. The R expression should always return a single value because `summarise()` will always return a 1 x n tibble.[2](#fn2)
In other words, `summarise()` is intended to be used with *summary functions*, which are functions that take a vector of values as input and return a single value as output (e.g `sum()`, `max()`, `mean()`). In normal use, `summarise()` will pass each function a column (i.e. vector) of values and expect a single value in return.
`summarize()` is an alias for `summarise()`.
4\.17 Group data
----------------
You want to assign the rows of your data to subgroups based on their shared values or their shared combinations of values.
You want to do this as the first step in a multi\-step analysis, because grouping data doesn’t do anything noticeable until you pass the grouped data to a tidyverse function.
#### Solution
```
table1 %>%
group_by(country)
```
```
## # A tibble: 6 x 4
## # Groups: country [3]
## country year cases population
## <chr> <int> <int> <int>
## 1 Afghanistan 1999 745 19987071
## 2 Afghanistan 2000 2666 20595360
## 3 Brazil 1999 37737 172006362
## 4 Brazil 2000 80488 174504898
## 5 China 1999 212258 1272915272
## 6 China 2000 213766 1280428583
```
#### Discussion
`group_by()` converts your data into a grouped tibble, which is a tibble subclass that indicates in its attributes[3](#fn3) which rows belong to which group.
To group rows by the values of a single column, pass `group_by()` a single column name. To group rows by the unique *combination* of values across multiple columns, pass `group_by()` the names of two or more columns.
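For example, a quick sketch of grouping by a combination of columns:
```
# Group rows by the combination of country and year
table1 %>%
  group_by(country, year)
```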
##### Group\-wise operations
Where appropriate, tidyverse functions recognize grouped tibbles. Tidyverse functions:
1. treat each group as a distinct data set
2. execute their code separately on each group
3. combine the results into a new data frame that contains the same grouping characteristics. `summarise()` is a slight exception, see below.
4\.18 Summarise data by groups
------------------------------
You want to compute summary statistics for different subgroups of data in your grouped data frame.
#### Solution
```
table1 %>%
group_by(country) %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 3
## country total_cases max_rate
## <chr> <int> <dbl>
## 1 Afghanistan 3411 0.000129
## 2 Brazil 118225 0.000461
## 3 China 426024 0.000167
```
#### Discussion
Group\-wise summaries are the most common use of grouped data. When you apply `summarise()` to grouped data, `summarise()` will:
1. treat each group as a distinct data set
2. compute separate statistics for each group
3. combine the results into a new tibble
Since this is easier to see than explain, you may want to study the diagram and result above.
`summarise()` gives grouped data special treatment in two ways:
1. `summarise()` will retain the column(s) that were used to group the data in its result. This makes the output of grouped summaries interpretable.
2. `summarise()` will shorten the grouping criteria of its result by one column name (the last column name). Compare this to other tidyverse functions which give their result the *same* grouping criteria as their input.
For example, if the input of `summarise()` is grouped by `country` and `year`, the output of `summarise()` will only be grouped by `country`. Because of this, you can call `summarise()` repeatedly to view progressively higher level summaries:
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## # A tibble: 6 x 3
## # Groups: country [3]
## country year total_cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 2
## country total_cases
## <chr> <int>
## 1 Afghanistan 3411
## 2 Brazil 118225
## 3 China 426024
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 1 x 1
## total_cases
## <int>
## 1 547660
```
#### Solution
```
table1 %>%
group_by(country) %>%
summarise(total_cases = sum(cases), max_rate = max(cases/population))
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 3
## country total_cases max_rate
## <chr> <int> <dbl>
## 1 Afghanistan 3411 0.000129
## 2 Brazil 118225 0.000461
## 3 China 426024 0.000167
```
#### Discussion
Group\-wise summaries are the most common use of grouped data. When you apply `summarise()` to grouped data, `summarise()` will:
1. treat each group as a distinct data set
2. compute separate statistics for each group
3. combine the results into a new tibble
Since this is easier to see than explain, you may want to study the diagram and result above.
`summarise()` gives grouped data special treatment in two ways:
1. `summarise()` will retain the column(s) that were used to group the data in its result. This makes the output of grouped summaries interpretable.
2. `summarise()` will shorten the grouping criteria of its result by one column name (the last column name). Compare this to other tidyverse functions which give their result the *same* grouping criteria as their input.
For example, if the input of `summarise()` is grouped by `country` and `year`, the output of `summarise()` will only be grouped by `country`. Because of this, you can call `summarise()` repeatedly to view progressively higher level summaries:
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## # A tibble: 6 x 3
## # Groups: country [3]
## country year total_cases
## <chr> <int> <int>
## 1 Afghanistan 1999 745
## 2 Afghanistan 2000 2666
## 3 Brazil 1999 37737
## 4 Brazil 2000 80488
## 5 China 1999 212258
## 6 China 2000 213766
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 3 x 2
## country total_cases
## <chr> <int>
## 1 Afghanistan 3411
## 2 Brazil 118225
## 3 China 426024
```
```
table1 %>%
group_by(country, year) %>%
summarise(total_cases = sum(cases)) %>%
summarise(total_cases = sum(total_cases)) %>%
summarise(total_cases = sum(total_cases))
```
```
## `summarise()` regrouping output by 'country' (override with `.groups` argument)
```
```
## `summarise()` ungrouping output (override with `.groups` argument)
```
```
## # A tibble: 1 x 1
## total_cases
## <int>
## 1 547660
```
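The messages above point to the `.groups` argument, which controls this behaviour directly. A minimal sketch, assuming dplyr 1.0 or later: `.groups = "drop"` returns an ungrouped result (and silences the message), while `"drop_last"` reproduces the default peeling behaviour described above.
```
table1 %>%
  group_by(country, year) %>%
  summarise(total_cases = sum(cases), .groups = "drop")
```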
4\.19 Nest a data frame
-----------------------
You want to move portions of your data frame into their own tables, and then store those tables in cells in your original data frame.
This lets you manipulate the collection of tables with `filter()`, `select()`, `arrange()`, and so on, as you would normally manipulate a collection of values.
#### Solution
```
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
```
## # A tibble: 3 x 2
## # Groups: Species [3]
## Species Measurements
## <fct> <list>
## 1 setosa <tibble [50 × 4]>
## 2 versicolor <tibble [50 × 4]>
## 3 virginica <tibble [50 × 4]>
```
#### Discussion
`nest()` comes in the tidyr package. You can use it to nest portions of your data frame in two ways:
1. Pass `nest()` a *grouped data frame* made with `dplyr::group_by()` (as above). `nest()` will create a separate table for each group. The table will contain every row in the group and every column that is not part of the grouping criteria.
2. Pass `nest()` an ungrouped data frame and then specify which columns to nest. `nest()` will perform an implicit grouping on the combination of values that appear across the remaining columns, and then create a separate table for each implied grouping.
You can specify columns with the same syntax and helpers that you would use with dplyr’s `select()` function. So, for example, the two calls below will produce the same result as the solution above.
```
iris %>%
nest(Sepal.Width,
Sepal.Length,
Petal.Width,
Petal.Length,
.key = "Measurements") %>%
as_tibble()
iris %>%
nest(-Species, .key = "Measurements") %>%
as_tibble()
```
`nest()` preserves class, which means that `nest()` will return a data frame if its input is a data frame and a tibble if its input is a tibble. In each case, `nest()` will add the subtables to the result as a list\-column. Since list\-columns are much easier to view in a tibble than in a data frame, I recommend that you convert the result of `nest()` to a tibble when necessary.
Use the `.key` argument to provide a name for the new list\-column.
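Note that the warnings above show `.key` is deprecated in current tidyr. A sketch of the equivalent call for tidyr 1.0 or later, which names the list\-column directly and mirrors the column selection used above:
```
iris %>%
  nest(Measurements = c(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width))
```
This implicitly groups by the remaining column (`Species`), as in the second approach described above.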
4\.20 Extract a table from a nested data frame
----------------------------------------------
You want to extract a single table nested within a data frame.
For example, you want to extract the setosa table from the nested data set created in the solution above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
nested_iris %>%
filter(Species == "setosa") %>%
unnest()
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 50 x 5
## # Groups: Species [1]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 40 more rows
```
#### Discussion
Since the table is in a cell of the data frame, it is possible to extract the table by extracting the contents of the cell (as below). Be sure to use double brackets to extract the content itself, and not a 1 x 1 data frame that contains the cell.
```
nested_iris[[1, 2]]
```
However, this approach can be suboptimal for nested data. Often a nested table relies on information that is stored alongside it for meaning. In the example above, the table relies on the `Species` variable to indicate that it describes setosa flowers.
You can extract a nested table along with its related information by first subsetting to just the relevant information and then unnesting the result.
4\.21 Unnest a data frame
-------------------------
You want to transform a nested data frame into a flat data frame by re\-integrating the nested tables into the surrounding data frame.
For example, you want to unnest the `nested_iris` data frame created in the recipe above,
```
nested_iris <-
iris %>%
group_by(Species) %>%
nest(.key = "Measurements")
```
```
## Warning: `.key` is deprecated
```
#### Solution
```
unnest(nested_iris)
```
```
## Warning: `cols` is now required when using unnest().
## Please use `cols = c(Measurements)`
```
```
## # A tibble: 150 x 5
## # Groups: Species [3]
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.1 3.5 1.4 0.2
## 2 setosa 4.9 3 1.4 0.2
## 3 setosa 4.7 3.2 1.3 0.2
## 4 setosa 4.6 3.1 1.5 0.2
## 5 setosa 5 3.6 1.4 0.2
## 6 setosa 5.4 3.9 1.7 0.4
## 7 setosa 4.6 3.4 1.4 0.3
## 8 setosa 5 3.4 1.5 0.2
## 9 setosa 4.4 2.9 1.4 0.2
## 10 setosa 4.9 3.1 1.5 0.1
## # … with 140 more rows
```
#### Discussion
`unnest()` converts a list\-column into a regular column or columns, repeating the surrounding rows as necessary. `unnest()` can handle list columns that contain atomic vectors and data frames, but cannot handle list columns that contain objects that would not naturally fit into a data frame, such as model objects.
By default, `unnest()` will unnest every list\-column in a data frame. To unnest only a subset of list\-columns, pass the names of the list\-columns to `unnest()` using dplyr `select()` syntax or helpers. `unnest()` will ignore unnamed list columns, excluding them from the result to return a flat data frame. `unnest()` comes in the tidyr package.
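As the warning above suggests, current tidyr wants the list\-columns named explicitly via `cols`. A sketch that follows the warning's own advice:
```
nested_iris %>%
  unnest(cols = c(Measurements))
```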
4\.22 Combine transform recipes
-------------------------------
You want to transform the structure of a table, and you want to use the data within the table to do it. This is the sort of thing you do every time you call `summarise()` or `mutate()`.
#### Solution
See the discussion.
#### Discussion
When you transform a data frame, you often work with several functions:
1. A function that transforms the *table* itself, adding columns to its structure, or building a whole new table to hold results.
2. A function that transforms the values held in the table. Since values are always held in vectors (here column vectors), this function transforms a *vector* to create a new vector that can then be added to the empty column or table created by function 1\.
3. A function that mediates between 1 and 2 if the values are embedded in a list\-column, which is a *list*.
The most concise way to learn how to combine these functions is to learn about the functions in isolation, and then to have the functions call each other as necessary.
##### An example
`smiths` contains measurements that describe two fictional people: John and Mary Smith.
```
smiths
```
```
## # A tibble: 2 x 5
## subject time age weight height
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87
## 2 Mary Smith 1 NA NA 1.54
```
To round the `height` value for each person, you would need two functions:
1. `round()` which can round the values of the data vector stored in `smiths$height`.
```
round(smiths$height)
```
```
## [1] 2 2
```
2. `mutate()` which can add the results to a copy of the `smiths` table.
```
smiths %>%
mutate(height_int = round(height))
```
```
## # A tibble: 2 x 6
## subject time age weight height height_int
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 John Smith 1 33 90 1.87 2
## 2 Mary Smith 1 NA NA 1.54 2
```
`round()` works with data vectors. `mutate()` works with tables. Together they create the table you want.
##### An example that uses list columns
`sepals` is a tibble that I made for this example.[4](#fn4) It contains sepal length measurements for three species of flowers. The measurements are stored in a list\-column named `lengths`.
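The code that builds `sepals` is not shown; a minimal sketch of one way to construct such a tibble from `iris` (my illustration, not necessarily the construction used here):
```
sepals <- iris %>%
  group_by(Species) %>%
  summarise(lengths = list(Sepal.Length))  # one list element of 50 lengths per species
```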
```
sepals
```
```
## # A tibble: 3 x 2
## Species lengths
## <fct> <list>
## 1 setosa <dbl [50]>
## 2 versicolor <dbl [50]>
## 3 virginica <dbl [50]>
```
Each cell in `lengths` contains a data vector of 50 sepal lengths.
```
# For example, the first cell of lengths
sepals[[1, 2]]
```
```
## [[1]]
## [1] 5.1 4.9 4.7 4.6 5.0 5.4 4.6 5.0 4.4 4.9 5.4 4.8 4.8 4.3 5.8 5.7 5.4 5.1 5.7
## [20] 5.1 5.4 5.1 4.6 5.1 4.8 5.0 5.0 5.2 5.2 4.7 4.8 5.4 5.2 5.5 4.9 5.0 5.5 4.9
## [39] 4.4 5.1 5.0 4.5 4.4 5.0 5.1 4.8 5.1 4.6 5.3 5.0
```
To add the average sepal length for each species to the table, you would need three functions:
1. `mean()` which can compute the average of a data vector in `lengths`. (The call below returns `NA` with a warning because `sepals[[1, 2]]` returns the vector wrapped in a length\-one list; `map_dbl()` in the next step passes each bare vector from `sepals$lengths` to `mean()`, so it succeeds.)
```
mean(sepals[[1, 2]])
```
```
## Warning in mean.default(sepals[[1, 2]]): argument is not numeric or logical:
## returning NA
```
```
## [1] NA
```
2. `map_dbl()` which can apply `mean()` to each cell of `lengths`, which is a list\-column.
```
map_dbl(sepals$lengths, mean)
```
```
## [1] 5.006 5.936 6.588
```
3. `mutate()` which can add the results to a copy of the `sepals` table.
```
sepals %>%
mutate(avg_length = map_dbl(lengths, mean))
```
```
## # A tibble: 3 x 3
## Species lengths avg_length
## <fct> <list> <dbl>
## 1 setosa <dbl [50]> 5.01
## 2 versicolor <dbl [50]> 5.94
## 3 virginica <dbl [50]> 6.59
```
`mean()` works with data vectors. `map_dbl()` works with list\-columns. `mutate()` works with tables. Together they create the table you want.
4\.23 Join data sets by common column(s)
----------------------------------------
You want to combine two data frames into a single data frame, such that observations in the first data frame are matched to the corresponding observations in the second data frame, even if those observations do not appear in the same order.
Your data is structured in such a way that you can match observations by the values of one or more ID columns that appear in both data frames.
For example, you would like to combine `band_members` and `band_instruments` into a single data frame based on the values of the `name` column. The new data frame will correctly list who plays what.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
left_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
#### Discussion
There are four ways to join content from one data frame to another. Each uses a different function name, but the same arguments and syntax. Of these, `left_join()` is the most common.
1. `left_join()` drops any row in the *second* data set that does not match a row in the first data set. (It retains every row in the first data set, which appears on the left when you type the function call, hence the name).
2. `right_join()` drops any row in the *first* data set that does not match a row in the *second* data set.
```
band_members %>%
right_join(band_instruments, by = "name")
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
## 3 Keith <NA> guitar
```
3. `inner_join()` drops any row in *either* data set that does not have a match in both data sets, i.e. `inner_join()` does not retain any incomplete rows in the final result.
```
band_members %>%
inner_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 3
## name band plays
## <chr> <chr> <chr>
## 1 John Beatles guitar
## 2 Paul Beatles bass
```
4. `full_join()` retains every row from both data sets; it is the only join guaranteed to retain all of the original data.
```
band_members %>%
full_join(band_instruments, by = "name")
```
```
## # A tibble: 4 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
## 4 Keith <NA> guitar
```
##### Mutating joins
`left_join()`, `right_join()`, `inner_join()`, and `full_join()` are collectively called *mutating joins* because they add additional columns to a copy of a data set, as does `mutate()`.
##### Specifying column(s) to join on
By default, the mutating join functions will join on the set of columns whose names appear in both data frames. The join functions will match each row in the first data frame to the row in the second data frame that has the same combination of values across the commonly named columns.
To override the default, add a `by` argument to your join function. Pass it the name(s) of the column(s) to join on as a character vector. These names should appear in both data sets. R will join together rows that contain the same combination of values in these columns, ignoring the values in other columns, even if those columns share a name with a column in the other data frame.
```
table1 %>%
left_join(table3, by = c("country", "year"))
```
```
## # A tibble: 6 x 5
## country year cases population rate
## <chr> <int> <int> <int> <chr>
## 1 Afghanistan 1999 745 19987071 745/19987071
## 2 Afghanistan 2000 2666 20595360 2666/20595360
## 3 Brazil 1999 37737 172006362 37737/172006362
## 4 Brazil 2000 80488 174504898 80488/174504898
## 5 China 1999 212258 1272915272 212258/1272915272
## 6 China 2000 213766 1280428583 213766/1280428583
```
##### Joining when ID names do not match
Often an ID variable will appear with a different name in each data frame. For example, the `name` variable appears as `artist` in `band_instruments2`.
```
band_instruments2
```
```
## # A tibble: 3 x 2
## artist plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
To join by two columns that have different names, pass `by` a named character vector: each element of the vector should be a pair of names, e.g.
```
c(name1 = "name2", name3 = "name4")
```
For each element,
1. Write the name of the column that appears in the first data frame
2. Write an equals sign
3. Write the name of the matching column that appears in the second data set.
Only the second name needs to be surrounded with quotation marks. The join will match the corresponding columns across data frames.
```
band_members %>%
left_join(band_instruments2, by = c(name = "artist"))
```
```
## # A tibble: 3 x 3
## name band plays
## <chr> <chr> <chr>
## 1 Mick Stones <NA>
## 2 John Beatles guitar
## 3 Paul Beatles bass
```
R will use the column name(s) from the first data set in the result.
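If you use dplyr 1.1.0 or later, the same match can also be written with `join_by()`, which avoids quoting the second name. A sketch:
```
band_members %>%
  left_join(band_instruments2, by = join_by(name == artist))
```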
##### Suffixes
Joins will append the suffixes `.x` and `.y` to any columns that have the same name in both data sets *but are not used to join on*. The columns from the first data set are suffixed with `.x`, the columns from the second with `.y`.
```
table4a %>%
left_join(table4b, by = "country")
```
```
## # A tibble: 3 x 5
## country `1999.x` `2000.x` `1999.y` `2000.y`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
To use different suffixes, supply a character vector of length two as a `suffix` argument.
```
table4a %>%
left_join(table4b, by = "country", suffix = c("_cases", "_pop"))
```
```
## # A tibble: 3 x 5
## country `1999_cases` `2000_cases` `1999_pop` `2000_pop`
## <chr> <int> <int> <int> <int>
## 1 Afghanistan 745 2666 19987071 20595360
## 2 Brazil 37737 80488 172006362 174504898
## 3 China 212258 213766 1272915272 1280428583
```
4\.24 Find rows that have a match in another data set
-----------------------------------------------------
You want to find the rows in one data frame that have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
semi_join(band_instruments, by = "name")
```
```
## # A tibble: 2 x 2
## name band
## <chr> <chr>
## 1 John Beatles
## 2 Paul Beatles
```
#### Discussion
`semi_join()` returns only the rows of the first data frame that *have* a match in the second data frame. A match is a row in the second data frame that a [mutating join](transform-tables.html#joins) would combine with the row in question. This makes `semi_join()` a useful way to preview which rows will be retained by a mutating join.
`semi_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match).
##### Filtering joins
`semi_join()` and `anti_join()` (see below) are called *filtering joins* because they filter a data frame to only those rows that meet a specific criterion, as does `filter()`.
Unlike mutating joins, filtering joins do not add columns from the second data frame to the first. Instead, they use the second data frame to identify rows to return from the first.
When you need to filter on a complicated set of conditions, filtering joins can be more effective than `filter()`: use `tribble()` to create a data frame to filter against with a filtering join.
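A minimal sketch of that pattern, reusing `table1` from the earlier recipes (the country/year pairs below are only for illustration):
```
keep_these <- tribble(
  ~country,      ~year,
  "Afghanistan",  1999,
  "Brazil",       2000
)

table1 %>%
  semi_join(keep_these, by = c("country", "year"))
```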
4\.25 Find rows that do not have a match in another data set
------------------------------------------------------------
You want to find the rows in one data frame that *do not* have a match in a second data frame. By match, you mean that both rows refer to the same observation, even if they include different measurements.
Your data is structured in such a way that you can match rows by the values of one or more ID columns that appear in both data frames.
For example, you would like to return the rows of `band_members` that do not have a corresponding row in `band_instruments`. The new data frame will be a reduced version of `band_members` that does not contain any new columns.
```
band_members
```
```
## # A tibble: 3 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
## 2 John Beatles
## 3 Paul Beatles
```
```
band_instruments
```
```
## # A tibble: 3 x 2
## name plays
## <chr> <chr>
## 1 John guitar
## 2 Paul bass
## 3 Keith guitar
```
#### Solution
```
band_members %>%
anti_join(band_instruments, by = "name")
```
```
## # A tibble: 1 x 2
## name band
## <chr> <chr>
## 1 Mick Stones
```
#### Discussion
`anti_join()` returns only the rows of the first data frame that *do not have* a match in the second data frame. A match is a row in the second data frame that a [mutating join](transform-tables.html#joins) would combine with the row in question. This makes `anti_join()` a useful way to debug a mutating join.
`anti_join()` provides a useful way to check for typos that could interfere with a mutating join; these rows will not have a match in the second data frame (assuming that the typo does not also appear in the second data frame).
`anti_join()` also highlights entries that are coded in different ways across data frames, such as `"North_Carolina"` and `"North Carolina"`.
`anti_join()` uses the same syntax as mutating joins. Learn more in [Specifying column(s) to join on](transform-tables.html#specifying-columns-to-join-on) and [Joining when ID names do not match](transform-tables.html#joining-when-id-names-do-not-match). Along with `semi_join()`, `anti_join()` is one of the two [Filtering joins](transform-tables.html#filtering-joins).
5 Transform Lists and Vectors
=============================
---
This chapter includes the following recipes:
1. [Extract an element from a list](transform-lists-and-vectors.html#extract)
2. [Determine the type of a vector](transform-lists-and-vectors.html#determine-the-type-of-a-vector)
3. [Map a function to each element of a vector](transform-lists-and-vectors.html#map)
---
What you should know before you begin
-------------------------------------
A vector is a one dimensional array of elements. Vectors are the basic building blocks of R. Almost all data in R is stored in a vector, or even a vector of vectors.
A list is a *recursive vector*: a vector that can contain another vector or list in each of its elements.
Lists are one of the most flexible data structures in R. As a result, they are used as a general purpose glue to hold objects together. You will find lists disguised as model objects, data frames, list\-columns within data frames, and more.
Data frames are a sub\-type of list. Each column of a data frame is an element of the list that the data frame is built around.
More than any other part of R, lists demonstrate how a programming language can appear different to beginners than to experts. Seasoned R programmers do not distinguish between lists and vectors because the two are equivalent: a list is a type of vector.
However, a beginner who uses R for data science will quickly see that lists behave differently than other types of vectors. First, many R functions will not accept lists as input, even though they accept other types of vectors. Second, when you subset a list with `[ ]` to extract the value of one of its elements, R will give you a new list of length one that contains the value as its first element. This poses a problem if you want to pass that value to a function that does not accept lists as input (solve that problem with [this recipe](transform-lists-and-vectors.html#extract)).
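A quick illustration of that second point, using a throwaway list:
```
x <- list(a = 1:3, b = letters)

x["a"]          # a list of length one that contains 1:3
x[["a"]]        # the integer vector 1:3 itself
mean(x[["a"]])  # 2; mean(x["a"]) would warn and return NA
```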
To respect this difference, I try to be clear when talking about vectors that are not lists. This introduces a new problem: when you speak about R, it is difficult to distinguish between vectors that are lists and vectors that are not lists. Whenever the difference matters, I’ll call the first set of vectors **lists** and the second set of vectors **data vectors**[5](#fn5). I’ll refer to the superset that includes both lists and data vectors as vectors.
Data vectors come in six atomic *types*: double, integer, logical, character, complex, and raw. Every element in a data vector must be the same type of data as the vector (if you try to put a different type of data into an atomic vector, R will coerce the data’s type to match the vector). R also contains an S3 class system that builds classes like factors and date\-times on top of the atomic types. You don’t need to understand R’s types and classes to use R or this cookbook, but you should know that R will recognize different types of data and treat them accordingly.
This chapter focuses on both lists and data vectors, but it only features recipes that work with the *structure* of a list or data vector. The chapters that follow will contain recipes that work with the *types* of data stored in a data vector.
5\.1 Extract an element from a list
-----------------------------------
You want to return the value of an element of a list as it is, perhaps to use in a function. You do not want the value to come embedded in a list of length one.
#### Solution
```
# returns the element named x in state.center
state.center %>%
pluck("x")
```
```
## [1] -86.7509 -127.2500 -111.6250 -92.2992 -119.7730 -105.5130 -72.3573
## [8] -74.9841 -81.6850 -83.3736 -126.2500 -113.9300 -89.3776 -86.0808
## [15] -93.3714 -98.1156 -84.7674 -92.2724 -68.9801 -76.6459 -71.5800
## [22] -84.6870 -94.6043 -89.8065 -92.5137 -109.3200 -99.5898 -116.8510
## [29] -71.3924 -74.2336 -105.9420 -75.1449 -78.4686 -100.0990 -82.5963
## [36] -97.1239 -120.0680 -77.4500 -71.1244 -80.5056 -99.7238 -86.4560
## [43] -98.7857 -111.3300 -72.5450 -78.2005 -119.7460 -80.6665 -89.9941
## [50] -107.2560
```
#### Discussion
`pluck()` comes in the purrr package and does the equivalent of `[[` subsetting. If you pass `pluck()` a character string, `pluck()` will return the element whose name matches the string. If you pass `pluck()` an integer *n*, `pluck()` will return the *nth* element of the list.
Pass multiple arguments to `pluck()` to subset multiple times. `pluck()` will subset the result of each argument with the argument that follows, e.g.
```
library(repurrrsive)
sw_films %>%
pluck(7, "title")
```
```
## [1] "The Force Awakens"
```
5\.2 Determine the type of a vector
-----------------------------------
You want to know the type of a vector.
#### Solution
```
typeof(letters)
```
```
## [1] "character"
```
#### Discussion
R vectors can be one of six atomic types, or a list. `typeof()` provides a useful way to check which type of vector you are working with. This is useful, for example, when you want to match a function’s output to an appropriate map function ([below](transform-lists-and-vectors.html#map)).
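For instance (standard base R behaviour):
```
typeof(1:3)        # "integer"
typeof(c(1, 2.5))  # "double"
typeof(TRUE)       # "logical"
typeof(list(1))    # "list"
```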
5\.3 Map a function to each element of a vector
-----------------------------------------------
You want to apply a function separately to each element in a vector and then combine the results into a single object. This is similar to what you might do with a for loop, or with the apply family of functions.
For example, `got_chars` is a list of 30 sublists. You want to compute the `length()` of each sublist.
#### Solution
```
library(repurrrsive)
got_chars %>%
map(length)
```
```
## [[1]]
## [1] 18
##
## [[2]]
## [1] 18
##
## [[3]]
## [1] 18
##
## [[4]]
## [1] 18
##
## [[5]]
## [1] 18
##
## [[6]]
## [1] 18
##
## [[7]]
## [1] 18
##
## [[8]]
## [1] 18
##
## [[9]]
## [1] 18
##
## [[10]]
## [1] 18
##
## [[11]]
## [1] 18
##
## [[12]]
## [1] 18
##
## [[13]]
## [1] 18
##
## [[14]]
## [1] 18
##
## [[15]]
## [1] 18
##
## [[16]]
## [1] 18
##
## [[17]]
## [1] 18
##
## [[18]]
## [1] 18
##
## [[19]]
## [1] 18
##
## [[20]]
## [1] 18
##
## [[21]]
## [1] 18
##
## [[22]]
## [1] 18
##
## [[23]]
## [1] 18
##
## [[24]]
## [1] 18
##
## [[25]]
## [1] 18
##
## [[26]]
## [1] 18
##
## [[27]]
## [1] 18
##
## [[28]]
## [1] 18
##
## [[29]]
## [1] 18
##
## [[30]]
## [1] 18
```
#### Discussion
`map()` takes a vector to iterate over (here supplied by the pipe) followed by a function to apply to each element of the vector, followed by any arguments to pass to the function when it is applied to the vector.
Pass the function name to `map()` without quotes and without parentheses. `map()` will pass each element of the vector one at a time to the first argument of the function.
If your function requires additional arguments to do its job, pass the arguments to `map()`. `map()` will forward these arguments in order, with their names, to the function when `map()` runs the function.
```
got_chars %>%
map(keep, is.numeric)
```
```
## [[1]]
## [[1]]$id
## [1] 1022
##
##
## [[2]]
## [[2]]$id
## [1] 1052
##
##
## [[3]]
## [[3]]$id
## [1] 1074
##
##
## [[4]]
## [[4]]$id
## [1] 1109
##
##
## [[5]]
## [[5]]$id
## [1] 1166
##
##
## [[6]]
## [[6]]$id
## [1] 1267
##
##
## [[7]]
## [[7]]$id
## [1] 1295
##
##
## [[8]]
## [[8]]$id
## [1] 130
##
##
## [[9]]
## [[9]]$id
## [1] 1303
##
##
## [[10]]
## [[10]]$id
## [1] 1319
##
##
## [[11]]
## [[11]]$id
## [1] 148
##
##
## [[12]]
## [[12]]$id
## [1] 149
##
##
## [[13]]
## [[13]]$id
## [1] 150
##
##
## [[14]]
## [[14]]$id
## [1] 168
##
##
## [[15]]
## [[15]]$id
## [1] 2066
##
##
## [[16]]
## [[16]]$id
## [1] 208
##
##
## [[17]]
## [[17]]$id
## [1] 216
##
##
## [[18]]
## [[18]]$id
## [1] 232
##
##
## [[19]]
## [[19]]$id
## [1] 238
##
##
## [[20]]
## [[20]]$id
## [1] 339
##
##
## [[21]]
## [[21]]$id
## [1] 529
##
##
## [[22]]
## [[22]]$id
## [1] 576
##
##
## [[23]]
## [[23]]$id
## [1] 583
##
##
## [[24]]
## [[24]]$id
## [1] 60
##
##
## [[25]]
## [[25]]$id
## [1] 605
##
##
## [[26]]
## [[26]]$id
## [1] 743
##
##
## [[27]]
## [[27]]$id
## [1] 751
##
##
## [[28]]
## [[28]]$id
## [1] 844
##
##
## [[29]]
## [[29]]$id
## [1] 954
##
##
## [[30]]
## [[30]]$id
## [1] 957
```
##### The map family of functions
`map()` is one of ten similar functions provided by the purrr package that together form a family of functions. Each member of the map family applies a function to a vector in the same iterative way; but each member returns the results in a different type of data structure.
| Function | Returns |
| --- | --- |
| `map` | A list |
| `map_chr` | A character vector |
| `map_dbl` | A double (numeric) vector |
| `map_df` | A data frame (`map_df` does the equivalent of `map_dfr`) |
| `map_dfr` | A single data frame made by row\-binding the individual results[6](#fn6) |
| `map_dfc` | A single data frame made by column\-binding the individual results[7](#fn7) |
| `map_int` | An integer vector |
| `map_lgl` | A logical vector |
| `walk` | The original input (returned invisibly) |
#### How to choose a map function
To map a function over a vector, consider what type of output the function will produce. Then pick the map function that returns that type of output. This is a general rule that will return sensible results.
For example, the `length()` function returns an integer, so you would map `length()` over `got_chars` with `map_int()`, which returns the results as an integer vector.
```
got_chars %>%
map_int(length)
```
```
## [1] 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
## [26] 18 18 18 18 18
```
`walk()` returns the original vector invisibly (so you can pipe the result to a new function). `walk()` is intended to be used with functions like `plot()` or `print()`, which execute side effects but do not return an object to pass on.
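A sketch of that pattern: printing is the side effect, and the original list passes through invisibly to the next step.
```
got_chars %>%
  walk(~ print(.x[["name"]])) %>%
  map_int(length)
```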
##### Map shorthand syntax
Map functions recognize two syntax shorthands.
1. If your vector is a list of sublists, you can extract elements from the sublists by name or position, e.g.
```
got_chars %>%
map_chr("name")
```
```
## [1] "Theon Greyjoy" "Tyrion Lannister" "Victarion Greyjoy"
## [4] "Will" "Areo Hotah" "Chett"
## [7] "Cressen" "Arianne Martell" "Daenerys Targaryen"
## [10] "Davos Seaworth" "Arya Stark" "Arys Oakheart"
## [13] "Asha Greyjoy" "Barristan Selmy" "Varamyr"
## [16] "Brandon Stark" "Brienne of Tarth" "Catelyn Stark"
## [19] "Cersei Lannister" "Eddard Stark" "Jaime Lannister"
## [22] "Jon Connington" "Jon Snow" "Aeron Greyjoy"
## [25] "Kevan Lannister" "Melisandre" "Merrett Frey"
## [28] "Quentyn Martell" "Samwell Tarly" "Sansa Stark"
```
```
got_chars %>%
map_chr(3)
```
```
## [1] "Theon Greyjoy" "Tyrion Lannister" "Victarion Greyjoy"
## [4] "Will" "Areo Hotah" "Chett"
## [7] "Cressen" "Arianne Martell" "Daenerys Targaryen"
## [10] "Davos Seaworth" "Arya Stark" "Arys Oakheart"
## [13] "Asha Greyjoy" "Barristan Selmy" "Varamyr"
## [16] "Brandon Stark" "Brienne of Tarth" "Catelyn Stark"
## [19] "Cersei Lannister" "Eddard Stark" "Jaime Lannister"
## [22] "Jon Connington" "Jon Snow" "Aeron Greyjoy"
## [25] "Kevan Lannister" "Melisandre" "Merrett Frey"
## [28] "Quentyn Martell" "Samwell Tarly" "Sansa Stark"
```
These do the equivalent of
```
got_chars %>%
map_chr(pluck, "name")
got_chars %>%
map_chr(pluck, 3)
```
2. You can use `~` and `.x` to map with expressions instead of functions. To turn a piece of code into an expression to map, first place a `~` at the start of the code. Then use `.x` as a pronoun for the value that map should supply from the vector to map over. The map function will iteratively pass each element of your vector to the `.x` in the expression and then run the expression.
An expression like this:
```
got_chars %>%
map_lgl(~length(.x) > 0)
```
becomes the equivalent of
```
got_chars %>%
map_lgl(function(x) length(x) > 0)
```
Expressions provide an easy way to map over a function argument that is not the first.
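For example, a sketch that maps over the `mean` argument of `rnorm()` (its second argument) while holding the number of draws fixed:
```
1:3 %>%
  map(~ rnorm(2, mean = .x))
```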
What you should know before you begin
-------------------------------------
A vector is a one dimensional array of elements. Vectors are the basic building blocks of R. Almost all data in R is stored in a vector, or even a vector of vectors.
A list is a *recursive vector*: a vector that can contain another vector or list in each of its elements.
Lists are one of the most flexible data structures in R. As a result, they are used as a general purpose glue to hold objects together. You will find lists disguised as model objects, data frames, list\-columns within data frames, and more.
Data frames are a sub\-type of list. Each column of a data frame is an element of the list that the data frame is built around.
More than any other part of R, lists demonstrate how a programming language can appear different to beginners than to experts. Seasoned R programmers do not distinguish between lists and vectors because the two are equivalent: a list is a type of vector.
However, a beginner who uses R for data science will quickly see that lists behave differently than other types of vectors. First, many R functions will not accept lists as input, even though they accept other types of vectors. Second, when you subset a list with `[ ]` to extract the value of one of its elements, R will give you a new list of length one that contains the value as its first element. This poses a problem if you want to pass that value to a function that does not accept lists as input (solve that problem with [this recipe](transform-lists-and-vectors.html#extract)).
To respect this difference, I try to be clear when talking about vectors that are not lists. This introduces a new problem: when you speak about R, it is difficult to distinguish between vectors that are lists and vectors that are not lists. Whenever the difference matters, I’ll call the first set of vectors **lists** and the second set of vectors **data vectors**[5](#fn5). I’ll refer to the superset that includes both lists and data vectors as vectors.
Data vectors come in six atomic *types*: double, integer logical, character, complex, and raw. Every element in a data vector must be the same type of data as the vector (if you try to put a different type of data into an atomic vector, R will coerce the data’s type to match the vector). R also contains an S3 class system that builds classes like factors and date\-times on top of the atomic types. You don’t need to understand R’s types and classes to use R or this cookbook, but you should know that R will recognize different types of data and treat them accordingly.
This chapter focuses on both lists and data vectors, but it only features recipes that work with the *structure* of a list or data vector. The chapters that follow will contain recipes that work with the *types* of data stored in a data vector.
5\.1 Extract an element from a list
-----------------------------------
You want to return the value of an element of a list as it is, perhaps to use in a function. You do not want the value to come embedded in a list of length one.
#### Solution
```
# returns the element named x in state.center
state.center %>%
pluck("x")
```
```
## [1] -86.7509 -127.2500 -111.6250 -92.2992 -119.7730 -105.5130 -72.3573
## [8] -74.9841 -81.6850 -83.3736 -126.2500 -113.9300 -89.3776 -86.0808
## [15] -93.3714 -98.1156 -84.7674 -92.2724 -68.9801 -76.6459 -71.5800
## [22] -84.6870 -94.6043 -89.8065 -92.5137 -109.3200 -99.5898 -116.8510
## [29] -71.3924 -74.2336 -105.9420 -75.1449 -78.4686 -100.0990 -82.5963
## [36] -97.1239 -120.0680 -77.4500 -71.1244 -80.5056 -99.7238 -86.4560
## [43] -98.7857 -111.3300 -72.5450 -78.2005 -119.7460 -80.6665 -89.9941
## [50] -107.2560
```
#### Discussion
`pluck()` comes in the purrr package and does the equivalent of `[[` subsetting. If you pass `pluck()` a character string, `pluck()` will return the element whose name matches the string. If you pass `pluck()` an integer *n*, `pluck()` will return the *nth* element of the list.
Pass multiple arguments to `pluck()` to subset multiple times. `pluck()` will subset the result of each argument with the argument that follows, e.g.
```
library(repurrrsive)
sw_films %>%
pluck(7, "title")
```
```
## [1] "The Force Awakens"
```
5\.2 Determine the type of a vector
-----------------------------------
You want to know the type of a vector.
#### Solution
```
typeof(letters)
```
```
## [1] "character"
```
#### Discussion
R vectors can be one of six atomic types, or a list. `typeof()` provides a useful way to check which type of vector you are working with. This is useful, for example, when you want to match a function’s output to an appropriate map function ([below](transform-lists-and-vectors.html#map)).
5\.3 Map a function to each element of a vector
-----------------------------------------------
You want to apply a function separately to each element in a vector and then combine the results into a single object. This is similar to what you might do with a for loop, or with the apply family of functions.
For example, `got_chars` is a list of 30 sublists. You want to compute the `length()` of each sublist.
#### Solution
```
library(repurrrsive)
got_chars %>%
map(length)
```
```
## [[1]]
## [1] 18
##
## [[2]]
## [1] 18
##
## [[3]]
## [1] 18
##
## [[4]]
## [1] 18
##
## [[5]]
## [1] 18
##
## [[6]]
## [1] 18
##
## [[7]]
## [1] 18
##
## [[8]]
## [1] 18
##
## [[9]]
## [1] 18
##
## [[10]]
## [1] 18
##
## [[11]]
## [1] 18
##
## [[12]]
## [1] 18
##
## [[13]]
## [1] 18
##
## [[14]]
## [1] 18
##
## [[15]]
## [1] 18
##
## [[16]]
## [1] 18
##
## [[17]]
## [1] 18
##
## [[18]]
## [1] 18
##
## [[19]]
## [1] 18
##
## [[20]]
## [1] 18
##
## [[21]]
## [1] 18
##
## [[22]]
## [1] 18
##
## [[23]]
## [1] 18
##
## [[24]]
## [1] 18
##
## [[25]]
## [1] 18
##
## [[26]]
## [1] 18
##
## [[27]]
## [1] 18
##
## [[28]]
## [1] 18
##
## [[29]]
## [1] 18
##
## [[30]]
## [1] 18
```
#### Discussion
`map()` takes a vector to iterate over (here supplied by the pipe) followed by a function to apply to each element of the vector, followed by any arguments to pass to the function when it is applied to the vector.
Pass the function name to `map()` without quotes and without parentheses. `map()` will pass each element of the vector one at a time to the first argument of the function.
If your function requires additional arguments to do its job, pass the arguments to `map()`. `map()` will forward these arguments in order, with their names, to the function when `map()` runs the function.
```
got_chars %>%
map(keep, is.numeric)
```
```
## [[1]]
## [[1]]$id
## [1] 1022
##
##
## [[2]]
## [[2]]$id
## [1] 1052
##
##
## [[3]]
## [[3]]$id
## [1] 1074
##
##
## [[4]]
## [[4]]$id
## [1] 1109
##
##
## [[5]]
## [[5]]$id
## [1] 1166
##
##
## [[6]]
## [[6]]$id
## [1] 1267
##
##
## [[7]]
## [[7]]$id
## [1] 1295
##
##
## [[8]]
## [[8]]$id
## [1] 130
##
##
## [[9]]
## [[9]]$id
## [1] 1303
##
##
## [[10]]
## [[10]]$id
## [1] 1319
##
##
## [[11]]
## [[11]]$id
## [1] 148
##
##
## [[12]]
## [[12]]$id
## [1] 149
##
##
## [[13]]
## [[13]]$id
## [1] 150
##
##
## [[14]]
## [[14]]$id
## [1] 168
##
##
## [[15]]
## [[15]]$id
## [1] 2066
##
##
## [[16]]
## [[16]]$id
## [1] 208
##
##
## [[17]]
## [[17]]$id
## [1] 216
##
##
## [[18]]
## [[18]]$id
## [1] 232
##
##
## [[19]]
## [[19]]$id
## [1] 238
##
##
## [[20]]
## [[20]]$id
## [1] 339
##
##
## [[21]]
## [[21]]$id
## [1] 529
##
##
## [[22]]
## [[22]]$id
## [1] 576
##
##
## [[23]]
## [[23]]$id
## [1] 583
##
##
## [[24]]
## [[24]]$id
## [1] 60
##
##
## [[25]]
## [[25]]$id
## [1] 605
##
##
## [[26]]
## [[26]]$id
## [1] 743
##
##
## [[27]]
## [[27]]$id
## [1] 751
##
##
## [[28]]
## [[28]]$id
## [1] 844
##
##
## [[29]]
## [[29]]$id
## [1] 954
##
##
## [[30]]
## [[30]]$id
## [1] 957
```
##### 5\.3\.0\.0\.1 The map family of functions
`map()` is one of ten similar functions provided by the purrr package that together form a family of functions. Each member of the map family applies a function to a vector in the same iterative way; but each member returns the results in a different type of data structure.
| Function | Returns |
| --- | --- |
| `map` | A list |
| `map_chr` | A character vector |
| `map_dbl` | A double (numeric) vector |
| `map_df` | A data frame (`map_df` does the equivalent of `map_dfr`) |
| `map_dfr` | A single data frame made by row\-binding the individual results[6](#fn6) |
| `map_dfc` | A single data frame made by column\-binding the individual results[7](#fn7) |
| `map_int` | An integer vector |
| `map_lgl` | A logical vector |
| `walk` | The original input (returned invisibly) |
#### 5\.3\.0\.1 How to choose a map function
To map a function over a vector, consider what type of output the function will produce. Then pick the map function that returns that type of output. This is a general rule that will return sensible results.
For example, the `length()` function returns an integer, so you would map `length()` over `got_chars` with `map_int()`, which returns the results as an integer vector.
```
got_chars %>%
map_int(length)
```
```
## [1] 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
## [26] 18 18 18 18 18
```
`walk()` returns the original vector invisibly (so you can pipe the result to a new function). `walk()` is intended to be used with functions like `plot()` or `print()`, which execute side effects but do not return an object to pass on.
##### 5\.3\.0\.1\.1 Map shorthand syntax
Map functions recognize two syntax shorthands.
1. If your vector is a list of sublists, you can extract elements from the sublists by name or position, e.g.
```
got_chars %>%
map_chr("name")
```
```
## [1] "Theon Greyjoy" "Tyrion Lannister" "Victarion Greyjoy"
## [4] "Will" "Areo Hotah" "Chett"
## [7] "Cressen" "Arianne Martell" "Daenerys Targaryen"
## [10] "Davos Seaworth" "Arya Stark" "Arys Oakheart"
## [13] "Asha Greyjoy" "Barristan Selmy" "Varamyr"
## [16] "Brandon Stark" "Brienne of Tarth" "Catelyn Stark"
## [19] "Cersei Lannister" "Eddard Stark" "Jaime Lannister"
## [22] "Jon Connington" "Jon Snow" "Aeron Greyjoy"
## [25] "Kevan Lannister" "Melisandre" "Merrett Frey"
## [28] "Quentyn Martell" "Samwell Tarly" "Sansa Stark"
```
```
got_chars %>%
map_chr(3)
```
```
## [1] "Theon Greyjoy" "Tyrion Lannister" "Victarion Greyjoy"
## [4] "Will" "Areo Hotah" "Chett"
## [7] "Cressen" "Arianne Martell" "Daenerys Targaryen"
## [10] "Davos Seaworth" "Arya Stark" "Arys Oakheart"
## [13] "Asha Greyjoy" "Barristan Selmy" "Varamyr"
## [16] "Brandon Stark" "Brienne of Tarth" "Catelyn Stark"
## [19] "Cersei Lannister" "Eddard Stark" "Jaime Lannister"
## [22] "Jon Connington" "Jon Snow" "Aeron Greyjoy"
## [25] "Kevan Lannister" "Melisandre" "Merrett Frey"
## [28] "Quentyn Martell" "Samwell Tarly" "Sansa Stark"
```
These do the equivalent of
```
got_chars %>%
map_chr(pluck, "name")
got_chars %>%
map_chr(pluck, 3)
```
2. You can use `~` and `.x` to map with expressions instead of functions. To turn a piece of code into an expression to map, first place a `~` at the start of the code. Then use `.x` as a pronoun for the value that map should supply from the vector being mapped over. The map function will iteratively pass each element of your vector to the `.x` in the expression and then run the expression.
An expression like this:
```
got_chars %>%
map_lgl(~length(.x) > 0)
```
becomes the equivalent of
```
got_chars %>%
map_lgl(function(x) length(x) > 0)
```
Expressions provide an easy way to map over a function argument that is not the first.
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/transform-lists-and-vectors.html |
5 Transform Lists and Vectors
=============================
---
This chapter includes the following recipes:
1. [Extract an element from a list](transform-lists-and-vectors.html#extract)
2. [Determine the type of a vector](transform-lists-and-vectors.html#determine-the-type-of-a-vector)
3. [Map a function to each element of a vector](transform-lists-and-vectors.html#map)
---
What you should know before you begin
-------------------------------------
A vector is a one dimensional array of elements. Vectors are the basic building blocks of R. Almost all data in R is stored in a vector, or even a vector of vectors.
A list is a *recursive vector*: a vector that can contain another vector or list in each of its elements.
Lists are one of the most flexible data structures in R. As a result, they are used as a general purpose glue to hold objects together. You will find lists disguised as model objects, data frames, list\-columns within data frames, and more.
Data frames are a sub\-type of list. Each column of a data frame is an element of the list that the data frame is built around.
More than any other part of R, lists demonstrate how a programming language can appear different to beginners than to experts. Seasoned R programmers do not distinguish between lists and vectors because the two are equivalent: a list is a type of vector.
However, a beginner who uses R for data science will quickly see that lists behave differently than other types of vectors. First, many R functions will not accept lists as input, even though they accept other types of vectors. Second, when you subset a list with `[ ]` to extract the value of one of its elements, R will give you a new list of length one that contains the value as its first element. This poses a problem if you want to pass that value to a function that does not accept lists as input (solve that problem with [this recipe](transform-lists-and-vectors.html#extract)).
To respect this difference, I try to be clear when talking about vectors that are not lists. This introduces a new problem: when you speak about R, it is difficult to distinguish between vectors that are lists and vectors that are not lists. Whenever the difference matters, I’ll call the first set of vectors **lists** and the second set of vectors **data vectors**[5](#fn5). I’ll refer to the superset that includes both lists and data vectors as vectors.
Data vectors come in six atomic *types*: double, integer, logical, character, complex, and raw. Every element in a data vector must be the same type of data as the vector (if you try to put a different type of data into an atomic vector, R will coerce the data’s type to match the vector). R also contains an S3 class system that builds classes like factors and date\-times on top of the atomic types. You don’t need to understand R’s types and classes to use R or this cookbook, but you should know that R will recognize different types of data and treat them accordingly.
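For example, a quick sketch of how that coercion plays out:
```
typeof(c(1L, 2L))       # "integer"
typeof(c(1L, 2.5))      # "double": the integer is coerced to double
typeof(c(TRUE, "yes"))  # "character": everything is coerced to character
```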
This chapter focuses on both lists and data vectors, but it only features recipes that work with the *structure* of a list or data vector. The chapters that follow will contain recipes that work with the *types* of data stored in a data vector.
5\.1 Extract an element from a list
-----------------------------------
You want to return the value of an element of a list as it is, perhaps to use in a function. You do not want the value to come embedded in a list of length one.
#### Solution
```
# returns the element named x in state.center
state.center %>%
pluck("x")
```
```
## [1] -86.7509 -127.2500 -111.6250 -92.2992 -119.7730 -105.5130 -72.3573
## [8] -74.9841 -81.6850 -83.3736 -126.2500 -113.9300 -89.3776 -86.0808
## [15] -93.3714 -98.1156 -84.7674 -92.2724 -68.9801 -76.6459 -71.5800
## [22] -84.6870 -94.6043 -89.8065 -92.5137 -109.3200 -99.5898 -116.8510
## [29] -71.3924 -74.2336 -105.9420 -75.1449 -78.4686 -100.0990 -82.5963
## [36] -97.1239 -120.0680 -77.4500 -71.1244 -80.5056 -99.7238 -86.4560
## [43] -98.7857 -111.3300 -72.5450 -78.2005 -119.7460 -80.6665 -89.9941
## [50] -107.2560
```
#### Discussion
`pluck()` comes in the purrr package and does the equivalent of `[[` subsetting. If you pass `pluck()` a character string, `pluck()` will return the element whose name matches the string. If you pass `pluck()` an integer *n*, `pluck()` will return the *nth* element of the list.
Pass multiple arguments to `pluck()` to subset multiple times. `pluck()` will subset the result of each argument with the argument that follows, e.g.
```
library(repurrrsive)
sw_films %>%
pluck(7, "title")
```
```
## [1] "The Force Awakens"
```
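You can also mix positions and names. Based on the character names listed later in this chapter, this sketch should return `"Theon Greyjoy"`:
```
# position first, then name: the name of the first got_chars character
got_chars %>%
  pluck(1, "name")
```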
5\.2 Determine the type of a vector
-----------------------------------
You want to know the type of a vector.
#### Solution
```
typeof(letters)
```
```
## [1] "character"
```
#### Discussion
R vectors can be one of six atomic types, or a list. `typeof()` provides a useful way to check which type of vector you are working with. This is useful, for example, when you want to match a function’s output to an appropriate map function ([below](transform-lists-and-vectors.html#map)).
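For example, a few quick checks (expected results shown as comments):
```
typeof(1:3)          # "integer"
typeof(c(1, 2, 3))   # "double"
typeof(c(TRUE, NA))  # "logical"
typeof(list(1, 2))   # "list"
```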
5\.3 Map a function to each element of a vector
-----------------------------------------------
You want to apply a function separately to each element in a vector and then combine the results into a single object. This is similar to what you might do with a for loop, or with the apply family of functions.
For example, `got_chars` is a list of 30 sublists. You want to compute the `length()` of each sublist.
#### Solution
```
library(repurrrsive)
got_chars %>%
map(length)
```
```
## [[1]]
## [1] 18
##
## [[2]]
## [1] 18
##
## [[3]]
## [1] 18
##
## [[4]]
## [1] 18
##
## [[5]]
## [1] 18
##
## [[6]]
## [1] 18
##
## [[7]]
## [1] 18
##
## [[8]]
## [1] 18
##
## [[9]]
## [1] 18
##
## [[10]]
## [1] 18
##
## [[11]]
## [1] 18
##
## [[12]]
## [1] 18
##
## [[13]]
## [1] 18
##
## [[14]]
## [1] 18
##
## [[15]]
## [1] 18
##
## [[16]]
## [1] 18
##
## [[17]]
## [1] 18
##
## [[18]]
## [1] 18
##
## [[19]]
## [1] 18
##
## [[20]]
## [1] 18
##
## [[21]]
## [1] 18
##
## [[22]]
## [1] 18
##
## [[23]]
## [1] 18
##
## [[24]]
## [1] 18
##
## [[25]]
## [1] 18
##
## [[26]]
## [1] 18
##
## [[27]]
## [1] 18
##
## [[28]]
## [1] 18
##
## [[29]]
## [1] 18
##
## [[30]]
## [1] 18
```
#### Discussion
`map()` takes a vector to iterate over (here supplied by the pipe) followed by a function to apply to each element of the vector, followed by any arguments to pass to the function when it is applied to the vector.
Pass the function name to `map()` without quotes and without parentheses. `map()` will pass each element of the vector one at a time to the first argument of the function.
If your function requires additional arguments to do its job, pass the arguments to `map()`. `map()` will forward these arguments in order, with their names, to the function when `map()` runs the function.
```
got_chars %>%
map(keep, is.numeric)
```
```
## [[1]]
## [[1]]$id
## [1] 1022
##
##
## [[2]]
## [[2]]$id
## [1] 1052
##
##
## [[3]]
## [[3]]$id
## [1] 1074
##
##
## [[4]]
## [[4]]$id
## [1] 1109
##
##
## [[5]]
## [[5]]$id
## [1] 1166
##
##
## [[6]]
## [[6]]$id
## [1] 1267
##
##
## [[7]]
## [[7]]$id
## [1] 1295
##
##
## [[8]]
## [[8]]$id
## [1] 130
##
##
## [[9]]
## [[9]]$id
## [1] 1303
##
##
## [[10]]
## [[10]]$id
## [1] 1319
##
##
## [[11]]
## [[11]]$id
## [1] 148
##
##
## [[12]]
## [[12]]$id
## [1] 149
##
##
## [[13]]
## [[13]]$id
## [1] 150
##
##
## [[14]]
## [[14]]$id
## [1] 168
##
##
## [[15]]
## [[15]]$id
## [1] 2066
##
##
## [[16]]
## [[16]]$id
## [1] 208
##
##
## [[17]]
## [[17]]$id
## [1] 216
##
##
## [[18]]
## [[18]]$id
## [1] 232
##
##
## [[19]]
## [[19]]$id
## [1] 238
##
##
## [[20]]
## [[20]]$id
## [1] 339
##
##
## [[21]]
## [[21]]$id
## [1] 529
##
##
## [[22]]
## [[22]]$id
## [1] 576
##
##
## [[23]]
## [[23]]$id
## [1] 583
##
##
## [[24]]
## [[24]]$id
## [1] 60
##
##
## [[25]]
## [[25]]$id
## [1] 605
##
##
## [[26]]
## [[26]]$id
## [1] 743
##
##
## [[27]]
## [[27]]$id
## [1] 751
##
##
## [[28]]
## [[28]]$id
## [1] 844
##
##
## [[29]]
## [[29]]$id
## [1] 954
##
##
## [[30]]
## [[30]]$id
## [1] 957
```
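Named arguments are forwarded in the same way. A minimal sketch (the small list below is made up for illustration):
```
list(c(1, NA, 3), c(2, 4)) %>%
  map_dbl(mean, na.rm = TRUE)
```
```
## [1] 2 3
```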
##### 5\.3\.0\.0\.1 The map family of functions
`map()` is one of ten similar functions provided by the purrr package that together form a family of functions. Each member of the map family applies a function to a vector in the same iterative way; but each member returns the results in a different type of data structure.
| Function | Returns |
| --- | --- |
| `map` | A list |
| `map_chr` | A character vector |
| `map_dbl` | A double (numeric) vector |
| `map_df` | A data frame (`map_df` does the equivalent of `map_dfr`) |
| `map_dfr` | A single data frame made by row\-binding the individual results[6](#fn6) |
| `map_dfc` | A single data frame made by column\-binding the individual results[7](#fn7) |
| `map_int` | An integer vector |
| `map_lgl` | A logical vector |
| `walk` | The original input (returned invisibly) |
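For example, a minimal sketch of `map_dfr()`, assuming the tidyverse and repurrrsive packages are loaded:
```
library(tidyverse)
library(repurrrsive)

# one row per character, columns taken from each sublist
got_chars %>%
  map_dfr(~ tibble(name = .x$name, id = .x$id))
```
The result is a data frame with 30 rows and the columns `name` and `id`.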
#### 5\.3\.0\.1 How to choose a map function
To map a function over a vector, consider what type of output the function will produce. Then pick the map function that returns that type of output. This is a general rule that will return sensible results.
For example, the `length()` function returns an integer, so you would map `length()` over `got_chars` with `map_int()`, which returns the results as an integer vector.
```
got_chars %>%
map_int(length)
```
```
## [1] 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
## [26] 18 18 18 18 18
```
`walk()` returns the original vector invisibly (so you can pipe the result to a new function). `walk()` is intended to be used with functions like `plot()` or `print()`, which execute side effects but do not return an object to pass on.
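For example, this sketch prints the first three character names as a side effect and still returns its input invisibly:
```
got_chars %>%
  map_chr("name") %>%
  head(3) %>%
  walk(print)
```
```
## [1] "Theon Greyjoy"
## [1] "Tyrion Lannister"
## [1] "Victarion Greyjoy"
```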
##### 5\.3\.0\.1\.1 Map shorthand syntax
Map functions recognize two syntax shorthands.
1. If your vector is a list of sublists, you can extract elements from the sublists by name or position, e.g.
```
got_chars %>%
map_chr("name")
```
```
## [1] "Theon Greyjoy" "Tyrion Lannister" "Victarion Greyjoy"
## [4] "Will" "Areo Hotah" "Chett"
## [7] "Cressen" "Arianne Martell" "Daenerys Targaryen"
## [10] "Davos Seaworth" "Arya Stark" "Arys Oakheart"
## [13] "Asha Greyjoy" "Barristan Selmy" "Varamyr"
## [16] "Brandon Stark" "Brienne of Tarth" "Catelyn Stark"
## [19] "Cersei Lannister" "Eddard Stark" "Jaime Lannister"
## [22] "Jon Connington" "Jon Snow" "Aeron Greyjoy"
## [25] "Kevan Lannister" "Melisandre" "Merrett Frey"
## [28] "Quentyn Martell" "Samwell Tarly" "Sansa Stark"
```
```
got_chars %>%
map_chr(3)
```
```
## [1] "Theon Greyjoy" "Tyrion Lannister" "Victarion Greyjoy"
## [4] "Will" "Areo Hotah" "Chett"
## [7] "Cressen" "Arianne Martell" "Daenerys Targaryen"
## [10] "Davos Seaworth" "Arya Stark" "Arys Oakheart"
## [13] "Asha Greyjoy" "Barristan Selmy" "Varamyr"
## [16] "Brandon Stark" "Brienne of Tarth" "Catelyn Stark"
## [19] "Cersei Lannister" "Eddard Stark" "Jaime Lannister"
## [22] "Jon Connington" "Jon Snow" "Aeron Greyjoy"
## [25] "Kevan Lannister" "Melisandre" "Merrett Frey"
## [28] "Quentyn Martell" "Samwell Tarly" "Sansa Stark"
```
These do the equivalent of
```
got_chars %>%
map_chr(pluck, "name")
got_chars %>%
map_chr(pluck, 3)
```
2. You can use `~` and `.x` to map with expressions instead of functions. To turn a piece of code into an expression to map, first place a `~` at the start of the code. Then use `.x` as a pronoun for the value that map should supply from the vector being mapped over. The map function will iteratively pass each element of your vector to the `.x` in the expression and then run the expression.
An expression like this:
```
got_chars %>%
map_lgl(~length(.x) > 0)
```
becomes the equivalent of
```
got_chars %>%
map_lgl(function(x) length(x) > 0)
```
Expressions provide an easy way to map over a function argument that is not the first.
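For example, in this sketch `.x` is slotted into the third argument of `gsub()` (the string to modify), not the first:
```
letters[1:3] %>%
  map_chr(~ gsub("a", "A", .x))
```
```
## [1] "A" "b" "c"
```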
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/visualize-data.html |
9 Visualize Data
================
---
1. [Make a plot](visualize-data.html#make-a-plot)
2. [Make a specific “type” of plot, like a line plot, bar chart, etc.](visualize-data.html#make-a-specific-type-of-plot-like-a-line-plot-bar-chart-etc.)
3. [Add a variable to a plot as a color, shape, etc.](visualize-data.html#add-a-variable-to-a-plot-as-a-color-shape-etc.)
4. [Add multiple layers to a plot](visualize-data.html#add-multiple-layers-to-a-plot)
5. [Subdivide a plot into multiple subplots](visualize-data.html#subdivide-a-plot-into-multiple-subplots)
6. [Add a title, subtitle, or caption to a plot](visualize-data.html#add-a-title-subtitle-or-caption-to-a-plot)
7. [Change the axis labels of a plot](visualize-data.html#change-the-axis-labels-of-a-plot)
8. [Change the title of a legend](visualize-data.html#change-the-title-of-a-legend)
9. [Change the names within a legend](visualize-data.html#change-the-names-within-a-legend)
10. [Remove the legend from a plot](visualize-data.html#remove-the-legend-from-a-plot)
11. [Change the appearance or look of a plot](visualize-data.html#change-the-appearance-or-look-of-a-plot)
12. [Change the colors used in a plot](visualize-data.html#change-the-colors-used-in-a-plot)
13. [Change the range of the x or y axis](visualize-data.html#change-the-range-of-the-x-or-y-axis)
14. [Use polar coordinates](visualize-data.html#use-polar-coordinates)
15. [Flip the x and y axes](visualize-data.html#flip-the-x-and-y-axes)
---
What you should know before you begin
-------------------------------------
The ggplot2 package provides the most important tidyverse functions for making plots. These functions let you use the [layered grammar of graphics](https://r4ds.had.co.nz/data-visualisation.html#the-layered-grammar-of-graphics) to describe the plot you wish to make. In theory, you can describe *any* plot with the layered grammar of graphics.
To use ggplot2, you build and then extend the graph template described in [Make a plot](visualize-data.html#make-a-plot). ggplot2 is loaded when you run `library(tidyverse)`.
9\.1 Make a plot
----------------
You want to make a plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point()
```
#### Discussion
Use the same code template to begin *any* plot:
```
ggplot(data = <DATA>, mapping = aes(x = <X_VARIABLE>, y = <Y_VARIABLE>)) +
<GEOM_FUNCTION>()
```
Replace `<DATA>`, `<X_VARIABLE>`, `<Y_VARIABLE>` and `<GEOM_FUNCTION>` with the data set, column names within that data set, and [geom\_function](visualize-data.html#make-a-specific-type-of-plot-like-a-line-plot-bar-chart-etc.) that you’d like to use in your plot. Within ggplot2 code, you should *not* surround column names with quotes.
The `geom_point()` function above will create a scatterplot. Notice that some types of plots, like a histogram, do not require a y variable, e.g.
```
ggplot(data = mpg, mapping = aes(x = cty)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
When using the ggplot2 template, be sure to put each `+` at the *end* of the line it follows and never at the beginning of the line that follows it.
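As a quick illustration of why the placement matters (a sketch of my own, not part of the recipe): a trailing `+` tells R that the expression continues on the next line, while a leading `+` leaves R with what looks like a finished plot followed by an orphaned layer.
```
# Works: R keeps reading because the line ends with +
ggplot(data = mpg, mapping = aes(x = cty)) +
  geom_histogram()

# Fails: the first line is treated as a complete plot, and the second line
# is then evaluated on its own and errors
# ggplot(data = mpg, mapping = aes(x = cty))
# + geom_histogram()
```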
9\.2 Make a specific “type” of plot, like a line plot, bar chart, etc.
----------------------------------------------------------------------
You want to make a plot that is not a scatterplot, as in the [Make a plot](visualize-data.html#make-a-plot) recipe.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_line()
```
#### Discussion
The `<GEOM_FUNCTION>` in the [ggplot2 code template](visualize-data.html#make-a-plot) determines which type of plot R will draw. So use the geom function that corresponds to the type of plot you have in mind. A complete list of geom functions can be found in the [ggplot2 cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf) or at the [ggplot2 package website](https://ggplot2.tidyverse.org/reference/index.html).
Geom is short for *geometric object*. It refers to the type of visual mark your plot uses to represent data.
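For example, a sketch (my own, using the built-in `diamonds` data set) that swaps in `geom_bar()` to draw a bar chart instead of a scatterplot:
```
# geom_bar() counts the rows in each category of cut, so no y variable is needed
ggplot(data = diamonds, mapping = aes(x = cut)) +
  geom_bar()
```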
9\.3 Add a variable to a plot as a color, shape, etc.
-----------------------------------------------------
You want to add a new variable to your plot that will appear as a color, shape, line type, or some other visual element accompanied by a legend.
#### Solution
```
ggplot(
data = mpg,
mapping = aes(x = displ, y = hwy, color = class, size = cty)
) +
geom_point()
```
#### Discussion
In the language of ggplot2, visual elements, like color, are called *aesthetics*. Use the `mapping = aes()` argument of `ggplot` to map aesthetics to variables in the data set, like `class` and `cty`, in this example. Separate each new mapping with a comma. ggplot2 will automatically add a legend to the plot that explains which elements of the aesthetic correspond to which values of the variable.
Different types of plots support different types of aesthetics. To determine which aesthetics you may use, read the aesthetics section of the help page of the plot’s geom function, e.g. `?geom_point`.
Aesthetic mappings must be defined inside of the `aes()` helper function, which preprocesses them to pass to ggplot2\.
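A related sketch (my own, not part of the recipe) showing the difference between *mapping* an aesthetic inside `aes()` and *setting* it to a constant outside of `aes()`:
```
# Mapped: color varies with class and ggplot2 draws a legend
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point(mapping = aes(color = class))

# Set: every point is navy and no legend is drawn
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point(color = "navy")
```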
9\.4 Add multiple layers to a plot
----------------------------------
You want to add multiple layers of data to your plot. For example, you want to overlay a trend line on top of a scatterplot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
#### Discussion
To add new layers to a plot, add new geom functions to the end of the plot’s [ggplot2 code template](visualize-data.html#make-a-plot). Each new geom will be drawn *on top* of the previous geoms in the same coordinate space.
Be sure to put each `+` at the *end* of the line it follows and never at the beginning of the line that follows it.
When you use multiple geoms, data and mappings defined in `ggplot()` are applied *globally* to each geom. You may also define data and mappings within each geom function. These will be applied locally to only the geom in which they are defined. For that geom, local mappings will override and extend global ones, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point(mapping = aes(color = class)) +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
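For example, a sketch (my own, assuming dplyr is available as part of the tidyverse) that supplies a local `data` argument so the smooth line is fit to only part of the data:
```
# The points show every car; the smooth is fit only to the subcompact class
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point(mapping = aes(color = class)) +
  geom_smooth(data = dplyr::filter(mpg, class == "subcompact"), se = FALSE)
```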
9\.5 Subdivide a plot into multiple subplots
--------------------------------------------
You want to make a collection of subplots that each display a different part of the data set.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
facet_wrap(~class)
```
#### Discussion
In ggplot2 vocabulary, subplots are called *facets*. Each subplot shares the same x and y variables, which facilitates comparison.
Use one of two functions to facet your plot. `facet_wrap()` creates a separate subplot for each value of a variable. Pass the variable to `facet_wrap()` preceded by a `~`.
`facet_grid()` creates a grid of facets based on the combinations of values of *two* variables. Pass `facet_grid()` two variables separated by a `~`, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
facet_grid(cyl ~ fl)
```
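If you want `facet_grid()` to facet in only one direction, a common idiom (shown here as a sketch of my own) is to put a `.` on the side you want to leave unfaceted:
```
# One row of panels, split into columns by cyl
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point() +
  facet_grid(. ~ cyl)
```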
9\.6 Add a title, subtitle, or caption to a plot
------------------------------------------------
You want to add a title and subtitle above your plot, and/or a caption below it.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
labs(
title = "Engine size vs. Fuel efficiency",
subtitle = "Fuel efficiency estimated for highway driving",
caption = "Data from fueleconomy.gov"
)
```
#### Discussion
The `labs()` function adds labels to ggplot2 plots. Each argument of `labs()` is optional. Labels should be provided as character strings.
9\.7 Change the axis labels of a plot
-------------------------------------
You want to change the names of the x and y axes.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
labs(
title = "Engine size vs. Fuel efficiency",
x = "Engine Displacement (L)",
y = "Fuel Efficiency (MPG)"
)
```
#### Discussion
By default, ggplot2 labels the x and y axes with the names of the variables mapped to the axes. Override this with the `x` and `y` arguments of `labs()`.
9\.8 Change the title of a legend
---------------------------------
You want to change the title of a legend.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
labs(color = "Type of car")
```
#### Discussion
ggplot2 creates a legend for each aesthetic except for `x`, `y`, and `group`. By default, ggplot2 labels the legend with the name of the variable mapped to the aesthetic. To override this, add the `labs()` function to your template and pass a character string to the aesthetic whose legend you want to relabel.
Use a `\n` to place a line break in the title, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
labs(color = "Type of\ncar")
```
9\.9 Change the names within a legend
-------------------------------------
You want to change the value labels that appear within a legend.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
scale_color_discrete(labels = c("Two seater", "Compact" ,"Sedan", "Minivan", "Truck", "Subcompact", "SUV"))
```
#### Discussion
To change the labels within a legend, you must interact with ggplot2’s [scale system](https://r4ds.had.co.nz/graphics-for-communication.html#scales). To do this:
1. Determine which scale the plot will use. This will be a function whose name is composed of `scale_`, followed by the name of an aesthetic, followed by `_`, followed by an identifier, often `discrete` or `continuous`.
2. Explicitly add the scale to your code template.
3. Set the `labels` argument of the scale to a vector of character strings, one for each label in the legend (see the sketch below).
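Putting the three steps together, here is a sketch (my own variation on the recipe above) that names each label after the original value of `class`, so the labels cannot be paired with the wrong legend key:
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  scale_color_discrete(labels = c(
    "2seater" = "Two seater", "compact" = "Compact", "midsize" = "Sedan",
    "minivan" = "Minivan", "pickup" = "Truck", "subcompact" = "Subcompact",
    "suv" = "SUV"
  ))
```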
9\.10 Remove the legend from a plot
-----------------------------------
You want to remove the legend from a ggplot2 plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
theme(legend.position = "none")
```
#### Discussion
To suppress legends, add `theme()` to the end of your [ggplot2 code template](visualize-data.html#make-a-plot). Pass it the argument `legend.position = "none"`.
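If you only want to hide the legend for a single aesthetic, an alternative sketch (my own, using `guides()`, which accepts `"none"` in recent ggplot2 versions):
```
# Drops the color legend only; the size legend remains
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class, size = cty)) +
  geom_point() +
  guides(color = "none")
```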
9\.11 Change the appearance or look of a plot
---------------------------------------------
You want to change the appearance of the *non\-data* elements of your plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
theme_bw()
```
#### Discussion
The appearance of the non\-data elements of a ggplot2 plot is controlled by a *theme*. To give your plot a theme, add a theme function to your [ggplot2 code template](visualize-data.html#make-a-plot). ggplot2 comes with nine theme functions: `theme_grey()`, `theme_bw()`, `theme_linedraw()`, `theme_light()`, `theme_dark()`, `theme_minimal()`, `theme_classic()`, `theme_void()`, and `theme_test()`. More can be loaded from the [ggthemes package](https://jrnold.github.io/ggthemes/reference/index.html).
Use the `theme()` function to tweak individual elements of the active theme as in [Remove the legend from a plot](visualize-data.html#remove-the-legend-from-a-plot).
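For example, a sketch (my own) that combines a complete theme with a couple of `theme()` tweaks, using `element_text()` to style the axis text:
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  theme_bw() +
  theme(
    axis.text = element_text(size = 12),
    legend.position = "bottom"
  )
```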
9\.12 Change the colors used in a plot
--------------------------------------
You want to change the colors that your plot uses to display data.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
scale_color_brewer(palette = "Spectral")
```
#### Discussion
ggplot2 uses colors for both the `color` and `fill` aesthetics. Determine which one you are using, and then add a new color or fill scale with ggplot2's [scale system](https://r4ds.had.co.nz/graphics-for-communication.html#scales). See the [ggplot2 website](https://ggplot2.tidyverse.org/reference/index.html) for a list of available scales. Additional scales are available in the [ggthemes package](https://jrnold.github.io/ggthemes/reference/index.html).
`scale_color_brewer()`/`scale_fill_brewer()` (for discrete data) and `scale_color_distiller()`/`scale_fill_distiller()` (for continuous data) are popular choices for pleasing color schemes. Each takes a `palette` argument that can be set to the name of any of the ColorBrewer palettes, which you can display with:
```
# install.packages("RColorBrewer")
library(RColorBrewer)
display.brewer.all()
```
`scale_color_viridis_d()`/`scale_fill_viridis_d()` (for discrete data) and `scale_color_viridis_c()`/`scale_fill_viridis_c()` (for continuous data) are another popular choice. Each takes an `option` argument that can be set to the name of a viridis palette, such as `"viridis"`, `"magma"`, `"plasma"`, `"inferno"`, or `"cividis"`.
The [`ggsci`](https://nanx.me/ggsci/) and [`wesanderson`](https://github.com/karthik/wesanderson) packages also provide popular color scales.
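For example, a sketch (my own) of a viridis scale applied to a continuous variable, where the `_c` suffix matches the continuous data being mapped to color:
```
# cty is continuous, so the continuous viridis scale is used
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = cty)) +
  geom_point() +
  scale_color_viridis_c(option = "plasma")
```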
9\.13 Change the range of the x or y axis
-----------------------------------------
You want to change the range of the x or y axes, effectively zooming in or out on your data.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth() +
coord_cartesian(xlim = c(5, 7), ylim = c(20, 30))
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
#### Discussion
Each ggplot2 plot uses a coordinate system, which can be set by adding a `coord_` function to the [ggplot2 code template](visualize-data.html#make-a-plot). If you do not add a `coord_` function, ggplot2 will use `coord_cartesian()`, which provides a Cartesian coordinate system.
To change the range of a plot, set one or both of the `xlim` and `ylim` arguments of its `coord_` function. Each argument takes a vector of length two: the minimum and maximum values for the range of the x or y axis.
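One caveat, added here as a hedged note of my own: limits set in `coord_cartesian()` zoom the plot without discarding data, whereas limits set on a scale (for example with `xlim()`) drop out-of-range rows before any statistics, such as the smooth, are computed.
```
# Zoom only: the smooth is still fit to the full data set
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point() +
  geom_smooth() +
  coord_cartesian(xlim = c(5, 7), ylim = c(20, 30))

# Filter first: rows with displ outside 5-7 are removed before the smooth is fit
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point() +
  geom_smooth() +
  xlim(5, 7)
```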
9\.14 Use polar coordinates
---------------------------
You want to make a polar plot, i.e. one that uses polar coordinates.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
coord_polar(theta = "x")
```
#### Discussion
To draw a plot in a polar coordinate system, add `coord_polar()` to the [ggplot2 code template](visualize-data.html#make-a-plot). Set the `theta` argument to `"x"` or `"y"` to indicate which axis should be mapped to the angle of the polar coordinate system (defaults to `"x"`).
9\.15 Flip the x and y axes
---------------------------
You want to flip the plot along its diagonal, swapping the locations of the x and y axes.
#### Solution
```
ggplot(data = diamonds, mapping = aes(x = cut)) +
geom_bar() +
coord_flip()
```
#### Discussion
To flip a plot, add `coord_flip()` to the [ggplot2 code template](visualize-data.html#make-a-plot). This swaps the locations of the x and y axes while keeping all text, such as axis labels, in its normal, readable orientation.
`coord_flip()` has a different effect from merely exchanging the x and y variables. For example, you can use it to make bars and boxplots extend horizontally instead of vertically.
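For example, a sketch (my own) of horizontal boxplots made by flipping a vertical boxplot:
```
# Boxplots extend horizontally and the class labels stay readable on the y axis
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
  geom_boxplot() +
  coord_flip()
```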
| R Programming |
rstudio-education.github.io | https://rstudio-education.github.io/tidyverse-cookbook/visualize-data.html |
9 Visualize Data
================
---
1. [Make a plot](visualize-data.html#make-a-plot)
2. [Make a specific “type” of plot, like a line plot, bar chart, etc.](visualize-data.html#make-a-specific-type-of-plot-like-a-line-plot-bar-chart-etc.)
3. [Add a variable to a plot as a color, shape, etc.](visualize-data.html#add-a-variable-to-a-plot-as-a-color-shape-etc.)
4. [Add multiple layers to a plot](visualize-data.html#add-multiple-layers-to-a-plot)
5. [Subdivide a plot into multiple subplots](visualize-data.html#subdivide-a-plot-into-multiple-subplots)
6. [Add a title, subtitle, or caption to a plot](visualize-data.html#add-a-title-subtitle-or-caption-to-a-plot)
7. [Change the axis labels of a plot](visualize-data.html#change-the-axis-labels-of-a-plot)
8. [Change the title of a legend](visualize-data.html#change-the-title-of-a-legend)
9. [Change the names within a legend](visualize-data.html#change-the-names-within-a-legend)
10. [Remove the legend from a plot](visualize-data.html#remove-the-legend-from-a-plot)
11. [Change the appearance or look of a plot](visualize-data.html#change-the-appearance-or-look-of-a-plot)
12. [Change the colors used in a plot](visualize-data.html#change-the-colors-used-in-a-plot)
13. [Change the range of the x or y axis](visualize-data.html#change-the-range-of-the-x-or-y-axis)
14. [Use polar coordinates](visualize-data.html#use-polar-coordinates)
15. [Flip the x and y axes](visualize-data.html#flip-the-x-and-y-axes)
---
What you should know before you begin
-------------------------------------
The ggplot2 package provides the most important tidyverse functions for making plots. These functions let you use the [layered grammar of graphics](https://r4ds.had.co.nz/data-visualisation.html#the-layered-grammar-of-graphics) to describe the plot you wish to make. In theory, you can describe *any* plot with the layered grammar of graphics.
To use ggplot2, you build and then extend the graph template described in [Make a plot](visualize-data.html#make-a-plot). ggplot is loaded when you run `library(tidyverse)`.
9\.1 Make a plot
----------------
You want to make a plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point()
```
#### Discussion
Use the same code template to begin *any* plot:
```
ggplot(data = <DATA>, mapping = aes(x = <X_VARIABLE>, y = <Y_VARIABLE>)) +
<GEOM_FUNCTION>()
```
Replace `<DATA>`, `<X_VARIABLE>`, `<Y_VARIABLE>` and `<GEOM_FUNCTION>` with the data set, column names within that data set, and [geom\_function](visualize-data.html#make-a-specific-type-of-plot-like-a-line-plot-bar-chart-etc.) that you’d like to use in your plot. Within ggplot2 code, you should *not* surround column names with quotes.
The `geom_point()` function above will create a scatterplot. Notice that some type of plots, like a histogram, will not require a y variable, e.g.
```
ggplot(data = mpg, mapping = aes(x = cty)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
When using the ggplot2 template, be sure to put each `+` at the *end* of the line it follows and never at the beginning of the line that follows it.
9\.2 Make a specific “type” of plot, like a line plot, bar chart, etc.
----------------------------------------------------------------------
You want to make a plot that is not a scatterplot, as in the [Make a plot](visualize-data.html#make-a-plot) recipe.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_line()
```
#### Discussion
The `<GEOM_FUNCTION>` in the [ggplot2 code template](visualize-data.html#make-a-plot) determines which type of plot R will draw. So use the geom function that corresponds to the type of plot you have in mind. A complete list of geom functions can be found in the [ggplot2 cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf) or at the [ggplot2 package website](https://ggplot2.tidyverse.org/reference/index.html).
Geom is short for *geometric object*. It refers to the type of visual mark your plot uses to represent data.
9\.3 Add a variable to a plot as a color, shape, etc.
-----------------------------------------------------
You want to add a new variable your plot that will appear as a color, shape, line type or some other visual element accompanied by a legend.
#### Solution
```
ggplot(
data = mpg,
mapping = aes(x = displ, y = hwy, color = class, size = cty)
) +
geom_point()
```
#### Discussion
In the language of ggplot2, visual elements, like color, are called *aesthetics*. Use the `mapping = aes()` argument of `ggplot` to map aesthetics to variables in the data set, like `class` and `cty`, in this example. Separate each new mapping with a comma. ggplot2 will automatically add a legend to the plot that explains which elements of the aesthetic correspond to which values of the variable.
Different types of plots support different types of aesthetics. To determine which aesthetics you may use, read the aesthetics section of the help page of the plot’s geom function, e.g. `?geom_point`.
Aesthetic mappings must be defined inside of the `aes()` helper function, which preprocesses them to pass to ggplot2\.
9\.4 Add multiple layers to a plot
----------------------------------
You want to add multiple layers of data to your plot. For example, you want to overlay a trend line on top of a scatterplot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
#### Discussion
To add new layers to a plot, add new geom functions to the end of the plot’s [ggplot2 code template](visualize-data.html#make-a-plot). Each new geom will be drawn *on top* of the previous geoms in the same coordinate space.
Be sure to put each `+` at the *end* of the line it follows and never at the beginning of the line that follows it.
When you use multiple geoms, data and mappings defined in `ggplot()` are applied *globally* to each geom. You may also define data and mappings within each geom function. These will be applied locally to only the geom in which they are defined. For that geom, local mappings will override and extend global ones, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point(mapping = aes(color = class)) +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
9\.5 Subdivide a plot into multiple subplots
--------------------------------------------
You want to make a collection of subplots that each display a different part of the data set.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
facet_wrap(~class)
```
#### Discussion
In ggplot2 vocabulary, subplots are called *facets*. Each subplot shares the same x and y variables, which facilitates comparison.
Use one of two functions to facet your plot.`facet_wrap()` creates a separate subplot for each value of a variable. Pass the variable to `facet_wrap()` preceded by a `~`.
`facet_grid()` creates a grid of facets based on the combinations of values of *two* variables. Pass `facet_grid()` two variables separated by a `~`, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
facet_grid(cyl ~ fl)
```
9\.6 Add a title, subtitle, or caption to a plot
------------------------------------------------
You want to add a title and subtitle above your plot, and/or a caption below it.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
labs(
title = "Engine size vs. Fuel efficiency",
subtitle = "Fuel efficiency estimated for highway driving",
caption = "Data from fueleconomy.gov"
)
```
#### Discussion
The `labs()` function adds labels to ggplot2 plots. Each argument of `labs()` is optional. Labels should be provided as character strings.
9\.7 Change the axis labels of a plot
-------------------------------------
You want to change the names of the x and y axes.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
labs(
title = "Engine size vs. Fuel efficiency",
x = "Engine Displacement (L)",
y = "Fuel Efficiency (MPG)"
)
```
#### Discussion
By default, ggplot2 labels the x and y axes with the names of the variables mapped to the axes. Override this with the `x` and `y` arguments of `labs()`.
9\.8 Change the title of a legend
---------------------------------
You want to change the title of a legend.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
labs(color = "Type of car")
```
#### Discussion
ggplot2 creates a legend for each aesthetic except for `x`, `y`, and `group`. By default, ggplot2 labels the legend with the name of the variable mapped to the aesthetic. To override this, add the `labs()` function to your template and pass a character string to the aesthetic whose legend you want to relabel.
Use a `\n` to place a line break in the title, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
labs(color = "Type of\ncar")
```
9\.9 Change the names within a legend
-------------------------------------
You want to change the value labels that appear within a legend.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
scale_color_discrete(labels = c("Two seater", "Compact" ,"Sedan", "Minivan", "Truck", "Subcompact", "SUV"))
```
#### Discussion
To change the labels within a legend, you must interact with ggplot2’s [scale system](https://r4ds.had.co.nz/graphics-for-communication.html#scales). To do this:
1. Determine which scale the plot will use. This will be a function whose name is composed of `scale_`, followed by the name of an aesthetic, followed by `_`, followed by an identifier, often `discrete` or `continuous`.
2. Explicitly add the scale to your code template
3. Set the labels argument of the scale to a vector of character strings, one for each label in the legend.
9\.10 Remove the legend from a plot
-----------------------------------
You want to remove the legend from a ggplot2 plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
theme(legend.position = "none")
```
#### Discussion
To suppress legends, add `theme()` to the end of your [ggplot2 code template](visualize-data.html#make-a-plot). Pass it the argument `legend.position = "none"`.
9\.11 Change the appearance or look of a plot
---------------------------------------------
You want to change the appearance of the *non\-data* elements of your plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
theme_bw()
```
#### Discussion
The appearance of the non\-data elements of a ggplot2 plot are controlled by a *theme*. To give your plot a theme, add a theme function to your [ggplot2 code template](visualize-data.html#make-a-plot). ggplot2 comes with nine theme functions, `theme_grey()`, `theme_bw()`, `theme_linedraw()`, `theme_light()`, `theme_dark()`, `theme_minimal()`, `theme_classic()`, `theme_void()`, `theme_test()`, and more can be loaded from the [ggthemes package](https://jrnold.github.io/ggthemes/reference/index.html).
Use the `theme()` function to tweak individual elements of the active theme as in [Remove the legend from a plot](visualize-data.html#remove-the-legend-from-a-plot).
9\.12 Change the colors used in a plot
--------------------------------------
You want to change the colors that you plot uses to display data.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
scale_color_brewer(palette = "Spectral")
```
#### Discussion
ggplot2 uses colors for both the `color` and `fill` aesthetics. Determine which one you are using, and then add a new color of fill scale with ggplot2’s [scale system](https://r4ds.had.co.nz/graphics-for-communication.html#scales). See the [ggplot2 website](https://ggplot2.tidyverse.org/reference/index.html) for a list of available scales. Additional scales are available in the [ggthemes package](https://jrnold.github.io/ggthemes/reference/index.html).
`scale_color_brewer()`/`scale_fill_brewer()` (for discrete data) and `scale_color_distill()`/`scale_fill_distill()` (for continuous data) are popular choices for pleasing color schemes. Each takes a `palette` argument that can be set to the name of any of the scales displayed below
```
# install.packages(RColorBrewer)
library(RColorBrewer)
display.brewer.all()
```
`scale_color_viridis_d()`/`scale_fill_viridis_d()` (for discrete data) and `scale_color_viridis()_c`/`scale_fill_viridis()_c` (for continuous data) are another popular choice for continuous data. Each takes an `option` argument that can be set to one of the palette names below.
The [`ggsci`](https://nanx.me/ggsci/) and [`wesanderson`](https://github.com/karthik/wesanderson) packages also provide popular color scales.
9\.13 Change the range of the x or y axis
-----------------------------------------
You want to change the range of the x or y axes, effectively zooming in or out on your data.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth() +
coord_cartesian(xlim = c(5, 7), ylim = c(20, 30))
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
#### Discussion
Each ggplot2 plot uses a coordinate system, which can be set by adding a `coord_` function to the [ggplot2 code template](visualize-data.html#make-a-plot). If you do add a `coord_` function, ggplot2 will use `coord_cartesian()` whihc provides a cartesian coordinate system.
To change the range of a plot, set one or both of the `xlim` and `ylim` arguments of its `coord_` function. Each argument taxes a vector of length two: the minimum and maximum values for the range of the x or y axis.
9\.14 Use polar coordinates
---------------------------
You want to make a polar plot, e.g. one that uses polar coordinates.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
coord_polar(theta = "x")
```
#### Discussion
To draw a plot in a polar coordinate system, add `coord_polar()` to the [ggplot2 code template](visualize-data.html#make-a-plot). Set the `theta` argument to `"x"` or `"y"` to indicate which axis should be mapped to the angle of the polar coordinate system (defaults to `"x"`).
9\.15 Flip the x and y axes
---------------------------
You want to flip the plot along its diagonal, swapping the locations of the x and y axes.
#### Solution
```
ggplot(data = diamonds, mapping = aes(x = cut)) +
geom_bar() +
coord_flip()
```
#### Discussion
To flip a plot, add `coord_flip()` to the [ggplot2 code template](visualize-data.html#make-a-plot). This swaps the locations of the x and y axes and keeps the correct orientation of all words.
`coord_flip()` has a different effect from merely exchanging the x and y variables. For example, you can use it to make bars and boxplots extend horizontally instead of vertically.
What you should know before you begin
-------------------------------------
The ggplot2 package provides the most important tidyverse functions for making plots. These functions let you use the [layered grammar of graphics](https://r4ds.had.co.nz/data-visualisation.html#the-layered-grammar-of-graphics) to describe the plot you wish to make. In theory, you can describe *any* plot with the layered grammar of graphics.
To use ggplot2, you build and then extend the graph template described in [Make a plot](visualize-data.html#make-a-plot). ggplot is loaded when you run `library(tidyverse)`.
9\.1 Make a plot
----------------
You want to make a plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point()
```
#### Discussion
Use the same code template to begin *any* plot:
```
ggplot(data = <DATA>, mapping = aes(x = <X_VARIABLE>, y = <Y_VARIABLE>)) +
<GEOM_FUNCTION>()
```
Replace `<DATA>`, `<X_VARIABLE>`, `<Y_VARIABLE>` and `<GEOM_FUNCTION>` with the data set, column names within that data set, and [geom\_function](visualize-data.html#make-a-specific-type-of-plot-like-a-line-plot-bar-chart-etc.) that you’d like to use in your plot. Within ggplot2 code, you should *not* surround column names with quotes.
The `geom_point()` function above will create a scatterplot. Notice that some type of plots, like a histogram, will not require a y variable, e.g.
```
ggplot(data = mpg, mapping = aes(x = cty)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
When using the ggplot2 template, be sure to put each `+` at the *end* of the line it follows and never at the beginning of the line that follows it.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point()
```
#### Discussion
Use the same code template to begin *any* plot:
```
ggplot(data = <DATA>, mapping = aes(x = <X_VARIABLE>, y = <Y_VARIABLE>)) +
<GEOM_FUNCTION>()
```
Replace `<DATA>`, `<X_VARIABLE>`, `<Y_VARIABLE>` and `<GEOM_FUNCTION>` with the data set, column names within that data set, and [geom\_function](visualize-data.html#make-a-specific-type-of-plot-like-a-line-plot-bar-chart-etc.) that you’d like to use in your plot. Within ggplot2 code, you should *not* surround column names with quotes.
The `geom_point()` function above will create a scatterplot. Notice that some type of plots, like a histogram, will not require a y variable, e.g.
```
ggplot(data = mpg, mapping = aes(x = cty)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
When using the ggplot2 template, be sure to put each `+` at the *end* of the line it follows and never at the beginning of the line that follows it.
9\.2 Make a specific “type” of plot, like a line plot, bar chart, etc.
----------------------------------------------------------------------
You want to make a plot that is not a scatterplot, as in the [Make a plot](visualize-data.html#make-a-plot) recipe.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_line()
```
#### Discussion
The `<GEOM_FUNCTION>` in the [ggplot2 code template](visualize-data.html#make-a-plot) determines which type of plot R will draw. So use the geom function that corresponds to the type of plot you have in mind. A complete list of geom functions can be found in the [ggplot2 cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf) or at the [ggplot2 package website](https://ggplot2.tidyverse.org/reference/index.html).
Geom is short for *geometric object*. It refers to the type of visual mark your plot uses to represent data.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_line()
```
#### Discussion
The `<GEOM_FUNCTION>` in the [ggplot2 code template](visualize-data.html#make-a-plot) determines which type of plot R will draw. So use the geom function that corresponds to the type of plot you have in mind. A complete list of geom functions can be found in the [ggplot2 cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf) or at the [ggplot2 package website](https://ggplot2.tidyverse.org/reference/index.html).
Geom is short for *geometric object*. It refers to the type of visual mark your plot uses to represent data.
9\.3 Add a variable to a plot as a color, shape, etc.
-----------------------------------------------------
You want to add a new variable your plot that will appear as a color, shape, line type or some other visual element accompanied by a legend.
#### Solution
```
ggplot(
data = mpg,
mapping = aes(x = displ, y = hwy, color = class, size = cty)
) +
geom_point()
```
#### Discussion
In the language of ggplot2, visual elements, like color, are called *aesthetics*. Use the `mapping = aes()` argument of `ggplot` to map aesthetics to variables in the data set, like `class` and `cty`, in this example. Separate each new mapping with a comma. ggplot2 will automatically add a legend to the plot that explains which elements of the aesthetic correspond to which values of the variable.
Different types of plots support different types of aesthetics. To determine which aesthetics you may use, read the aesthetics section of the help page of the plot’s geom function, e.g. `?geom_point`.
Aesthetic mappings must be defined inside the `aes()` helper function, which preprocesses them before passing them to ggplot2\.
9\.4 Add multiple layers to a plot
----------------------------------
You want to add multiple layers of data to your plot. For example, you want to overlay a trend line on top of a scatterplot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
#### Discussion
To add new layers to a plot, add new geom functions to the end of the plot’s [ggplot2 code template](visualize-data.html#make-a-plot). Each new geom will be drawn *on top* of the previous geoms in the same coordinate space.
Be sure to put each `+` at the *end* of the line it follows and never at the beginning of the line that follows it.
When you use multiple geoms, data and mappings defined in `ggplot()` are applied *globally* to each geom. You may also define data and mappings within each geom function. These will be applied locally to only the geom in which they are defined. For that geom, local mappings will override and extend global ones, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point(mapping = aes(color = class)) +
geom_smooth()
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
9\.5 Subdivide a plot into multiple subplots
--------------------------------------------
You want to make a collection of subplots that each display a different part of the data set.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
facet_wrap(~class)
```
#### Discussion
In ggplot2 vocabulary, subplots are called *facets*. Each subplot shares the same x and y variables, which facilitates comparison.
Use one of two functions to facet your plot. `facet_wrap()` creates a separate subplot for each value of a variable. Pass the variable to `facet_wrap()` preceded by a `~`.
`facet_grid()` creates a grid of facets based on the combinations of values of *two* variables. Pass `facet_grid()` two variables separated by a `~`, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
facet_grid(cyl ~ fl)
```
9\.6 Add a title, subtitle, or caption to a plot
------------------------------------------------
You want to add a title and subtitle above your plot, and/or a caption below it.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
labs(
title = "Engine size vs. Fuel efficiency",
subtitle = "Fuel efficiency estimated for highway driving",
caption = "Data from fueleconomy.gov"
)
```
#### Discussion
The `labs()` function adds labels to ggplot2 plots. Each argument of `labs()` is optional. Labels should be provided as character strings.
9\.7 Change the axis labels of a plot
-------------------------------------
You want to change the names of the x and y axes.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
labs(
title = "Engine size vs. Fuel efficiency",
x = "Engine Displacement (L)",
y = "Fuel Efficiency (MPG)"
)
```
#### Discussion
By default, ggplot2 labels the x and y axes with the names of the variables mapped to the axes. Override this with the `x` and `y` arguments of `labs()`.
9\.8 Change the title of a legend
---------------------------------
You want to change the title of a legend.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
labs(color = "Type of car")
```
#### Discussion
ggplot2 creates a legend for each aesthetic except for `x`, `y`, and `group`. By default, ggplot2 labels the legend with the name of the variable mapped to the aesthetic. To override this, add the `labs()` function to your template and pass a character string to the aesthetic whose legend you want to relabel.
Use a `\n` to place a line break in the title, e.g.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
labs(color = "Type of\ncar")
```
9\.9 Change the names within a legend
-------------------------------------
You want to change the value labels that appear within a legend.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
scale_color_discrete(labels = c("Two seater", "Compact" ,"Sedan", "Minivan", "Truck", "Subcompact", "SUV"))
```
#### Discussion
To change the labels within a legend, you must interact with ggplot2’s [scale system](https://r4ds.had.co.nz/graphics-for-communication.html#scales). To do this:
1. Determine which scale the plot will use. This will be a function whose name is composed of `scale_`, followed by the name of an aesthetic, followed by `_`, followed by an identifier, often `discrete` or `continuous`.
2. Explicitly add the scale to your code template.
3. Set the labels argument of the scale to a vector of character strings, one for each label in the legend.
9\.10 Remove the legend from a plot
-----------------------------------
You want to remove the legend from a ggplot2 plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
theme(legend.position = "none")
```
#### Discussion
To suppress legends, add `theme()` to the end of your [ggplot2 code template](visualize-data.html#make-a-plot). Pass it the argument `legend.position = "none"`.
9\.11 Change the appearance or look of a plot
---------------------------------------------
You want to change the appearance of the *non\-data* elements of your plot.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
theme_bw()
```
#### Discussion
The appearance of the non\-data elements of a ggplot2 plot is controlled by a *theme*. To give your plot a theme, add a theme function to your [ggplot2 code template](visualize-data.html#make-a-plot). ggplot2 comes with nine theme functions: `theme_grey()`, `theme_bw()`, `theme_linedraw()`, `theme_light()`, `theme_dark()`, `theme_minimal()`, `theme_classic()`, `theme_void()`, and `theme_test()`; more can be loaded from the [ggthemes package](https://jrnold.github.io/ggthemes/reference/index.html).
Use the `theme()` function to tweak individual elements of the active theme as in [Remove the legend from a plot](visualize-data.html#remove-the-legend-from-a-plot).
9\.12 Change the colors used in a plot
--------------------------------------
You want to change the colors that your plot uses to display data.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
geom_point() +
scale_color_brewer(palette = "Spectral")
```
#### Discussion
ggplot2 uses colors for both the `color` and `fill` aesthetics. Determine which one you are using, and then add a new color or fill scale with ggplot2’s [scale system](https://r4ds.had.co.nz/graphics-for-communication.html#scales). See the [ggplot2 website](https://ggplot2.tidyverse.org/reference/index.html) for a list of available scales. Additional scales are available in the [ggthemes package](https://jrnold.github.io/ggthemes/reference/index.html).
`scale_color_brewer()`/`scale_fill_brewer()` (for discrete data) and `scale_color_distiller()`/`scale_fill_distiller()` (for continuous data) are popular choices for pleasing color schemes. Each takes a `palette` argument that can be set to the name of any of the palettes displayed below.
```
# install.packages("RColorBrewer")
library(RColorBrewer)
display.brewer.all()
```
`scale_color_viridis_d()`/`scale_fill_viridis_d()` (for discrete data) and `scale_color_viridis_c()`/`scale_fill_viridis_c()` (for continuous data) are another popular choice. Each takes an `option` argument that can be set to one of the viridis palette names, e.g. `"viridis"`, `"magma"`, `"plasma"`, `"inferno"`, or `"cividis"`.
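A minimal sketch of both variants, using the same `mpg` data as above:
```
# Discrete viridis scale mapped to the color aesthetic
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  scale_color_viridis_d(option = "plasma")

# Continuous viridis scale, here with color mapped to cty
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = cty)) +
  geom_point() +
  scale_color_viridis_c(option = "magma")
```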
The [`ggsci`](https://nanx.me/ggsci/) and [`wesanderson`](https://github.com/karthik/wesanderson) packages also provide popular color scales.
9\.13 Change the range of the x or y axis
-----------------------------------------
You want to change the range of the x or y axes, effectively zooming in or out on your data.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth() +
coord_cartesian(xlim = c(5, 7), ylim = c(20, 30))
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
#### Discussion
Each ggplot2 plot uses a coordinate system, which can be set by adding a `coord_` function to the [ggplot2 code template](visualize-data.html#make-a-plot). If you do not add a `coord_` function, ggplot2 will use `coord_cartesian()`, which provides a Cartesian coordinate system.
To change the range of a plot, set one or both of the `xlim` and `ylim` arguments of its `coord_` function. Each argument takes a vector of length two: the minimum and maximum values for the range of the x or y axis.
9\.14 Use polar coordinates
---------------------------
You want to make a polar plot, i.e. one that uses polar coordinates.
#### Solution
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
coord_polar(theta = "x")
```
#### Discussion
To draw a plot in a polar coordinate system, add `coord_polar()` to the [ggplot2 code template](visualize-data.html#make-a-plot). Set the `theta` argument to `"x"` or `"y"` to indicate which axis should be mapped to the angle of the polar coordinate system (defaults to `"x"`).
9\.15 Flip the x and y axes
---------------------------
You want to flip the plot along its diagonal, swapping the locations of the x and y axes.
#### Solution
```
ggplot(data = diamonds, mapping = aes(x = cut)) +
geom_bar() +
coord_flip()
```
#### Discussion
To flip a plot, add `coord_flip()` to the [ggplot2 code template](visualize-data.html#make-a-plot). This swaps the locations of the x and y axes while keeping all text in its correct orientation.
`coord_flip()` has a different effect from merely exchanging the x and y variables. For example, you can use it to make bars and boxplots extend horizontally instead of vertically.
| R Programming |
coreysparks.github.io | https://coreysparks.github.io/appdem_Book/rintro.html | Social Science |
|
coreysparks.github.io | https://coreysparks.github.io/appdem_Book/survey.html | Social Science |
|
coreysparks.github.io | https://coreysparks.github.io/appdem_Book/macro.html | Social Science |
|
coreysparks.github.io | https://coreysparks.github.io/appdem_Book/micro.html | Social Science |
|
bluefoxr.github.io | https://bluefoxr.github.io/COINrDoc/index.html |
Chapter 1 Introduction
======================
**IMPORTANT INFORMATION**
*As of July 5th 2022 COINr has had a major upgrade from v0\.6\.1 to v1\.0\.0\. This was to fix a number of underlying issues, introduce many new features and harmonise syntax. The syntax of v1\.0\.0 is quite different from previous versions \- as a result the code and examples in this book will for the most part not work with COINr 1\.0\.0 and higher versions.*
*The material and guidance in this e\-book has been largely replaced with COINr’s [new website](https://bluefoxr.github.io/COINr/). If you want to use the latest COINr version, please now go there for documentation.*
*If you prefer to continue with the older COINr versions, you can install the “legacy” version of COINr 0\.6\.1 as a separate package called “COINr6”. This can be loaded to allow any code written with older versions of COINr to work without any problems. COINr6 must be installed via GitHub (see below). I will leave this book here for users of COINr6 to follow.*
*Note that in the rest of the book, “COINr” now implicitly refers to “COINr6”. I have only changed the `library()` commands to ensure the code works.*
*To learn more about these changes and why they were necessary, see [here](https://bluefoxr.github.io/COINr/articles/v1.html).*
1\.1 Background
---------------
This documentation describes in detail the COINr package, which is an open source R package for developing and analysing composite indicators, developed by the European Commission’s [Joint Research Centre](https://knowledge4policy.ec.europa.eu/composite-indicators/about_en). In fact, this is slightly more than technical documentation, and also gives some tips on composite indicator development along the way.
A *composite indicator* is an aggregation of indicators which aims to measure a particular concept. Composite indicators are typically used to measure complex and multidimensional concepts which are difficult to define, and cannot be measured directly. Examples include innovation, human development, environmental performance, and so on. Composite indicators are closely related to scoreboards, which are also groups of indicators aiming to capture a concept. However, scoreboards do not aggregate indicator values. Composite indicators also usually use a hierarchical structure which breaks the concept down into elements, sometimes known as sub\-pillars, pillars, sub\-indexes, dimensions, and so on.
You can find the latest version of COINr at its [GitHub repo](https://github.com/bluefoxr/COINr), and also the source code for this documentation [here](https://github.com/bluefoxr/COINrDoc).
**NOTE** as discussed above, this book now refers to the “COINr6” package, which has its repo [here](https://github.com/bluefoxr/COINr6).
1\.2 Installation
-----------------
As discussed above, this book now refers to the COINr6 package. While the latest COINr version is on CRAN, COINr6 can only be installed from GitHub, as follows:
```
devtools::install_github("bluefoxr/COINr6")
```
This should directly install the package from Github, without any other steps. You may be asked to update packages. This might not be strictly necessary, so you can also skip this step.
In case you do spot a bug, or have a suggestion, [open an issue](https://github.com/bluefoxr/COINr6/issues) on the repo.
1\.3 What does it do?
---------------------
COINr is the first fully\-flexible development and analysis environment for composite indicators and scoreboards. It is more than simply a collection of useful functions \- it wraps up all data, methodology and analysis of a composite indicator into a single object called a “COIN”. COINr functions are adapted to work with very few commands on COINs, but many also work on data frames. This leads to a fast and neat environment for building and analysing composite indicators.
Using COINs, i.e. the *COINrverse approach*, enables the full feature set of COINr, because some functions only work on COINs \- for example, exporting all results to Excel in one command, or running an uncertainty/sensitivity analysis. Indeed, COINr is built mainly for COINrverse use, but functions have been written with the flexibility to also accommodate independent use where possible. COINr also includes a number of accessory functions which are generally useful in the process of building/analysing composite indicators.
The main features can be summarised as features for *building*, features for *analysis* and features for *visualisation and presentation*. This list is not exhaustive and a nearly\-full list of functions is found in [A map of COINr](foundations.html#a-map-of-coinr). An actually\-full list can be found in the [COINr function documentation](https://cran.r-project.org/web/packages/COINr/COINr.pdf).
**Building features**:
* Flexible and fast development of composite indicators with no limits on aggregation levels, numbers of indicators, highly flexible set of methodological choices.
* Denomination by other indicators (including built in world denominators data set)
* Screening units by data requirements
* Imputation of missing data, by a variety of methods
* Data treatment using Winsorisation and nonlinear transformations
* Normalisation by more than ten methods, either for all indicators or for each individually
* Weighting using either manual weighting, PCA weights or correlation optimised weights. COINr also includes a re\-weighting app which explores the effects of weights on correlations.
* Aggregation of indicators using a variety of methods which can be different for each aggregation level.
**Analysis features:**
* Detailed indicator statistics, and data availability within aggregation groups
* Multivariate analysis, including quick functions for PCA, and detailed correlation analysis and visualisation tools
* Easy “what if” analysis \- very quickly checking the effects of adding and removing indicators, changing weights, methodological variations
* Full global uncertainty and sensitivity analysis which can check the impacts of uncertainties in weighting and many methodological choices
**Visualisation and presentation:**
* Statistical plots of indicators \- histograms, violin plots, dot plots, scatter plots and more, including interactive html plots and an app for quickly exploring indicator data
* Bar charts, stacked bar charts, maps, tables and radar charts for presenting indicator data and making comparisons between units
* Static and interactive correlation plots for visualising correlations between indicators and between aggregation levels
* An interactive app for visualising and presenting initial results
* Automatic generation of unit reports (e.g. country reports) using customisable R markdown templates
COINr also allows fast import from the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en) and fast export to Excel.
In short, COINr aims to allow composite indicators to be developed and prototyped very quickly and in a structured fashion, with the results immediately available and able to be explored interactively. Although it is built in R, it is a high\-level package that aims to make commands simple and intuitive, with the hard work performed behind the scenes; it is therefore also accessible to less experienced R users.
1\.4 How to use this manual
---------------------------
COINr has many features, and it is unlikely you will want to read this manual from cover to cover. Roughly speaking, the chapters of this manual represent the major functions and groups of functions present in COINr. If you are looking for guidance on a specific function, simply look for the relevant chapter, or search for the function using the search box. You can also use `?function_name` in R for help on each function.
If you are building a composite indicator, the following chapters should be useful in particular:
* [COINs: the currency of COINr](coins-the-currency-of-coinr.html#coins-the-currency-of-coinr)
* [Normalisation](appendix-building-a-composite-indicator-example.html#normalisation-1)
* [Aggregation](appendix-building-a-composite-indicator-example.html#aggregation-1)
* [Appendix: Building a Composite Indicator Example](appendix-building-a-composite-indicator-example.html#appendix-building-a-composite-indicator-example)
Then you may wish to see chapters on [Foundations](foundations.html#foundations), [Initial visualisation and analysis](initial-visualisation-and-analysis.html#initial-visualisation-and-analysis), [Denomination](appendix-building-a-composite-indicator-example.html#denomination-1), [Missing data and Imputation](missing-data-and-imputation.html#missing-data-and-imputation), [Data Treatment](appendix-building-a-composite-indicator-example.html#data-treatment-1) and [Weighting](weighting-1.html#weighting-2) depending on what you want to do.
If you are analysing a composite indicator (and this may also be part of the construction process), some relevant chapters are:
* [Initial visualisation and analysis](initial-visualisation-and-analysis.html#initial-visualisation-and-analysis)
* [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1)
* [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons)
* [Sensitivity analysis](sensitivity-analysis.html#sensitivity-analysis)
* [Appendix: Analysing a Composite Indicator Example](appendix-analysing-a-composite-indicator-example.html#appendix-analysing-a-composite-indicator-example)
It is however worth browsing through the contents to pick out any further relevant chapters and sections that might be of interest.
1\.5 Demo
---------
COINr is highly customisable, but equally allows composite indicator construction in a few commands. Taking the built\-in ASEM dataset, we can assemble a composite indicator in a few steps.
```
library(COINr6)
##
## Attaching package: 'COINr6'
## The following object is masked from 'package:stats':
##
## aggregate
# assemble basic composite indicator object from input data
ASEM <- assemble(IndData = ASEMIndData,
IndMeta = ASEMIndMeta,
AggMeta = ASEMAggMeta)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
```
Here, we have loaded the COINr package, and assembled a so\-called “COIN”, which is a structured object used universally within COINr. It contains a complete record of all data sets, parameters, methodological choices, analysis and results generated during the construction and analysis process. A COIN is initially assembled by supplying a table of indicator data, a table of indicator metadata (which also specifies the structure of the index), and a table of aggregation metadata. More details can be found in the chapter on [COINs: the currency of COINr](coins-the-currency-of-coinr.html#coins-the-currency-of-coinr).
Currently, the COIN (composite indicator) only contains the input data set and other metadata. To actually get the aggregated results (i.e. an index), we have to denominate, normalise and aggregate it. These operations are performed by dedicated functions in COINr.
```
# denominate data using specifications in ASEMIndMeta
ASEM <- denominate(ASEM, dset = "Raw")
# normalise data
ASEM <- normalise(ASEM, dset = "Denominated")
# Aggregate normalised data
ASEM <- aggregate(ASEM, dset = "Normalised")
```
Of course, there are many ways to aggregate data, and other steps could be performed before aggregating, such as (multivariate) analysis, data treatment and so forth. Here, we have simply taken three basic steps.
Each of the functions generates a new data set – here, respectively “Denominated”, “Normalised” and “Aggregated” data sets which are added to and stored within the COIN. Although in the example here the default specifications have been used, the functions have many options, and can be directed to operate on any of the data sets within the COIN. Details on this are left to later chapters in this book.
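As a quick sanity check, the data sets accumulated so far can be listed by inspecting the `.$Data` slot of the COIN (the exact ordering of the output is indicative, based on the data sets created above):
```
# Data sets generated so far are stored inside the COIN under .$Data
names(ASEM$Data)
# Should list something like: "Raw", "Denominated", "Normalised", "Aggregated"
```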
At this point, let us examine some of the outputs in the COIN. First, we may want to see the index structure:
```
plotframework(ASEM)
```
In the online version of this documentation, this plot appears as an interactive “HTML widget” which can be embedded into HTML documents. COINr leverages existing R packages, and interactive graphics (like the framework plot above) are generated using bindings to the plotly package, which is based on Javascript libraries. Static graphics are largely produced by the ggplot2 package.
The framework plot is called a “sunburst plot” and summarises the structure of the index and the relative weight of each indicator in each aggregation level and the index.
At any time we can get an overview of the contents of the COIN using the `print()` method:
```
# print() method is implicitly called by naming the object on the command line
ASEM
## --------------
## A COIN with...
## --------------
## Input:
## Units: 51 (AUT, BEL, BGR, ...)
## Indicators: 49 (Goods, Services, FDI, ...)
## Denominators: 4 (Den_Area, Den_Energy, Den_GDP, ...)
## Groups: 4 (Group_GDP, Group_GDPpc, Group_Pop, ...)
##
## Structure:
## Level 1: 49 indicators (Goods, Services, FDI, ...)
## Level 2: 8 groups (ConEcFin, Instit, P2P, ...)
## Level 3: 2 groups (Conn, Sust)
## Level 4: 1 groups (Index)
##
## Data sets:
## Raw (51 units)
## Denominated (51 units)
## Normalised (51 units)
## Aggregated (51 units)
```
We can also get indicator statistics in table format:
```
library(reactable)
ASEM <- getStats(ASEM, dset = "Raw")
ASEM$Analysis$Raw$StatTable |>
roundDF() |>
reactable::reactable(defaultPageSize = 5, highlight = TRUE, wrap = F)
```
Again, this table is presented as an interactive HTML table and is more easily explored using the online version of this book. In short, `getStats()` provides key statistics on each indicator, relating to data availability, outliers, summary statistics such as mean, median, skew, kurtosis, and so on. This can be applied to any data set within the COIN.
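Since the same function works on any data set stored in the COIN, the statistics could equally be generated for, say, the normalised data. A sketch reusing the call above with a different `dset` (the storage location is assumed by analogy with the `Raw` example above):
```
# Same call as above, but pointed at the normalised data set
ASEM <- getStats(ASEM, dset = "Normalised")
# By analogy with the Raw example, the table should appear under:
head(ASEM$Analysis$Normalised$StatTable)
```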
Indicator distributions can also be easily plotted:
```
plotIndDist(ASEM, type = "Violindot", icodes = "Political")
```
Finally, results can be visualised and explored in an interactive dashboard. This can be useful as a rapid prototype of the index. The app may even be hosted online, or can be used as the basis for a more customised indicator visualisation platform. Here, a basic screenshot is provided.
Figure 1\.1: Results dashboard screenshot
Clearly, this is not meant to replace a dedicated composite indicator visualisation site, but is a fast first step.
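The dashboard itself is launched with the `resultsDash()` function, listed in the function map in the next chapter. A minimal sketch, assuming the ASEM COIN built above (check `?resultsDash` for options such as which data set to display):
```
# Launch the interactive results dashboard (opens a Shiny app locally)
resultsDash(ASEM)
```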
| Social Science |
bluefoxr.github.io | https://bluefoxr.github.io/COINrDoc/foundations.html |
Chapter 2 Foundations
=====================
This chapter aims to explain some important principles about how COINr works, and explain some terminology. It also gives a map of the functions available in the package. If you want to get straight to using COINr, you could skip this chapter for now and look at some examples in the appendices or directly at the chapters relating to specific functions and operations (the next chapter onwards).
2\.1 Terminology
----------------
Let’s clearly define a few terms first to avoid confusion later on.
* An *indicator* is a variable which has an observed value for each unit. Indicators might be things like life expectancy, CO2 emissions, number of tertiary graduates, and so on.
* A *unit* is one of the entities that you are comparing using indicators. Often, units are countries, but they could also be regions, universities, individuals or even competing policy options (the latter is the realm of multicriteria decision analysis).
Together, indicators and units form the main input data frame for COINr (units as rows, and indicators as columns):
Figure 2\.1: Indicators and units
*Composite indicators* are created by aggregating multiple indicators into a single value. This may often be repeated more than once, following a hierarchical structure, such as the following figure:
Figure 2\.2: Example of composite indicator structure.
In this example, four pairs of indicators are each aggregated into a “sub\-pillar”, and sub\-pillars are aggregated into two “pillars”. Finally, the pillars are aggregated into a single index. Typically, the sub\-pillars and pillars represent specific sub\-concepts of the central concept. For example, in the [Global Innovation Index](https://www.globalinnovationindex.org/Home), which aims to measure innovation at the national level, pillars include things like “Institutions”, “Human Capital”, and “Infrastructure”, each of which are populated with indicators which measure those specific concepts. The idea is that a complex multidimensional concept is broken down into sub\-concepts which are easier to capture with a set of indicators. Additionally, the scores of pillars and sub\-pillars can be interesting in their own right.
Whether each of these groups is called a “pillar”, “sub\-index”, “dimension”, or something else, depends on the index and your own favourite words. For this reason, in COINr a more general terminology is adopted.
When a group of indicators is aggregated together to form a single sub\-pillar, pillar or otherwise, this is called an *aggregation group*. For example, the aggregation group of “SP 1” in the diagram above consists of “ind.01” and “ind.02”. The aggregation group of “Pillar 1” consists of “SP 1” and “SP 2”. And so on. The resulting value from aggregating an aggregation group is called an *aggregate* or *aggregate value*.
COINr also frequently refers to *aggregation levels* or equivalently just *levels*. This refers to the vertical position in the hierarchy shown above. In this example, therefore,
* Level 1 \= Indicators
* Level 2 \= Pillars
* Level 3 \= Sub\-indexes
* Level 4 \= Index
This means that the fictional index above has four levels in total. COINr can handle any number of levels and aggregation groups. The structure of the index is defined when the COIN is constructed \- see the following chapter for details.
2\.2 COINr syntax
-----------------
Many COINr functions follow a common syntax which is related to some of the terminology described above.
* `dset` always refers to the name of the data set that is found in `.$Data`. For example, `dset = "Normalised"` will return `.$Data$Normalised`. The only exception is `dset = "Denominators"`, which returns `.$Input$Denominators`. The argument `dset` is used in many COINr functions because they have to know which data set to operate on.
* `icodes` is a character vector of indicator or aggregate codes
* `aglev` is the aggregation level, or levels, to take the indicator data from. To understand how this works, see [Selecting data sets and indicators](helper-functions.html#selecting-data-sets-and-indicators).
* `out2` controls the output of many functions \- if set to “COIN” it will append the results back to the COIN that was input to the function, otherwise if “df” or “list” it will return the results as a data frame or list depending on the function.
In particular, the combination of `icodes` and `aglev` is useful because it allows to call sets of indicators inside aggregation groups very quickly. See [Selecting data sets and indicators](helper-functions.html#selecting-data-sets-and-indicators) for more details.
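As a rough sketch of how this looks in practice, `getIn()` accepts these arguments to pull out, for example, all indicators belonging to one aggregation group of the ASEM example COIN. Argument names follow the syntax conventions above, but check `?getIn` for the exact signature.
```
# A sketch only: select the raw data for all indicators (aglev = 1)
# belonging to the "Physical" aggregation group
out <- getIn(ASEM, dset = "Raw", icodes = "Physical", aglev = 1)
```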
2\.3 A map of COINr
-------------------
COINr consists of many functions that do a number of different types of operations. Here is a summary.
### 2\.3\.1 Construction
The following functions are the main functions for building a composite indicator in COINr.
| Function | Description |
| --- | --- |
| `assemble()` | Assembles indicator data/metadata into a COIN |
| `checkData()` | Data availability check and unit screening |
| `denominate()` | Denominate (divide) indicators by other indicators |
| `impute()` | Impute missing data using various methods |
| `treat()` | Treat outliers with Winsorisation and transformations |
| `normalise()` | Normalise data using various methods |
| `aggregate()` | Aggregate indicators into hierarchical levels, up to index |
| `regen()` | Regenerates COIN results using specifications stored in `.$Method` |
### 2\.3\.2 Visualisation and presenting
These functions are for visualising and presenting data and results
| Function | Description |
| --- | --- |
| `iplotBar()` | Interactive bar chart for any indicator, includes stacked bar charts |
| `iplotCorr()` | Interactive correlation heatmap |
| `iplotIndDist()` | Interactive indicator distribution plots for a single indicator |
| `iplotIndDist2()` | Interactive indicator distribution plots for two indicators simultaneously |
| `iplotMap()` | Interactive choropleth map for any indicator (only works for countries) |
| `iplotRadar()` | Interactive radar chart for specified unit(s) and specified indicators |
| `iplotTable()` | Interactive results table with conditional formatting |
| `plotCorr()` | Correlation heatmap between any levels/groups of the index |
| `plotframework()` | Interactive sunburst plot visualising the indicator framework |
| `plotIndDist()` | Static plot of distributions for indicators and groups of indicators |
| `plotIndDot()` | Dot plot of a single indicator with possibility to label units |
| `plotSA()` | Plot sensitivity analysis results (sensitivity indices) |
| `plotSARanks()` | Plot confidence intervals on ranks, following an uncertainty/sensitivity analysis |
| `colourTable()` | Conditionally\-formatted interactive HTML table, given a data frame |
| `getResults()` | Gets quick results summary tables |
| `getStrengthNWeak()` | Gets strengths and weaknesses (N top ranked indicators) of a selected unit |
| `getUnitReport()` | Generates a unit report following a template, and outputs either HTML, Word or pdf format |
| `getUnitSummary()` | Summarises scores and ranks of a selected unit, at specified aggregation levels |
### 2\.3\.3 Adjustment and comparison
| Function | Description |
| --- | --- |
| `compTable()` | Comparison table of selected indicator/aggregate between two COINs |
| `compTableMulti()` | Comparison table of selected indicator/aggregate between multiple COINs |
| `indChange()` | Short cut to add or remove indicators |
### 2\.3\.4 Analysis
The following functions analyse indicator data.
| Function | Description |
| --- | --- |
| `getCorr()` | Get correlation matrices between various groups and levels |
| `getCronbach()` | Get Cronbach’s alpha for specified group/level |
| `getPCA()` | Principal component analysis on a specified data set and subset of indicators. Also returns PCA weights. |
| `effectiveWeight()` | Calculates the effective weights of each element in the indicator hierarchy. |
| `getStats()` | Get table of indicator statistics for any data set |
| `hicorrSP()` | Identify highly correlated indicators in the same aggregation group |
| `outrankMatrix()` | Outranking matrix based on a data frame of indicator data and corresponding weights |
| `removeElements()` | Reports the impact of sequentially removing indicators or aggregates |
| `sensitivity()` | Perform a Monte Carlo uncertainty analysis and/or a global sensitivity analysis on a COIN |
| `weightOpt()` | Weight optimisation according to a pre\-specified vector of “importances” |
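For example, indicator statistics and Cronbach's alpha can be obtained as sketched below, again using the ASEM COIN from the construction sketch. Apart from `dset` and `out2`, which are part of the common syntax, treat the argument names as indicative.

```
# indicator statistics and internal consistency for the raw data set
stats_table <- getStats(ASEM, dset = "Raw", out2 = "df")
alpha <- getCronbach(ASEM, dset = "Raw")
```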
### 2\.3\.5 Interactive apps
These are interactive apps, built using Shiny, which allow fast interactive exploration and adjustments.
| Function | Description |
| --- | --- |
| `indDash()` | Indicator visualisation (distribution) dashboard for one or two indicators |
| `rew8r()` | Interactively re\-weight indicators and check updated results and correlations |
| `resultsDash()` | Interactive dashboard for visualising and exploring results |
### 2\.3\.6 Import/export
Functions to import and export data and results to and from COINr and R.
| Function | Description |
| --- | --- |
| `COINToolIn()` | Import indicator data and metadata from COIN Tool |
| `coin2Excel()` | Write data, analysis and results from a COIN to Excel |
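A typical export call might look like the sketch below; the name of the file-path argument is an assumption here, so check `?coin2Excel` for the exact signature.

```
# write data sets, analysis tables and results from the ASEM COIN to an Excel workbook
coin2Excel(ASEM, fname = "ASEM_results.xlsx")
```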
### 2\.3\.7 Other functions
These are other functions that may be of interest, but do not neatly fit in the previous categories. Many of them are called from the other functions listed above, but may still be useful on their own.
| Function | Description |
| --- | --- |
| `BoxCox()` | Box Cox transformation on a vector of data |
| `build_ASEM()` | Build ASEM (example) composite indicator in one command |
| `coin_win()` | Winsorise one column of data according to skew/kurtosis thresholds |
| `compareDF()` | Detailed comparison of two data frames (e.g. for cross checking external calculations) |
| `copeland()` | Aggregates a data frame into a single column using the Copeland method. |
| `geoMean()` | Weighted geometric mean of a vector |
| `getIn()` | Useful function for subsetting indicator data. See [Helper functions](helper-functions.html#helper-functions). |
| `harMean()` | Weighted harmonic mean of a vector |
| `is.coin()` | Check if object is a COIN |
| `loggish()` | Log\-type transformation, of various types, for a vector |
| `names2Codes()` | Given a character vector of long names (probably with spaces), generates short codes. |
| `rankDF()` | Convert a data frame of scores to ranks (non\-numerical columns are ignored) |
| `replaceDF()` | Simultaneously replace multiple values in a data frame. Useful for converting categorical variables to numerical. |
| `roundDF()` | Round down a data frame (i.e. for presentation) |
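Because these helpers work directly on data frames, they can be used outside of a COIN altogether. The toy example below sketches `rankDF()` and `roundDF()`; the `decimals` argument name is an assumption.

```
# toy data frame of scores
df <- data.frame(UnitCode = c("AUT", "BEL", "BGR"),
                 Score = c(55.372, 61.118, 47.906))
rankDF(df)                  # numeric columns converted to ranks
roundDF(df, decimals = 2)   # rounded for presentation
```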
### 2\.3\.8 Data
COINr comes with some example data embedded into the package.
| Function | Description |
| --- | --- |
| `ASEMIndData` | ASEM (example) indicator data as input for `assemble()` |
| `ASEMIndMeta` | ASEM (example) indicator metadata as input for `assemble()` |
| `ASEMAggMeta` | ASEM (example) aggregate metadata as input for `assemble()` |
| `WorldDenoms` | National denomination data (GDP, population, etc) worldwide |
### 2\.3\.9 Finally
There are also a number of functions which are mainly for internal use; these are not listed here.
2\.4 Tips for using COINr
-------------------------
### 2\.4\.1 Use scripts and R Markdown
As with any good data science, anything you do in COINr should be reproducible. This means that, ideally, you should be able to go from importing the data into R all the way up to visualising the results simply by re-running your code. This can be done either with a script (or set of scripts), or, perhaps better, with one or more R Markdown documents, which combine text with code. If you haven’t used R Markdown before, you probably should be using it. It is very similar to the idea of Jupyter notebooks in Python. Find out more about R Markdown in the free and excellent online book [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/). You might also notice that this manual is written in R Markdown, and more specifically using the [bookdown package](https://bookdown.org/yihui/bookdown/).
Ideally, the whole process of building and analysing the composite indicator should be documented with an R Markdown file or files, or scripts. Apart from reproducibility, this makes it very easy to make retrospective changes and regenerate the whole analysis. I have personally found it helpful to have several markdown notebooks \- one for exploring the data and selecting indicators, another for data treatment, another for construction, and so on. At the end of each notebook, the COIN(s) containing all data and analysis are saved to a folder and loaded back in at the start of the next notebook. This avoids having one very long notebook. Of course, this is just one way of doing things and is just a suggestion.
### 2\.4\.2 Missing data
It’s very important to understand the difference between missing data and zeros. A zero means that you know that the value of the indicator is zero, whereas missing data (in R this is denoted as `NA`) means you don’t know what the value is at all.
When aggregating indicators, this difference becomes particularly important. If a data point is missing, it is often excluded from the aggregation, which effectively means reassigning it with the mean (or similar) value of the indicators in the same group. If you impute the data, missing values could be assigned with mean or median indicator values, for example. Clearly, these values will be very different from zeros.
This point is made here, because data may sometimes come with missing values denoted as zeros, or with zeros denoted as missing values. Ensure that these are checked carefully before proceeding with construction.
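The distinction is easy to demonstrate in plain R: a zero contributes to an average, whereas an `NA` either propagates or is silently excluded.

```
x_zero <- c(2, 4, 0)
x_miss <- c(2, 4, NA)
mean(x_zero)                # 2: the zero pulls the mean down
mean(x_miss)                # NA: missing data propagates by default
mean(x_miss, na.rm = TRUE)  # 3: excluding the NA is like replacing it with the group mean
```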
### 2\.4\.3 Plotting
COINr includes a number of plotting functions which return various types of plots. These are all either based on the ggplot2 or plotly packages. Think of COINr plots as “starting points” rather than finished plots. While I have tried to make the plots useful and even visually appealing, much more could be done to improve them and to fit particular uses and contexts.
Luckily, because of the way that plotly and ggplot2 work, plots generated by COINr can easily be modified by assigning them to an object and then modifying the object. Generally this would look like this:
```
# generate a plot, but assign to variable "plt"
plt <- COINr_plot_function(COIN, plot_options)
# alter plt
plt <- plt + alter_plot_function(options)
# view the plot
plt
```
The `alter_plot_function` could be any of the very many layout and geometry functions from ggplot2, for example. This means you can change colours, axis labels, point styles, titles and many other things.
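As a concrete (hedged) example, the snippet below tweaks a static COINr distribution plot with ggplot2 functions; the `plotIndDist()` arguments shown are indicative and should be checked against the function documentation.

```
library(ggplot2)
# generate a distribution plot for one indicator of the ASEM COIN built earlier
plt <- plotIndDist(ASEM, dset = "Raw", icodes = "LPI")
# modify title and theme using standard ggplot2 functions
plt <- plt + labs(title = "Logistics Performance Index") + theme_minimal()
plt
```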
In summary, with COINr you may choose to either:
* Generate plots with COINr functions and be content with them as they are
* Generate plots with COINr functions and tweak them to your tastes using the approach shown above
* Make your own plots
### 2\.4\.4 Panel data and multiple COINs
Currently COINr does not explicitly support panel data, although the `assemble()` function can extract and impute a single year of data from a panel data set. If you wish to work with panel data, I would recommend building a separate COIN for each point in time and storing the COINs in a list. This makes it easy to apply COINr functions to all COINs in one go using purrr or `lapply()` functions. It is also more flexible to use separate COINs because methodology, units and indicators may vary from one year to the next.
The same strategy can be applied to any parallel development of multiple indexes, such as separate indexes for country groups.
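As a sketch of this approach, assuming your indicator data frame has a `Year` column (as the ASEM example data does), a list of COINs can be built with `lapply()`:

```
# build one COIN per year of panel data and store them in a named list
years <- unique(ASEMIndData$Year)
coin_list <- lapply(years, function(yr) {
  assemble(IndData = ASEMIndData, IndMeta = ASEMIndMeta,
           AggMeta = ASEMAggMeta, use_year = yr)
})
names(coin_list) <- years
```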
Chapter 3 COINs: the currency of COINr
======================================
Where possible, COINr functions can be used as standalone functions, typically on data frames of indicator data, for operations such as normalisation, imputation and so on.
The full functionality of COINr is however harnessed by assembling the indicator data, and the structure of the index, into a “COIN”. A COIN is a structured list, which neatly stores the various data sets, parameters, analyses and methodological decisions in one place. There are various reasons for doing this:
1. It simplifies operations because functions know where data and parameters are. Therefore, COINr operations typically follow a syntax of the form `COIN <- COINr_function(COIN, <methodological settings>)`, updating the COIN with new results and data sets generated by the function. This avoids having to repeatedly specify indicator names, structure, etc, as separate arguments. It’s like putting a coin in a vending machine and getting a packet of crisps. Except you also get your coin back[1](#fn1).
2. It keeps things organised and avoids a workspace of dozens of variables.
3. It keeps a record of methodological decisions \- this allows results to be easily regenerated following “what if” experiments, such as removing or changing indicators etc (see [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons)).
The logic of doing this will become clearer as you build your index. If you only want to use the COINr functions on data frames, you can probably skip this chapter.
To build a COIN, COINr has a dedicated function called `assemble()`. This is explained below in more detail.
3\.1 Inside a COIN
------------------
COINs are hierarchical lists, a bit like a folder system. In R they are sometimes called “lists of lists” or “nested lists”. A COIN contains all the data, parameters, methodological choices and results of a composite indicator. It has a structure that looks like this:
Figure 3\.1: Inside a COIN
The COIN is best explored using e.g. `View(ASEM)` in R Studio. It consists of six main sub\-lists which are briefly described as follows (the dot in e.g. `.$Input` in the following refers to a generic COIN):
* `.$Input` is all input data and metadata that was initially used to construct the COIN.
* `.$Data` consists of data frames of indicator data at each construction step, e.g. `.$Data$Raw` is the raw data used to construct the COIN, `.$Data$Normalised` is the same data set after normalisation, and so on.
* `.$Parameters` contains parameters such as sets of weights, details on the number of indicators, aggregation levels, etc.
* `.$Method` is a record of the methodology applied to build the COIN. In practice, this records all the inputs to any COINr functions applied in building the COIN, for example the normalisation method and parameters, data treatment specifications, and so on. Apart from keeping a record of the construction, this also allows the entire results to be regenerated using the information stored in the COIN. This will be explained better in [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons).
* `.$Analysis` contains any analysis tables which are sorted by the data sets to which they are applied. This can be e.g. indicator statistics of any of the data sets found in `.$Data`, or correlations.
* `.$Results` contains summary tables of results, arranged and sorted for easy viewing.
When you first build the COIN, using the three input data frames explained in the next section, the COIN will only consist of the Input, Data, Parameters and possibly Method “folders” (sub\-lists). As you construct the composite indicator, more things will be added to the COIN, for example extra data sets (denominated, normalised, aggregated), analysis (missing data, PCA, etc), and results (summary results tables).
Many analysis functions in COINr have an argument called `out2`, which specifies whether the result should be stored inside the COIN or output as a separate list or data frame.
It is worth exploring the contents of the COIN as it is built to understand where things are kept. A couple of things to keep in mind are:
1. One good reason to store things inside the COIN is that if you plan to export the results to Excel at any point, you can use the `coin2Excel()` function to export (almost) everything inside the COIN in one command. This includes all data sets, analysis tables and results tables.
2. A COIN is a list that has been labelled as a “COIN” using R’s `class()` function, and has no hard restrictions on its structure. That means you can edit it as you would with any other list. The implications of this are:
1. You can also store other things in the COIN if you want, and add folders and lists to it, but,
2. You have to be careful editing it, because COINr functions expect things to be in certain places inside the COIN. So if you delete something, for example, some COINr functions may not work.
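A quick way to get a feel for this structure is to build the example COIN and inspect its top-level “folders”; exactly which sub-lists and data sets are present depends on which construction steps have been run.

```
library(COINr6)
ASEM <- build_ASEM()        # build the example index in one command
names(ASEM)                 # top-level sub-lists: Input, Data, Parameters, Method, ...
names(ASEM$Data)            # data sets created at each construction step
head(ASEM$Data$Raw[1:5])    # first columns of the raw indicator data
```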
3\.2 The three inputs
---------------------
To build a COIN, three ingredients (data frames) are needed to begin with, and are used as inputs to the `assemble()` function, which outputs a COIN[2](#fn2). In short, they are the indicator data, the indicator metadata, and the aggregation metadata. Here, each will be explained separately. Inputting the data in the first place takes a little effort, because you have to get your data in a format that COINr understands, and you are required to specify many things. But once you have got your data assembled, COINr should do most of the hard work, so hang in there!
How you actually create these data frames is up to you. If your data is coming from Excel, it may be easiest to assemble these data frames (tables) manually in Excel and then read directly into R. Alternatively, you may wish to create them from scratch in R (an advantage here could be that you can source data directly from APIs, and have a fully reproducible data pipeline), or you may have yet a different approach.
### 3\.2\.1 Indicator data
The indicator data is a data frame which, shockingly, specifies the data for the indicators. However, it can do more than that, and also some rules have to be followed. The easiest way to explain is to start with an example.
```
library(COINr6)
head(ASEMIndData[1:8])
## # A tibble: 6 x 8
## UnitName UnitCode Group_GDP Group_GDPpc Group_Pop Group_EurAsia Year Den_Area
## <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <dbl>
## 1 Austria AUT L XL M Europe 2018 83871
## 2 Belgium BEL L L L Europe 2018 30528
## 3 Bulgaria BGR S S M Europe 2018 110879
## 4 Croatia HRV S M S Europe 2018 56594
## 5 Cyprus CYP S L S Europe 2018 9251
## 6 Czech R~ CZE M L M Europe 2018 78867
head(ASEMIndData[9:16])
## # A tibble: 6 x 8
## Den_Energy Den_GDP Den_Pop LPI Flights Ship Bord Elec
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 27 391. 8735. 4.10 29.0 0 35 35.4
## 2 41.8 468. 11429. 4.11 31.9 20.6 48 26.5
## 3 9.96 53.2 7085. 2.81 9.24 7.92 18 11.3
## 4 7.01 51.2 4189. 3.16 9.25 12.4 41 19.5
## 5 1.43 20.0 1180. 3.00 8.75 11.7 0 0.439
## 6 25.5 195. 10618. 3.67 15.3 0 35 46.8
```
COINr comes prepackaged with some example data \- here we are looking at indicator data from the [ASEM Sustainable Connectivity Portal](https://composite-indicators.jrc.ec.europa.eu/asem-sustainable-connectivity/), which has an indicator data set covering 51 Asian and European countries.
The first thing to notice is that each row is an observation (here, a country), and each column is a variable (mostly indicators, but also other things). Look at the structure of the data frame, working from left to right:
* `UnitName` \[**required**] gives the name of each unit. Here, units are countries, so these are the names of each country.
* `UnitCode` \[**required**] is a unique code assigned to each unit (country). This is very important, and is the main “reference” inside COINr for units. If your units are countries, I recommend using [ISO Alpha\-3 codes](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3), because these are recognised by COINr for generating maps. It also makes data processing generally easier. For converting names to codes and vice versa, the [countrycode package](https://github.com/vincentarelbundock/countrycode) is extremely handy.
* `Group_*` \[optional] Any column name that starts with `Group_` is recognised as a group column rather than an indicator. You don’t have to have any groups, but some COINr functions support specific operations on groups (e.g. imputation within group, some charts also recognise groups). You can have as many group columns as you want.
* `Year` \[optional] gives the reference year of the data. This allows you to have multiple years of data, for example, you can have a value for a given country for 2018, and another value for the same country for 2019, and so on. Like groups, this feature is not yet fully developed in COINr, but the `assemble()` function has the possibility to select years of data and impute by latest year (see below).
* `Den_*`\[optional] Any column names that begin with `Den_*` are recognised as *denominators*, i.e. indicators that are used to scale other indicators. These are used in COINr’s `denominate()` function.
* Finally, any column that begins with `x_` will be ignored and passed through. This is not shown in the data set above, but is useful for e.g. alternative codes or other variables that you want to retain.
*Any remaining columns that do not begin with `x_` or use the other names in this list are recognised as indicators.*
You will notice that all column (variable/indicator) names use short codes. This is to keep things concise in tables, rough plots etc. Indicator codes should be short, but informative enough that you know which indicator it refers to (e.g. “Ind1” is not so helpful). In some COINr plots, codes are displayed, so you might want to take that into account. In any case, the full names of indicators, and other details, are also specified in the indicator metadata table \- see the next section.
Some important rules and tips to keep in mind are:
* The following columns are *required*. All other columns are optional (they can be excluded):
+ UnitCode
+ UnitName (if you don’t have separate names you can just replicate the unit codes)
+ At least one indicator column
* Columns don’t have to be in any particular order, columns are identified by names rather than positions.
* You can have as many indicators and units as you like.
* Indicator codes and unit codes must have unique names. You can’t use the same code twice otherwise bad things will happen.
* Avoid any accented characters or basically any characters outside of English \- this can sometimes cause trouble with encoding.
* Column names are case\-sensitive. Most things in COINr are built to have a degree of flexibility where possible, but *column names need to be written exactly as they appear here for COINr to recognise them*.
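To make the format concrete, here is a hypothetical minimal indicator data frame (the unit codes are real ISO codes, but the indicator codes and values are invented for illustration):

```
# a toy IndData: the two required columns, one group column, and two indicator columns
myIndData <- data.frame(
  UnitCode     = c("AUT", "BEL", "BGR"),
  UnitName     = c("Austria", "Belgium", "Bulgaria"),
  Group_Region = c("West", "West", "East"),
  Trade        = c(391, 468, 53.2),
  Flights      = c(29.0, 31.9, 9.24)
)
```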
### 3\.2\.2 Indicator metadata
The second data frame you need to input specifies the *metadata* of each indicator. This serves two purposes: first, to give details about each indicator, such as its name, its units and so on; and second, to specify the structure of the index. Here’s what this looks like, for our example ASEM data set:
```
head(ASEMIndMeta)
## # A tibble: 6 x 10
## IndName IndCode Direction IndWeight Denominator IndUnit Target Agg1 Agg2
## <chr> <chr> <dbl> <dbl> <chr> <chr> <dbl> <chr> <chr>
## 1 Logistics~ LPI 1 1 <NA> Score 1~ 4.12 Phys~ Conn
## 2 Internati~ Flights 1 1 Den_Pop Thousan~ 200. Phys~ Conn
## 3 Liner Shi~ Ship 1 1 <NA> Score 20.1 Phys~ Conn
## 4 Border cr~ Bord 1 1 Den_Area Number ~ 116. Phys~ Conn
## 5 Trade in ~ Elec 1 1 Den_Energy TWh 105. Phys~ Conn
## 6 Trade in ~ Gas 1 1 Den_Energy Billion~ 90.1 Phys~ Conn
## # ... with 1 more variable: Agg3 <chr>
```
Notice that here each row is an indicator (as opposed to `IndData` where columns represented indicators), and the columns specify attributes of indicators. This is to keep the data in a “tidy” format. Working through the columns one by one:
* `IndName` \[**required**] This is the full name of the indicator, which will be used in display plots.
* `IndCode`\[**required**] A reference code for each indicator. These *must match the indicator column names in the indicator data*. The codes must also be unique.
* `Direction` \[**required**] The “direction” of each indicator \- this takes values of either 1 or \-1 for each indicator. A value of 1 means that higher values of the indicator correspond to higher values of the index, whereas \-1 means the opposite.
* `IndWeight` \[**required**] The initial weights assigned to each indicator. Weights are relative within each aggregation group and do not need to sum to one (they will be re\-scaled to sum to one within each group by COINr).
* `Denominator` \[**optional**] These should be the indicator codes of one of the denominator variables, for each indicator to be denominated. E.g. here “Den\_Pop” specifies that the indicator should be denominated by the “Den\_Pop” column (population, in this case). For any indicators that do not need denominating, just set `NA`. Denominators can also be specified later, so if you want you can leave this column out. See [Denomination](appendix-building-a-composite-indicator-example.html#denomination-1) for more information.
* `IndUnit` \[**optional**] The units of the indicator. This helps for keeping track of what the numbers actually mean, and can be used in plots.
* `Target` \[**optional**] Targets associated with each indicator. Here, artificial targets have been generated which are 95% of the maximum score (accounting for the direction of the indicator). These are only used if the normalisation method is distance\-to\-target.
* `Agg*` \[**required**] Any column name that begins with `Agg` is recognised as a column specifying the aggregation group, and therefore the structure of the index. Aggregation columns *should be in the order of the aggregation*, but otherwise can have arbitrary names.
Let’s look at the aggregation columns in a bit more detail. Each column represents a separate aggregation level, so in the ASEM example here we have three aggregation levels \- the pillars, the two sub\-indexes, and the overall index. The entry of each column specifies which group each indicator falls in. So, the first column `Agg1` specifies the pillar of each indicator. Again, each aggregation group (pillar, sub\-index or index) is referenced by a unique code.
The next column `Agg2`, gives the sub\-index that the indicator belongs to. There is a bit of redundancy here, because obviously indicators in the same pillar must also belong to the same sub\-index. Finally, `Agg3` specifies that all indicators belong to the index.
You can have as many `Agg` columns as you like, and the names don’t have to be `Agg1` etc, but could be e.g. `Agg_Pillar`, `Agg_SubIndex`, etc. However, they *must* begin with `Agg`, otherwise COINr will not recognise them. And they *must* appear in the order of the aggregation, i.e. lowest level of aggregation first, then working upwards. This is in fact the only aspect of any of the input data frames that is dependent on order. It is not necessary even to aggregate to a single index – COINr will aggregate as far as the structure specified here.
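Continuing the toy data frame sketched in the previous section, a matching indicator metadata frame with one pillar level and an index level could look like the following (only the required columns are included; the names and weights are invented):

```
# toy IndMeta: rows are indicators, Agg columns define pillar and index membership
myIndMeta <- data.frame(
  IndName    = c("Trade in goods", "International flights"),
  IndCode    = c("Trade", "Flights"),
  Direction  = c(1, 1),
  IndWeight  = c(1, 1),
  Agg_Pillar = c("P1", "P1"),
  Agg_Index  = c("Index", "Index")
)
```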
### 3\.2\.3 Aggregation metadata
The final data input is the aggregation metadata, which is also the simplest. The aim is to specify the names of each aggregation group (so far only codes have been specified), and the weights of each aggregate for building higher levels. Here’s our example for the ASEM data set:
```
head(ASEMAggMeta)
## # A tibble: 6 x 4
## AgLevel Code Name Weight
## <dbl> <chr> <chr> <dbl>
## 1 2 Physical Physical 1
## 2 2 ConEcFin Economic and Financial (Con) 1
## 3 2 Political Political 1
## 4 2 Instit Institutional 1
## 5 2 P2P People to People 1
## 6 2 Environ Environmental 1
```
This data frame simply consists of four columns, all of which are required for assembly:
* `AgLevel` \[**required**] The aggregation level (where 1 is indicator level, 2 is the first aggregation level, and so on – see Section [**??**](#Sec:ContructingCIs))
* `Code` \[**required**] The aggregation group codes. These codes must match the codes in the corresponding column in the indicator metadata aggregation columns.
* `Name` \[**required**] The aggregation group names.
* `Weight` \[**required**] The aggregation group weights. These weights can be changed later on if needed.
The codes specified here must be unique and not coincide with any other codes used for defining units or indicators. All columns are required, but do not need to be in a particular order.
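For the toy structure sketched above (one pillar plus the index), the aggregation metadata would then be a small data frame like this:

```
# toy AggMeta: one row per aggregate, ordered by aggregation level
myAggMeta <- data.frame(
  AgLevel = c(2, 3),
  Code    = c("P1", "Index"),
  Name    = c("Pillar 1", "Toy Index"),
  Weight  = c(1, 1)
)
```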
3\.3 Putting everything together
--------------------------------
Having got all your data in the correct format, you can finally build it into a COIN. From here, things start to get a bit easier. The function to build the COIN is called `assemble()`. This function takes the three data frames mentioned and converts them into a so\-called *COIN*, which is a hierarchical list that is structured in a way that is recognised by all COINr functions. Let’s run `assemble()` using the built\-in data sets to show how it works.
```
ASEM <- assemble(IndData = ASEMIndData, IndMeta = ASEMIndMeta, AggMeta = ASEMAggMeta)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
```
The three inputs here are as follows:
* `IndData` which is the indicator data
* `IndMeta` which is the indicator metadata
* `AggMeta` which is the aggregation metadata
And this outputs a “COIN” which you can name as you want (here we have called it “ASEM”). There are some further arguments to `assemble()` which are not used here, and they are:
* `include` \- this is a character vector of indicator codes (i.e. codes that are found in `IndData$IndCode`) which specifies which indicators to include out of the `IndData` and `IndMeta` inputs. By default, all indicators are included, but this gives the option to only include certain indicators, if for example, you have several alternative data sources.
* `exclude` \- this is an analogous character vector of indicator codes, but specifying which indicators to *exclude*, if any. Again, by default, nothing is excluded. This may be easier to specify when creating a subset of indicators.
* `preagg` \- by default `assemble()` expects raw data to be input, which is then used to build the composite indicator. However, sometimes you may simply be analysing an already\-built composite indicator. If `preagg = TRUE`, the `IndData` argument of `assemble()` can be a data frame of pre\-aggregated data, i.e. columns for each indicator *plus* columns for each aggregate. In this case, you still need to supply `IndMeta` and `AggMeta`. An example of this is in [Appendix: Analysing a Composite Indicator Example](appendix-analysing-a-composite-indicator-example.html#appendix-analysing-a-composite-indicator-example).
* `use_year` \- you can optionally supply `assemble()` with panel data, where there is a separate observation of each indicator, for each year (or other time interval). To do this, include a “Year” column in `IndData`. Currently, `assemble()` will only build a COIN for a single year of data, but `use_year` allows you to easily select this. So setting e.g. `use_year = 2018` will filter `IndData` to only include values from 2018 (if it exists).
* `impute_latest` \- if panel data is supplied, you can optionally impute to the latest available observation by setting `impute_latest = TRUE`. This means that if a unit has a missing value for a given indicator in the year specified by `use_year`, COINr will look backwards for previous observations of this data point and replace it with the latest available value, or leave it as `NA` if none is available.
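To illustrate a couple of these options, the sketch below excludes two indicators and selects the 2018 data from the panel; the indicator codes refer to the ASEM example data.

```
# assemble a variant of the ASEM COIN, dropping two indicators and selecting one year
ASEM_sub <- assemble(IndData = ASEMIndData, IndMeta = ASEMIndMeta, AggMeta = ASEMAggMeta,
                     exclude = c("LPI", "Flights"), use_year = 2018)
```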
The structure of the resulting COIN is explained in more detail in [Inside a COIN](coins-the-currency-of-coinr.html#inside-a-coin). The logic of the COIN is that COINr functions can take all the data and parameters that they need from it, and you only have to specify some particular parameters of the function. Then, outputs, such as new data sets, are returned back to the COIN, which is added to and expanded as the analysis progresses.
Apart from building the COIN, `assemble()` does a few other things:
* It checks that indicator codes are consistent between indicator data and indicator metadata
* It checks that required columns, such as indicator codes, are present
* It returns some basic information about the data that was input, such as the number of indicators, the number of units, the number of aggregation levels and the groups in each level. This is done so you can check what you have entered, and that it agrees with your expectations.
In general, the job of `assemble()` is to construct a *valid* COIN which should not throw any errors later on. In other words, if you succeed in assembling your COIN, things should work fairly well from there on. Wherever possible `assemble()` tries to return helpful error messages to help you fix any problems with your inputs.
We can view a summary of the contents of the COIN at any time with the `print()` method for COINs. This can be called simply by naming the object on the command line:
```
ASEM
## --------------
## A COIN with...
## --------------
## Input:
## Units: 51 (AUT, BEL, BGR, ...)
## Indicators: 49 (Goods, Services, FDI, ...)
## Denominators: 4 (Den_Area, Den_Energy, Den_GDP, ...)
## Groups: 4 (Group_GDP, Group_GDPpc, Group_Pop, ...)
##
## Structure:
## Level 1: 49 indicators (Goods, Services, FDI, ...)
## Level 2: 8 groups (ConEcFin, Instit, P2P, ...)
## Level 3: 2 groups (Conn, Sust)
## Level 4: 1 groups (Index)
##
## Data sets:
## Raw (51 units)
```
3\.4 Moving on
--------------
Now that you have a COIN, you can start using all the other functions in COINr to their full potential. Most functions will still work on standalone data frames, so you don’t *need* to work with COINs if you prefer not to.
3\.1 Inside a COIN
------------------
COINs are hierarchical lists, a bit like a folder system. In R they are sometimes called “lists of lists” or “nested lists”. A COIN contains all the data, parameters, methodological choices and results of a composite indicator. It has a structure that looks like this:
Figure 3\.1: Inside a COIN
The COIN is best explored using e.g. `View(ASEM)` in R Studio. It consists of six main sub\-lists which are briefly described as follows (the dot in e.g. `.$Input` in the following refers to a generic COIN):
* `.$Input` is all input data and metadata that was initially used to construct the COIN.
* `.$Data` consists of data frames of indicator data at each construction step, e.g. `.$Data$Raw` is the raw data used to construct the COIN, `.$Data$Normalised` is the same data set after normalisation, and so on.
* `.$Parameters` contains parameters such as sets of weights, details on the number of indicators, aggregation levels, etc.
* `.$Method` is a record of the methodology applied to build the COIN. In practice, this records all the inputs to any COINr functions applied in building the COIN, for example the normalisation method and parameters, data treatment specifications, and so on. Apart from keeping a record of the construction, this also allows the entire results to be regenerated using the information stored in the COIN. This will be explained better in [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons).
* `.$Analysis` contains any analysis tables which are sorted by the data sets to which they are applied. This can be e.g. indicator statistics of any of the data sets found in `.$Data`, or correlations.
* `.$Results` contains summary tables of results, arranged and sorted for easy viewing.
When you first build the COIN, using the three input data frames explained in the next section, the COIN will only consist of the Input, Data, Parameters and possibly Method “folders” (sub\-lists). As you construct the composite indicator, more things will be added to the COIN, for example extra data sets (denominated, normalised, aggregated), analysis (missing data, PCA, etc), and results (summary results tables).
Many analysis functions in COINr have an argument called `out2`, which specifies whether the result should be stored inside the COIN or output as a separate list or data frame.
It is worth exploring the contents of the COIN as it is built to understand where things are kept. A couple of things to keep in mind are:
1. One good reason to store things inside the COIN is that if you plan to export the results to Excel at any point, you can use the `coin2Excel()` function to export (almost) everything inside the COIN in one command. This includes all data sets, analysis tables and results tables.
2. A COIN is a list that has been labelled as a “COIN” using R’s `class()` function, and has no hard restrictions on its structure. That means you can edit it as you would with any other list. The implications of this are:
1. You can also store other things in the COIN if you want, and add folders and lists to it, but,
2. You have to be careful editing it, because COINr functions expect things to be in certain places inside the COIN. So if you delete something, for example, some COINr functions may not work.
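Since a COIN is just a (classed) nested list, it can be explored with ordinary R tools. Here is a minimal sketch, assuming a COIN called `ASEM` has already been assembled as shown later in this chapter:
```
# top-level "folders" of the COIN
names(ASEM)
# overview of the nesting, two levels deep
str(ASEM, max.level = 2)
# data frames inside .$Data are accessed like any list element
head(ASEM$Data$Raw)
```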
3\.2 The three inputs
---------------------
To build a COIN, three ingredients (data frames) are needed to begin with, and are used as inputs to the `assemble()` function, which outputs a COIN[2](#fn2). In short, they are the indicator data, the indicator metadata, and the aggregation metadata. Here, each will be explained separately. Inputting the data in the first place takes a little effort, because you have to get your data in a format that COINr understands, and you are required to specify many things. But once you have got your data assembled, COINr should do most of the hard work, so hang in there!
How you actually create these data frames is up to you. If your data is coming from Excel, it may be easiest to assemble these data frames (tables) manually in Excel and then read directly into R. Alternatively, you may wish to create them from scratch in R (an advantage here could be that you can source data directly from APIs, and have a fully reproducible data pipeline), or you may have yet a different approach.
### 3\.2\.1 Indicator data
The indicator data is a data frame which, shockingly, specifies the data for the indicators. However, it can do more than that, and also some rules have to be followed. The easiest way to explain is to start with an example.
```
library(COINr6)
head(ASEMIndData[1:8])
## # A tibble: 6 x 8
## UnitName UnitCode Group_GDP Group_GDPpc Group_Pop Group_EurAsia Year Den_Area
## <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <dbl>
## 1 Austria AUT L XL M Europe 2018 83871
## 2 Belgium BEL L L L Europe 2018 30528
## 3 Bulgaria BGR S S M Europe 2018 110879
## 4 Croatia HRV S M S Europe 2018 56594
## 5 Cyprus CYP S L S Europe 2018 9251
## 6 Czech R~ CZE M L M Europe 2018 78867
head(ASEMIndData[9:16])
## # A tibble: 6 x 8
## Den_Energy Den_GDP Den_Pop LPI Flights Ship Bord Elec
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 27 391. 8735. 4.10 29.0 0 35 35.4
## 2 41.8 468. 11429. 4.11 31.9 20.6 48 26.5
## 3 9.96 53.2 7085. 2.81 9.24 7.92 18 11.3
## 4 7.01 51.2 4189. 3.16 9.25 12.4 41 19.5
## 5 1.43 20.0 1180. 3.00 8.75 11.7 0 0.439
## 6 25.5 195. 10618. 3.67 15.3 0 35 46.8
```
COINr comes prepackaged with some example data \- here we are looking at indicator data from the [ASEM Sustainable Connectivity Portal](https://composite-indicators.jrc.ec.europa.eu/asem-sustainable-connectivity/), which has an indicator data set covering 51 Asian and European countries.
The first thing to notice is that each row is an observation (here, a country), and each column is a variable (mostly indicators, but also other things). Look at the structure of the data frame, working from left to right:
* `UnitName` \[**required**] gives the name of each unit. Here, units are countries, so these are the names of each country.
* `UnitCode` \[**required**] is a unique code assigned to each unit (country). This is very important, and is the main “reference” inside COINr for units. If your units are countries, I recommend using [ISO Alpha\-3 codes](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3), because these are recognised by COINr for generating maps. It also makes data processing generally easier. For converting names to codes and vice versa, the [countrycode package](https://github.com/vincentarelbundock/countrycode) is extremely handy.
* `Group_*` \[optional] Any column name that starts with `Group_` is recognised as a group column rather than an indicator. You don’t have to have any groups, but some COINr functions support specific operations on groups (e.g. imputation within group, some charts also recognise groups). You can have as many group columns as you want.
* `Year` \[optional] gives the reference year of the data. This allows you to have multiple years of data, for example, you can have a value for a given country for 2018, and another value for the same country for 2019, and so on. Like groups, this feature is not yet fully developed in COINr, but the `assemble()` function has the possibility to select years of data and impute by latest year (see below).
* `Den_*`\[optional] Any column names that begin with `Den_*` are recognised as *denominators*, i.e. indicators that are used to scale other indicators. These are used in COINr’s `denominate()` function.
* Finally, any column that begins with `x_` will be ignored and passed through. This is not shown in the data set above, but is useful for e.g. alternative codes or other variables that you want to retain.
*Any remaining columns that do not begin with `x_` or use the other names in this list are recognised as indicators.*
You will notice that all column (variable/indicator) names use short codes. This is to keep things concise in tables, rough plots etc. Indicator codes should be short, but informative enough that you know which indicator they refer to (e.g. “Ind1” is not so helpful). In some COINr plots, codes are displayed, so you might want to take that into account. In any case, the full names of indicators, and other details, are also specified in the indicator metadata table \- see the next section.
Some important rules and tips to keep in mind are:
* The following columns are *required*. All other columns are optional (they can be excluded):
+ UnitCode
+ UnitName (if you don’t have separate names you can just replicate the unit codes)
+ At least one indicator column
* Columns don’t have to be in any particular order; columns are identified by names rather than positions.
* You can have as many indicators and units as you like.
* Indicator codes and unit codes must be unique. You can’t use the same code twice, otherwise bad things will happen.
* Avoid accented characters or basically any characters outside of English \- this can sometimes cause trouble with encoding.
* Column names are case\-sensitive. Most things in COINr are built to have a degree of flexibility where possible, but *column names need to be written exactly as they appear here for COINr to recognise them*.
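To make the format concrete, here is a minimal sketch of an indicator data frame built directly in R. The unit group, indicator codes and values are invented purely for illustration and are not part of any COINr data set:
```
# a minimal hypothetical indicator data frame: two units, one group, two indicators
myIndData <- data.frame(
  UnitName = c("Austria", "Belgium"),
  UnitCode = c("AUT", "BEL"),
  Group_Example = c("A", "B"), # optional group column (made-up)
  Ind1 = c(4.1, 3.8),          # indicator columns (made-up codes and values)
  Ind2 = c(29, 32)
)
```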
### 3\.2\.2 Indicator metadata
The second data frame you need to input specifies the *metadata* of each indicator. This serves two purposes: first, to give details about each indicator, such as its name, its units and so on; and second, to specify the structure of the index. Here’s what this looks like, for our example ASEM data set:
```
head(ASEMIndMeta)
## # A tibble: 6 x 10
## IndName IndCode Direction IndWeight Denominator IndUnit Target Agg1 Agg2
## <chr> <chr> <dbl> <dbl> <chr> <chr> <dbl> <chr> <chr>
## 1 Logistics~ LPI 1 1 <NA> Score 1~ 4.12 Phys~ Conn
## 2 Internati~ Flights 1 1 Den_Pop Thousan~ 200. Phys~ Conn
## 3 Liner Shi~ Ship 1 1 <NA> Score 20.1 Phys~ Conn
## 4 Border cr~ Bord 1 1 Den_Area Number ~ 116. Phys~ Conn
## 5 Trade in ~ Elec 1 1 Den_Energy TWh 105. Phys~ Conn
## 6 Trade in ~ Gas 1 1 Den_Energy Billion~ 90.1 Phys~ Conn
## # ... with 1 more variable: Agg3 <chr>
```
Notice that here each row is an indicator (as opposed to `IndData` where columns represented indicators), and the columns specify attributes of indicators. This is to keep the data in a “tidy” format. Working through the columns one by one:
* `IndName` \[**required**] This is the full name of the indicator, which will be used in display plots.
* `IndCode` \[**required**] A reference code for each indicator. These *must be the same codes as used for the indicator columns in the indicator data* (`IndData`). The codes must also be unique.
* `Direction` \[**required**] The “direction” of each indicator \- this takes values of either 1 or \-1 for each indicator. A value of 1 means that higher values of the indicator correspond to higher values of the index, whereas \-1 means the opposite.
* `IndWeight` \[**required**] The initial weights assigned to each indicator. Weights are relative within each aggregation group and do not need to sum to one (they will be re\-scaled to sum to one within each group by COINr).
* `Denominator` \[**optional**] These should be the codes of the denominator variables to use for each indicator that is to be denominated. E.g. here “Den\_Pop” specifies that the indicator should be denominated by the “Den\_Pop” denominator (population, in this case). For any indicators that do not need denominating, just set `NA`. Denominators can also be specified later, so if you want you can leave this column out. See [Denomination](appendix-building-a-composite-indicator-example.html#denomination-1) for more information.
* `IndUnit` \[**optional**] The units of the indicator. This helps for keeping track of what the numbers actually mean, and can be used in plots.
* `Target` \[**optional**] Targets associated with each indicator. Here, artificial targets have been generated which are 95% of the maximum score (accounting for the direction of the indicator). These are only used if the normalisation method is distance\-to\-target.
* `Agg*` \[**required**] Any column name that begins with `Agg` is recognised as a column specifying the aggregation group, and therefore the structure of the index. Aggregation columns *should be in the order of the aggregation*, but otherwise can have arbitrary names.
Let’s look at the aggregation columns in a bit more detail. Each column represents a separate aggregation level, so in the ASEM example here we have three aggregation levels \- the pillars, the two sub\-indexes, and the overall index. The entry of each column specifies which group each indicator falls in. So, the first column `Agg1` specifies the pillar of each indicator. Again, each aggregation group (pillar, sub\-index or index) is referenced by a unique code.
The next column, `Agg2`, gives the sub\-index that the indicator belongs to. There is a bit of redundancy here, because obviously indicators in the same pillar must also belong to the same sub\-index. Finally, `Agg3` specifies that all indicators belong to the index.
You can have as many `Agg` columns as you like, and the names don’t have to be `Agg1` etc, but could be e.g. `Agg_Pillar`, `Agg_SubIndex`, etc. However, they *must* begin with `Agg`, otherwise COINr will not recognise them. And they *must* appear in the order of the aggregation, i.e. lowest level of aggregation first, then working upwards. This is in fact the only aspect of any of the input data frames that is dependent on order. It is not necessary even to aggregate to a single index – COINr will aggregate as far as the structure specified here.
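Continuing the hypothetical example from the previous section, a minimal indicator metadata table with a single aggregation level (everything feeding straight into one “Index” group) might look like the sketch below; all values are illustrative:
```
# metadata for the two made-up indicators, aggregating directly to a single "Index" group
myIndMeta <- data.frame(
  IndName   = c("First indicator", "Second indicator"),
  IndCode   = c("Ind1", "Ind2"), # must match the indicator column names in myIndData
  Direction = c(1, 1),
  IndWeight = c(1, 1),
  Agg1      = c("Index", "Index")
)
```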
### 3\.2\.3 Aggregation metadata
The final data input is the aggregation metadata, which is also the simplest. The aim is to specify the names of each aggregation group (so far only codes have been specified), and the weights of each aggregate for building higher levels. Here’s our example for the ASEM data set:
```
head(ASEMAggMeta)
## # A tibble: 6 x 4
## AgLevel Code Name Weight
## <dbl> <chr> <chr> <dbl>
## 1 2 Physical Physical 1
## 2 2 ConEcFin Economic and Financial (Con) 1
## 3 2 Political Political 1
## 4 2 Instit Institutional 1
## 5 2 P2P People to People 1
## 6 2 Environ Environmental 1
```
This data frame simply consists of four columns, all of which are required for assembly:
* `AgLevel` \[**required**] The aggregation level (where 1 is the indicator level, 2 is the first aggregation level, and so on)
* `Code` \[**required**] The aggregation group codes. These codes must match the codes used in the aggregation (`Agg*`) columns of the indicator metadata.
* `Name` \[**required**] The aggregation group names.
* `Weight` \[**required**] The aggregation group weights. These weights can be changed later on if needed.
The codes specified here must be unique and not coincide with any other codes used for defining units or indicators. All columns are required, but do not need to be in a particular order.
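To round off the hypothetical example from the previous sections, the matching aggregation metadata for a single “Index” group is just a one\-row data frame:
```
# one aggregation group ("Index") at level 2, directly above the indicators
myAggMeta <- data.frame(
  AgLevel = 2,
  Code    = "Index", # must match the codes used in the Agg columns of myIndMeta
  Name    = "Overall index",
  Weight  = 1
)
```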
3\.3 Putting everything together
--------------------------------
Having got all your data in the correct format, you can finally build it into a COIN. From here, things start to get a bit easier. The function to build the COIN is called `assemble()`. This function takes the three data frames mentioned and converts them into a so\-called *COIN*, which is a hierarchical list that is structured in a way that is recognised by all COINr functions. Let’s run `assemble()` using the built\-in data sets to show how it works.
```
ASEM <- assemble(IndData = ASEMIndData, IndMeta = ASEMIndMeta, AggMeta = ASEMAggMeta)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
```
The three inputs here are as follows:
* `IndData` which is the indicator data
* `IndMeta` which is the indicator metadata
* `AggMeta` which is the aggregation metadata
And this outputs a “COIN” which you can name as you want (here we have called it “ASEM”). There are some further arguments to `assemble()` which are not used here, and they are:
* `include` \- this is a character vector of indicator codes (i.e. codes that appear in `IndMeta$IndCode`, equivalently the indicator column names of `IndData`) which specifies which indicators to include out of the `IndData` and `IndMeta` inputs. By default, all indicators are included, but this gives the option to only include certain indicators if, for example, you have several alternative data sources.
* `exclude` \- this is an analogous character vector of indicator codes, but specifying which indicators to *exclude*, if any. Again, by default, nothing is excluded. This may be easier to specify when creating a subset of indicators.
* `preagg` \- by default `assemble()` expects raw data to be input, which is then used to build the composite indicator. However, sometimes you may simply be analysing an already\-built composite indicator. If `preagg = TRUE`, the `IndData` argument of `assemble()` can be a data frame of pre\-aggregated data, i.e. columns for each indicator *plus* columns for each aggregate. In this case, you still need to supply `IndMeta` and `AggMeta`. An example of this is in [Appendix: Analysing a Composite Indicator Example](appendix-analysing-a-composite-indicator-example.html#appendix-analysing-a-composite-indicator-example).
* `use_year` \- you can optionally supply `assemble()` with panel data, where there is a separate observation of each indicator, for each year (or other time interval). To do this, include a “Year” column in `IndData`. Currently, `assemble()` will only build a COIN for a single year of data, but `use_year` allows you to easily select this. So setting e.g. `use_year = 2018` will filter `IndData` to only include values from 2018 (if it exists).
* `impute_latest` \- if panel data is supplied, you can optionally impute to the latest available observation by setting `impute_latest = TRUE`. This means that if a unit has a missing value for a given indicator in the year specified by `use_year`, COINr will look backwards for previous observations of this data point and replace it with the latest available value, or `NA` if none is available. A short sketch illustrating these optional arguments follows this list.
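As a rough sketch, the optional arguments can be combined as follows. The excluded indicator codes are an arbitrary choice for illustration, and since the built\-in ASEM data only contains 2018 values, `impute_latest` has nothing to do here \- the call simply shows the intended pattern:
```
ASEM2 <- assemble(IndData = ASEMIndData, IndMeta = ASEMIndMeta, AggMeta = ASEMAggMeta,
                  exclude = c("LPI", "Flights"), # drop two indicators by code
                  use_year = 2018,               # select one year of the panel data
                  impute_latest = TRUE)          # back-fill missing points from earlier years
```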
The structure of the resulting COIN is explained in more detail in [Inside a COIN](coins-the-currency-of-coinr.html#inside-a-coin). The logic of the COIN is that COINr functions can take all the data and parameters that they need from it, and you only have to specify some particular parameters of the function. Then, outputs, such as new data sets, are returned back to the COIN, which is added to and expanded as the analysis progresses.
Apart from building the COIN, `assemble()` does a few other things:
* It checks that indicator codes are consistent between indicator data and indicator metadata
* It checks that required columns, such as indicator codes, are present
* It returns some basic information about the data that was input, such as the number of indicators, the number of units, the number of aggregation levels and the groups in each level. This is done so you can check what you have entered, and that it agrees with your expectations.
In general, the job of `assemble()` is to construct a *valid* COIN which should not throw any errors later on. In other words, if you succeed in assembling your COIN, things should work fairly well from there on. Wherever possible `assemble()` tries to return helpful error messages to help you fix any problems with your inputs.
We can view a summary of the contents of the COIN at any time with the `print()` method for COINs. This can be called simply by naming the object on the command line:
```
ASEM
## --------------
## A COIN with...
## --------------
## Input:
## Units: 51 (AUT, BEL, BGR, ...)
## Indicators: 49 (Goods, Services, FDI, ...)
## Denominators: 4 (Den_Area, Den_Energy, Den_GDP, ...)
## Groups: 4 (Group_GDP, Group_GDPpc, Group_Pop, ...)
##
## Structure:
## Level 1: 49 indicators (Goods, Services, FDI, ...)
## Level 2: 8 groups (ConEcFin, Instit, P2P, ...)
## Level 3: 2 groups (Conn, Sust)
## Level 4: 1 groups (Index)
##
## Data sets:
## Raw (51 units)
```
3\.4 Moving on
--------------
Now that you have a COIN, you can start using all the other functions in COINr to their full potential. Most functions will still work on standalone data frames, so you don’t *need* to work with COINs if you prefer not to.
| Social Science |
bluefoxr.github.io | https://bluefoxr.github.io/COINrDoc/initial-visualisation-and-analysis.html |
Chapter 4 Initial visualisation and analysis
============================================
One of the first things to do with indicator data is to look at it, in as many ways as possible. This helps to get a feel for how the data is distributed between units/countries, how it may be spatially distributed, and how indicators relate to one another. COINr includes various tools for visualising and analysing indicator data and the index structure.
The types of plots generated by COINr fall into two categories: static plots and interactive plots. Static plots generate images in standard formats such as png, pdf and so on. Interactive plots generate JavaScript graphics, which have interactive elements such as zooming and panning, and show information when you hover the mouse over them. This latter type of plot is particularly useful for including in HTML documents, because the plots are self\-contained. For example, they can be used in R Markdown documents, then knitted to HTML, or embedded on websites (such as this one \- see below), e.g. via the blogdown or bookdown packages. JavaScript plots can also be rendered to png and other formats, so they can also be used in static documents.
The plotting tools here can be useful at any stage of building a composite indicator or scoreboard, from initial visualisation of the data, to checking the effects of data treatment, to visualising final index results.
4\.1 Structure
--------------
Independently from the indicator data, a good way to begin is to check the structure of the index. This can be done visually with the `plotframework()` function, which generates a sunburst plot of the index structure.
```
library(COINr6)
# assemble ASEM COIN
ASEM <- assemble(ASEMIndData, ASEMIndMeta, ASEMAggMeta) |>
suppressMessages()
# plot framework
plotframework(ASEM)
```
The sunburst plot is useful for a few things. First, it shows the structure that COINr has understood. If you get an error here, it is probably an indication that something has gone wrong in the input of the structure, so go back to the input data and check. If it does successfully display the sunburst plot, you can check whether the structure agrees with your expectations.
Second, it shows the effective weight of each indicator (the value is visible by hovering over each segment). Effective weights are the final weights of each indicator in the index, as a result of the indicator weights, the parent aggregate weights, and the structure of the index. This can reveal which indicators are implicitly weighted more than others, by e.g. having more or fewer indicators in the same aggregation groups.
Finally, it can be a good way to communicate your index structure to other people.
4\.2 Distributions
------------------
Individual indicator distributions can be visualised in several different ways. For static plots, the main tool is `plotIndDist()` which generates histograms, boxplots, violin plots, dot plots and violin\-dot plots. This is powered by `ggplot2`, and if you want to customise plots, you should use that directly. However, COINr plotting functions are intended as quick tools to visualise data, with easy access to the hierarchical data set. You can plot individual indicators:
```
plotIndDist(ASEM, type = "Histogram", icodes = "LPI")
```
And you can also plot groups of indicators by calling aggregate names (notice that when multiple indicators are plotted, the indicator codes are used to label each plot, rather than indicator names, to save space):
```
plotIndDist(ASEM, type = "Violindot", icodes = "Physical")
```
The `plotIndDist()` function has several arguments. In the first place, any indicator or aggregation (pillar, dimension, index etc) can be plotted by using the `dset` argument. If you have only just assembled the COIN, you will only have the “Raw” dataset, but any other dataset can be accessed, e.g. treated data set, aggregated data set, and so on. You can also target different levels using the `aglev` argument \- for more details see the chapter on [Helper functions](helper-functions.html#helper-functions).
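For example, assuming the COIN has been aggregated (so that an “Aggregated” data set exists in `.$Data`), the pillar scores inside the Connectivity sub\-index could be plotted with something like:
```
# box plots of the level-2 (pillar) scores belonging to the "Conn" group
plotIndDist(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 2, type = "Box")
```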
Stand\-alone data frames are also supported by `plotIndDist()` (this can also be achieved directly by `ggplot` without too much effort):
```
df <- as.data.frame(matrix(rnorm(90),nrow = 30, ncol = 3))
colnames(df) <- c("Dogs", "Cats", "Rabbits")
plotIndDist(df, type = "Box")
```
COINr also includes some interactive plots, which are embedded into apps (see later), but can be used for your own purposes, such as embedding in HTML documents or websites.
```
iplotIndDist(ASEM, "Raw", "Renew", ptype = "Violin")
```
Since all the plotting functions output plot objects (plotly objects for `iplotIndDist`, and ggplot2 plot objects for `plotIndDist`), you can also modify them if you want to customise the plots. This might be a helpful workflow \- to use COINr’s default options and then tweak the plot to your liking. In a very simple example, here we just change the title.
```
iplotIndDist(ASEM, "Raw", "Flights", ptype = "Histogram") |>
plotly::layout(title = "Customised plot")
```
If you are purely interested in exploring the data, rather than presenting it to someone else, the plots here are also embedded into a Shiny app which lets you quickly explore and compare indicator distributions \- see [Data Treatment](appendix-building-a-composite-indicator-example.html#data-treatment-1) for more details on this.
4\.3 Ranks and Maps
-------------------
While the previous functions concerned plotting the statistical distributions of each indicator, functions are also available for plotting the indicator values in order or on a map.
```
iplotBar(ASEM, dset = "Raw", isel = "Embs", usel = "SGP", aglev = 1)
```
Here, a single indicator is plotted in order as a bar chart. One or more units can optionally be highlighted using the `usel` argument. This function also allows you to select groups of units from the “Group\_\*” columns of `IndData`.
```
# plot only units from Asia group
iplotBar(ASEM, dset = "Raw", isel = "Goods", aglev = 1,
from_group = list(Group_EurAsia = "Asia"))
```
From a different perspective, we can plot the same data on a map:
```
iplotMap(ASEM, dset = "Raw", isel = "Embs")
```
Note that this only works if `IndData$UnitCode` corresponds to ISO alpha\-3 country codes. If you want to do more sophisticated mapping in R, Plotly has [many mapping options](https://plotly.com/r/maps/), but R in general has a wealth of mapping packages \- you just have to search for them. COINr uses Plotly maps to keep things simple and to avoid depending on too many packages.
Another useful plotting function is `plotIndDot()`, which simply plots a single indicator using dots, with options to highlight particular units.
```
plotIndDot(ASEM, dset = "Raw", icode = "CO2",
usel = c("GBR", "ESP", "AUS"), add_stat = "median")
```
This also gives the option to add a statistic, either mean or median, or to pass your own value to mark. This is similar to a dot plot using e.g. `plotIndDist()`, but is more suited to showing where a particular unit falls in an ordering.
COINr has yet more tools to plot data, but let’s leave it at that for the moment. Other tools will be introduced in other chapters.
4\.4 Statistics and analysis
----------------------------
Aside from plots, COINr gives a fairly detailed statistical analysis of initial indicator data. The function `getStats()` returns a series of statistics which can be aimed at any of the data sets in the `.$Data` folder. You can also specify if you want the output to be returned back to the COIN, or to a separate list.
```
# get stats
ASEM <- getStats(ASEM, dset = "Raw", out2 = "COIN")
## Number of collinear indicators = 3
## Number of signficant negative indicator correlations = 322
## Number of indicators with high denominator correlations = 7
# display in table using Reactable
# (note the use of helper function roundDF() to round the values to a sensible number of decimals)
ASEM$Analysis$Raw$StatTable |>
roundDF() |>
reactable::reactable(resizable = TRUE, bordered = TRUE,
highlight = TRUE, defaultPageSize = 10)
```
The columns of this table give all kinds of information, from the max, min and standard deviation, to the presence of outliers and the amount of missing data.
Apart from the overall statistics for each indicator, `getStats` also returns a few other things:
* `.$Outliers`, which flags individual outlying points using the relation to the interquartile range
* `.$Correlations`, which gives a correlation matrix between all indicators in the data set
* `.$DenomCorrelations`, which gives the correlations between indicators and any denominators
Each of these aspects will be explained in more detail in [Multivariate analysis](multivariate-analysis.html#multivariate-analysis), so for the moment it is enough to mention that they exist.
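Since `out2 = "COIN"` was used above, these extra outputs are stored in the COIN’s Analysis folder. As a sketch \- assuming they sit alongside the stats table and follow the same naming pattern as `.$Analysis$Raw$StatTable` \- they can be pulled out like this:
```
# correlation matrix and flagged outliers from the analysis of the raw data set
corrs <- ASEM$Analysis$Raw$Correlations
outls <- ASEM$Analysis$Raw$Outliers
corrs[1:4, 1:4]
```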
| Social Science |
bluefoxr.github.io | https://bluefoxr.github.io/COINrDoc/multivariate-analysis.html |
Chapter 5 Multivariate analysis
===============================
Correlations, and other relationships between indicators, can help to understand the structure of the data and to see whether indicators are redundant or are mistakenly encoded.
5\.1 Correlations
-----------------
Correlations are a measure of *statistical dependence* between one variable and another. Often, what is meant by “correlation”, is the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient), which is a measure of *linear* dependence between two variables.
It’s worth spending a moment to consider what the “linear” part actually implies. Consider the following example:
```
# numbers in [-1, 1]
x1 <- seq(-1, 1, 0.1)
# x2 is a function of x1
x2 <- x1^2
# plot
library(ggplot2)
qplot(x1, x2)
```
Now, clearly there is a very strong relationship between x1 and x2\. In fact, x2 is completely dependent on x1 (there are no other variables or noise that control its values). So what happens if we check the correlation?
```
cor(x1, x2, method = "pearson")
## [1] 1.216307e-16
```
We end up with a correlation of zero. This is because although there is a very strong *nonlinear* relationship between \\(x\_1\\) and \\(x\_2\\), the *linear* relationship is zero. This is made even clearer when we fit a linear regression:
```
qplot(x1, x2) +
geom_smooth(method=lm, # Add linear regression line
se=FALSE) # Don't add shaded confidence region
## `geom_smooth()` using formula 'y ~ x'
```
Obviously, this is a very contrived example, and it is unlikely you will see a relationship like this in real indicator data. However the point is that sometimes, linear measures of dependence don’t tell the whole story.
That said, relationships between indicators can often turn out to be fairly linear. Exceptions arise when indicators are highly skewed, for example. In these cases, a good alternative is to turn to *rank correlation* measures. Two well\-known such measures are the [Spearman rank correlation](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient) and the [Kendall rank correlation](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient). Rank correlations can handle nonlinearity as long as the relationship is monotonic. You could also argue that since the focus of composite indicators is often on ranks, rank correlation also makes sense on a conceptual level.
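To see the limits of rank correlation on the contrived example from above (where the relationship is non\-monotonic), we can compute the Spearman coefficient, which is also close to zero \- rank correlations only help when the nonlinear relationship is monotonic:
```
# Spearman rank correlation of the x1/x2 example above: still ~0,
# because x2 = x1^2 is not a monotonic function of x1 on [-1, 1]
cor(x1, x2, method = "spearman")
```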
COINr offers a few ways to check correlations that are more convenient than `stats::cor()`. The first is to call the `getStats()` function that was already mentioned in [Initial visualisation and analysis](initial-visualisation-and-analysis.html#initial-visualisation-and-analysis). This can be pointed at any data set and indicator subset and will give correlation between all indicators present. For example, taking the raw data of our example data set:
```
library(COINr6)
# shortcut to build ASEM COIN
ASEM <- build_ASEM()
# get statistics of raw data set
statlist <- getStats(ASEM, dset = "Raw", out2 = "list")
# see a bit of correlation matrix
statlist$Correlations[1:5, 1:5]
## IndCode Goods Services FDI PRemit
## 1 Goods NA 0.8886729 0.8236043 0.8580885
## 2 Services 0.8886729 NA 0.8567035 0.8322828
## 3 FDI 0.8236043 0.8567035 NA 0.8009064
## 4 PRemit 0.8580885 0.8322828 0.8009064 NA
## 5 ForPort 0.4750653 0.6955028 0.4366541 0.4837038
# see a bit of correlations with denominators
statlist$DenomCorrelations[, 1:5]
## Denominator Goods Services FDI PRemit
## 1 Den_Area 0.2473558 0.2071061 0.3668625 0.2429470
## 2 Den_Energy 0.6276219 0.5916882 0.7405170 0.5312195
## 3 Den_GDP 0.8026770 0.7708846 0.8482453 0.7111943
## 4 Den_Pop 0.4158811 0.4577006 0.6529376 0.4379999
```
The type of correlation can be changed by the `cortype` argument, which can be set to “pearson” or “kendall”.
The `getStats()` function also summarises some of this information in its output in the `.$StatTable` data frame:
```
statlist$StatTable[ c("Indicator", "Collinearity", "Neg.Correls", "Denom.correlation")] |>
head(10)
## # A tibble: 10 x 4
## Indicator Collinearity Neg.Correls Denom.correlation
## <chr> <chr> <dbl> <chr>
## 1 Goods Collinear 1 High
## 2 Services OK 1 High
## 3 FDI OK 1 High
## 4 PRemit OK 0 High
## 5 ForPort OK 6 OK
## 6 CostImpEx OK 16 OK
## 7 Tariff OK 15 OK
## 8 TBTs OK 7 OK
## 9 TIRcon OK 5 OK
## 10 RTAs OK 9 OK
```
This flags any indicators that have collinearity with any other indicators, or are significantly negatively correlated, or have correlations with denominators. These flags are activated based on thresholds that are inputs to the `getStats()` function (`t_colin` and `t_denom`).
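For instance, the thresholds could be tightened or relaxed as in the following sketch \- the numerical values are illustrative choices, not the package defaults:
```
# flag collinearity above 0.9 and denominator correlations above 0.7 (illustrative values)
ASEM <- getStats(ASEM, dset = "Raw", t_colin = 0.9, t_denom = 0.7, out2 = "COIN")
```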
Correlations can also be plotted using the `plotCorr()` function. This is a flexible correlation plotting function, powered by ggplot2, which is adapted to the composite indicator context. It has a number of options because there are many ways to plot correlations in a composite indicator, including within an aggregation level, between levels, selecting certain groups, focusing on parents, and so on.
To see this, here are a few examples. First, let’s plot correlations between raw indicators.
```
plotCorr(ASEM, dset = "Raw", aglevs = 1, showvals = F)
```
Here, we have plotted raw indicators against indicators. A few things to note:
1. By default, if a level is plotted against itself, it only plots correlations within the groups of the next aggregation level above. Use the `grouplev` argument to control the grouping or disable.
2. Insignificant correlations (at 5% level) are coloured grey by default. Use the `pval` argument to change the significance threshold or to disable.
We can repeat the same plot with some variations. First, grouping by sub\-index:
```
plotCorr(ASEM, dset = "Raw", aglevs = 1, showvals = F, grouplev = 3)
```
This gives more information, but the first level of aggregation is not easily visible. To improve it, we can also draw boxes around aggregation groups at any level.
```
plotCorr(ASEM, dset = "Raw", aglevs = 1, showvals = F, grouplev = 3, box_level = 2)
```
Between `grouplev` and `box_level`, it is usually possible to find a plot which communicates the “important” correlations. But why focus on correlations within aggregation groups? A good reason is that when we aggregate indicators, they should ideally be moderately correlated to avoid losing too much information. This is a long discussion which is outside of the remit of this book, but if you’re interested you could look at [this paper](https://doi.org/10.1016/j.envsoft.2021.105208).
Of course, we can also plot the whole correlation matrix, and also include insignificant correlations:
```
plotCorr(ASEM, dset = "Raw", aglevs = 1, showvals = F, grouplev = 0, pval = 0)
```
It may often be more interesting to plot correlations between aggregation levels. Since there is a bit more space, we will also enable the values of the correlations using `showvals = TRUE`.
```
plotCorr(ASEM, dset = "Aggregated", aglevs = c(1,2), showvals = T)
```
Note that again, by default the correlations are grouped with the parent level above. This is disabled by setting `withparent = "none"`. To see correlations with all parent groups at once, we can set `withparent = "family"`. Here, we will also call a subset of the indicators, in this case only the indicators in the Sustainability sub\-index.
```
plotCorr(ASEM, dset = "Aggregated", aglevs = c(1,2), icodes = "Sust",
showvals = T, withparent = "family")
```
COINr draws rectangles around aggregation groups. The labels on the x\-axis here correspond to the column names used in `IndMeta`.
It is also possible to switch to a discrete colour map to highlight negative, weak and collinear values.
```
plotCorr(ASEM, dset = "Aggregated", aglevs = c(2,3), showvals = T, withparent = "family",
flagcolours = TRUE)
```
The thresholds for these colours can be controlled by the `flagthresh` argument. This type of plot helps to see at a glance where there might be a problematic indicator. There are also a number of arguments for controlling the colours of the discrete colour map, insignificant values, the box lines, and so on.
Finally, it might be useful to simply have this information as a table rather than a plot. Setting `out2` to “dflong” or “dfwide” outputs a data frame in long or wide format, instead of a figure. This can be helpful for presenting the information in a report or doing custom formatting.
```
plotCorr(ASEM, dset = "Aggregated", aglevs = c(2,3), showvals = T, withparent = "family",
flagcolours = TRUE, out2 = "dfwide")
## # A tibble: 8 x 3
## Indicator Agg2 Agg3
## <chr> <dbl> <dbl>
## 1 ConEcFin 0.53 0.42
## 2 Instit 0.86 0.81
## 3 P2P 0.86 0.78
## 4 Physical 0.87 0.83
## 5 Political 0.64 0.68
## 6 Environ 0.48 NA
## 7 Social 0.74 0.9
## 8 SusEcFin NA NA
```
Arguably it’s not so intuitive for a function called `plotCorr()` to output tables, but that’s how it is for now.
If you like your correlation heatmaps interactive, COINr has another function called `iplotCorr()`. It functions in a similar way `plotCorr()`, but is designed in particular as a component of the `rew8r()` app and can be used in HTML documents. For a basic correlation map:
```
iplotCorr(ASEM, aglevs = c(2, 3))
```
Like `plotCorr()`, indicators are grouped by rectangles around the groups. This might be better for small plots but can be a bit confusing when there are many elements plotted at the same time. The `iplotCorr()` function can also show a continuous colour scale, and values can be removed.
```
iplotCorr(ASEM, aglevs = c(1,2), showvals = F, flagcolours = F)
```
Like `plotCorr()`, threshold values can be controlled for the discrete colour map. The function is powered by *plotly*, which means it can be further altered by using Plotly commands. It’s also worth pointing out that any ggplot plot (so all the static plots in COINr) can be made interactive by the `plotly::ggplotly()` command.
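As a quick sketch of that last point, any of the static ggplot2\-based COINr plots can be passed to `plotly::ggplotly()` to get an interactive version:
```
# make a static COINr plot interactive
p <- plotIndDist(ASEM, type = "Histogram", icodes = "LPI")
plotly::ggplotly(p)
```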
An additional tool that can be used in checking correlations is [Cronbach’s alpha](https://en.wikipedia.org/wiki/Cronbach%27s_alpha) \- this is a measure of internal consistency or “reliability” of the data, based on the strength of correlations between indicators. COINr has a simple in\-built function for calculating this, which can be directed at any data set and subset of indicators as with other functions. To see the consistency of the entire raw data set:
```
getCronbach(ASEM, dset = "Raw")
## [1] 0.2186314
```
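As a rough illustration of what this number represents, Cronbach’s alpha can be calculated directly from a matrix of indicator data using the standard formula. The sketch below is purely illustrative and may not match the exact treatment of missing values in `getCronbach()`:

```
# illustrative calculation of Cronbach's alpha
# X is a numeric matrix or data frame (rows = units, columns = indicators)
cronbach_alpha <- function(X) {
  X <- stats::na.omit(X)        # crude handling of missing values
  k <- ncol(X)                  # number of indicators ("items")
  item_vars <- apply(X, 2, var) # variance of each indicator
  total_var <- var(rowSums(X))  # variance of the summed score
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}
```

Applied to the indicators of a single aggregation group, this should give values in the same ballpark as the `getCronbach()` calls below.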
Often, it may be more relevant to look at consistency within aggregation groups. For example, the indicators within the “Physical” pillar of the ASEM data set:
```
getCronbach(ASEM, dset = "Raw", icodes = "Physical", aglev = 1)
## [1] 0.6198804
```
This shows that the consistency is much stronger within a sub\-group. This can also apply to the aggregated data:
```
getCronbach(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 2)
## [1] 0.7827171
```
This shows that the consistency of the sub\-pillars within the Connectivity sub\-index is quite high.
5\.2 PCA
--------
Another facet of multivariate analysis is principal component analysis (PCA). PCA is used in at least two main ways in composite indicators, first to understand the structure of the data and check for latent variables (the subject of this section); and second as a weighting approach (see the chapter on [Weighting](weighting-1.html#weighting-2)).
PCA often seems to occupy an unusual niche where it’s relatively well known, but not always so well\-understood. I’m not going to attempt a full explanation of PCA here, because there are many nice explanations available in books and online. However, here is a very rough guide.
The first thing to know is that PCA is based on correlations between indicators, i.e. *linear* relationships (see caveats of this above). PCA tries to find the linear combination of your indicators that explains the most variance. It’s a bit easier to understand this visually. Let’s consider two variables which are strongly correlated:
```
library(ggplot2)
# random numbers
x1 <- rnorm(50, 10, 1)
# x2 is 0.5*x1 + gaussian noise
x2 <- 0.5*x1 + rnorm(50, 0, 0.2)
#plot
qplot(x1, x2)
```
At the moment, each point/observation lives in a two dimensional space, i.e. it is defined by both its \\(x\_1\\) and \\(x\_2\\) values. But in a way, this is inefficient because actually the data more or less lives on a straight line.
What PCA does is to rotate the axes (or rotate the data, depending on which way you look at it), so that the first axis aligns itself with the “shape” or “spread” of the data, and the remaining axes are orthogonal to that. We can demonstrate this using R’s built in PCA function.
```
# perform PCA
PCAres <- stats::prcomp(cbind(x1, x2), scale = F, center = F)
# get rotated data
xr <- as.data.frame(PCAres$x)
# plot on principal components (set y axis to similar scale to orig data)
ggplot(xr, aes(x = -PC1, y = PC2)) +
geom_point() +
ylim(-(max(x2) - min(x2))/2, (max(x2) - min(x2))/2)
```
If you look carefully, you can see that the data corresponds to the previous plot, but just rotated slightly, so that the direction of greatest spread (i.e. greatest variance) is aligned with the first principal component. Here, I have set the y axis to a similar scale to the previous plot to make it easier to see.
There are many reasons for doing PCA, but they often correspond to two main categories: exploratory analysis and dimension reduction. In the context of composite indicators, both of these goals are relevant.
In terms of exploratory analysis, PCA provides somewhat similar information to correlation analysis and Cronbach’s alpha. For any given set of indicators, or aggregates, we can check to see how much variance is explained by the first (few) principal component(s). If the data can be explained by a few PCs, then it suggests that there is a strong latent variable, i.e. the data is shaped something like the figure above, such that if we rotate it, we can find an axis (linear combination) which explains the data pretty well. This is quite similar to the idea of “consistency” in Cronbach’s alpha.
R has built in functions for PCA (as shown above), and there is no need to reinvent the wheel here. But COINr has a function `getPCA()` which makes it easier to interact with COINs, and follows the familiar COINr syntax. It serves both to give general PCA information, and to provide PCA weights for aggregation.
```
PCAres <- getPCA(ASEM, dset = "Aggregated", aglev = 2, out2 = "list")
```
The `getPCA()` function allows you to obtain PCA weights for any aggregation level and group(s) of indicators, but we should keep in mind a few things. First, PCA weights are created *inside groups*. So, calling `getPCA()` with `icodes = NULL` performs a separate PCA for each aggregation group inside the specified level. In the example above, where `aglev = 2`, there are two aggregation groups \- the pillars belonging to the Connectivity sub\-index, and the pillars belonging to the Sustainability sub\-index. This means that two PCAs were performed \- one for the first group, and one for the second. In other words, by default `getPCA()` follows the structure of the index.
This is more evident when we examine the output:
```
str(PCAres, max.level = 2)
## List of 2
## $ Weights :'data.frame': 60 obs. of 3 variables:
## ..$ AgLevel: num [1:60] 1 1 1 1 1 1 1 1 1 1 ...
## ..$ Code : chr [1:60] "Goods" "Services" "FDI" "PRemit" ...
## ..$ Weight : num [1:60] 1 1 1 1 1 1 1 1 1 1 ...
## $ PCAresults:List of 2
## ..$ Conn:List of 3
## ..$ Sust:List of 3
```
The output contains `.$Weights`, which is a full data frame of weights with the corresponding PCA weights substituted in, and `.$PCAresults`, which has one entry for each aggregation group in the level specified. So here, since we targeted `aglev = 2`, it has done a PCA for each of the two groups: Connectivity and Sustainability.
If you want to perform a PCA over all indicators/aggregates in a specified level, for example all indicators (ignoring any aggregation groups), set `by_groups = FALSE`.
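For example, a call along these lines (a sketch using the arguments described above) would run a single PCA across all the raw indicators at once:

```
# one PCA over all indicators in level 1, ignoring the aggregation structure
PCAall <- getPCA(ASEM, dset = "Raw", aglev = 1, by_groups = FALSE, out2 = "list")
```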
R has built in tools for inspecting PCA. For example, we can see a summary of the results. Each of the entries in the `.$PCAresults` folder is a PCA object generated by `stats::prcomp()`, which can be fed into functions like `summary()`:
```
summary(PCAres$PCAresults$Conn$PCAres)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5
## Standard deviation 1.7156 1.1598 0.58897 0.46337 0.38718
## Proportion of Variance 0.5887 0.2690 0.06938 0.04294 0.02998
## Cumulative Proportion 0.5887 0.8577 0.92708 0.97002 1.00000
```
This tells us that using the five pillars inside the Connectivity sub\-index, we can explain 59% of the variance with the first principal component. This could be interpreted as a “latent variable”. In terms of composite indicators, we generally prefer to be able to explain a significant proportion of the variance with the first principal component. This is also useful information because if we now apply the weights stored in `.$Weights`, this would result in explaining 59% of the variance of the underlying indicators. In other words, we are passing on a decent amount of information to the sub\-index.
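To see which weights would actually be passed on to aggregation, we can inspect the `.$Weights` data frame directly, e.g.:

```
# view the PCA-derived weights at the targeted level (level 2)
PCAres$Weights[PCAres$Weights$AgLevel == 2, ]
```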
Another way to look at PCA results is using a biplot. Again there is no need to build anything in COINr for this. You can always use base R or in this case the **ggbiplot** package (needs to be installed from Github):
```
# install ggbiplot if you don't have it
# library(devtools)
# install_github("vqv/ggbiplot")
library(ggbiplot)
## Loading required package: plyr
## Loading required package: scales
## Loading required package: grid
ggbiplot(PCAres$PCAresults$Conn$PCAres)
```
We can make this more interesting by pulling in some information from the COIN, such as the names of units, and grouping them using one of the grouping variables we have present with the data.
```
ggbiplot(PCAres$PCAresults$Conn$PCAres,
labels = ASEM$Data$Aggregated$UnitCode,
groups = ASEM$Data$Aggregated$Group_EurAsia)
```
Now this looks quite interesting: we see that the distinction between Asian and European countries is quite strong. We can also see the clustering of countries \- many European countries are clustered in the upper right, except some small countries like Luxembourg, Malta and Singapore which form their own group.
We will not go any further into these results, but we will wrap up by recalling that PCA is used in two ways here. First is *exploratory analysis*, which aims to understand the structure of the data and can help to pick out groups. Second is *dimensionality reduction* \- this might be a bit less obvious, but composite indicators are really an exercise in dimensionality reduction. We begin with a large number of variables and try to condense them down into one. PCA can help to find weights that satisfy certain properties, such as maximising the variance explained.
Last but not least, consider some caveats. PCA rests on assumptions of linearity and joint\-normality, which are rarely true in real indicator data. Like correlations, however, that doesn’t mean that the results are meaningless, and modest departures from linearity should still allow relevant conclusions to be drawn.
https://bluefoxr.github.io/COINrDoc/missing-data-and-imputation.html
Chapter 6 Missing data and Imputation
=====================================
Imputation is the process of estimating missing data points. This can be done in any number of ways, and as usual, the “best” way depends on the problem.
Of course, you don’t *have* to impute data. You can also simply delete any indicator or unit that has missing values, although in many cases this can be too restrictive. Reasonable results can still be obtained despite small amounts of missing data, although if too much data is missing, the uncertainty can be too high to give a meaningful analysis. As usual, it is a balance.
A good first step is to check how much data is missing, and where (see below). Units with very high amounts of missing data can be screened out. Small amounts of missing data can then be imputed.
6\.1 Concept
------------
The simplest imputation approach is to use values of other units to estimate the missing point. Typically, this could involve the sample mean or median of the indicator. Here’s some data from the ASEM data set regarding the average connection speed of each country. Towards the end there are some missing values, so let’s view the last few rows:
```
library(COINr6)
Ind1 <- data.frame(Country = ASEMIndData$UnitName, ConSpeed = ASEMIndData$ConSpeed)
Ind1[40:nrow(Ind1),]
## Country ConSpeed
## 40 Korea 28.6
## 41 Lao PDR NA
## 42 Malaysia 8.9
## 43 Mongolia NA
## 44 Myanmar NA
## 45 New Zealand 14.7
## 46 Pakistan NA
## 47 Philippines 5.5
## 48 Russian Federation 11.8
## 49 Singapore 20.3
## 50 Thailand 16.0
## 51 Vietnam 9.5
```
Using our simple imputation method, we just replace the `NA` values with the sample mean.
```
Ind1$ConSpeed <- replace(Ind1$ConSpeed, is.na(Ind1$ConSpeed), mean(Ind1$ConSpeed, na.rm = T))
Ind1[40:nrow(Ind1),]
## Country ConSpeed
## 40 Korea 28.60000
## 41 Lao PDR 14.28605
## 42 Malaysia 8.90000
## 43 Mongolia 14.28605
## 44 Myanmar 14.28605
## 45 New Zealand 14.70000
## 46 Pakistan 14.28605
## 47 Philippines 5.50000
## 48 Russian Federation 11.80000
## 49 Singapore 20.30000
## 50 Thailand 16.00000
## 51 Vietnam 9.50000
```
This approach can be reasonable if countries are somehow similar in that indicator. However, other perhaps more informative ways are available:
* Substituting by the mean or median of the country group, e.g. income group or continent (see the sketch after this list)
* If time series data is available, use the latest known data point
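To illustrate the first of these, here is a rough base R sketch of imputing by the group median, using the Europe/Asia grouping (assuming, as elsewhere in this book, that the grouping column is called `Group_EurAsia`):

```
# group-median imputation of ConSpeed, using the Europe/Asia grouping
speed <- ASEMIndData$ConSpeed
group <- ASEMIndData$Group_EurAsia
# the median of each group (ignoring NAs), repeated for each row
group_med <- ave(speed, group, FUN = function(x) median(x, na.rm = TRUE))
# replace missing values with the median of the unit's own group
speed_imputed <- ifelse(is.na(speed), group_med, speed)
```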
Then there is another class of methods which use data from other indicators to estimate the missing point. The core idea here is that if the indicators are related to one another, it is possible to guess the missing point by using known values of other indicators. This can take the form of:
* Simply substituting the mean or median of the *normalised* values of the other indicators
* Substituting the mean or median of *normalised* values of the other indicators *within the aggregation group*
* Using a more formal approach, based on regression or more generally on statistical modelling
Let’s explore the options available in COINr.
6\.2 Data checks and screening
------------------------------
A first step is to check in detail how much data are missing. The function `checkData()` does this, and also has the option to screen units based on data availability.
```
# Assemble ASEM data first
ASEM <- assemble(IndData = COINr6::ASEMIndData, IndMeta = COINr6::ASEMIndMeta,
AggMeta = COINr6::ASEMAggMeta)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
# Missing data check on the raw data set
ASEM <- checkData(ASEM, dset = "Raw")
head(ASEM$Analysis$Raw$MissDatSummary)
## UnitCode N_missing N_zero N_miss_or_zero PrcDataAll PrcNonZero LowDataAll
## 1 AUT 0 2 2 100 95.91837 FALSE
## 2 BEL 0 2 2 100 95.91837 FALSE
## 3 BGR 0 0 0 100 100.00000 FALSE
## 4 HRV 0 1 1 100 97.95918 FALSE
## 5 CYP 0 3 3 100 93.87755 FALSE
## 6 CZE 0 3 3 100 93.87755 FALSE
## ZeroFlag LowDatOrZeroFlag Included
## 1 FALSE FALSE TRUE
## 2 FALSE FALSE TRUE
## 3 FALSE FALSE TRUE
## 4 FALSE FALSE TRUE
## 5 FALSE FALSE TRUE
## 6 FALSE FALSE TRUE
```
The `MissDatSummary` table shows the unit code, the number of missing observations, and the overall percentage data availability. The `LowDataAll` column can be used as a flag for units with data availability lower than a set threshold, `ind_thresh`, which is one of the input arguments to `checkData()`. ASEM indicators were already chosen to fulfil minimum data requirements, which is why data availability is above the default threshold of 2/3 for all units.
We will set the minimum data threshold a bit higher, and in doing so also demonstrate another feature of `checkData()`, which is to automatically screen units based on data availability. Setting `unit_screen = "byNA"` will generate a new data set `.$Data$Screened` which only includes units with data availability above the set threshold. This data set can then be passed on to subsequent operations.
```
# Missing data check on the raw data set
ASEM <- checkData(ASEM, dset = "Raw", ind_thresh = 0.85, unit_screen = "byNA")
ASEM$Analysis$Raw$RemovedUnits
## [1] "BRN" "LAO" "MMR"
```
This generates a new data set `.$Data$Screened` with the removed units recorded in `.$Analysis$Raw$RemovedUnits`, which in this case are Brunei, Laos and Myanmar. It is also possible to screen units based on the number of zero values (`unit_screen = "byzeros"`) or both (`unit_screen = "byNAandzeros"`). Screening by zeros can be useful to exclude units which have zero values for all or most indicators, since these can skew distributions.
As a final option to mention, you can manually include or exclude units. So for example, a unit that doesn’t have sufficient data coverage could be manually included anyway, and another unit could be excluded.
```
# Countries to include and exclude
df <- data.frame(UnitCode = c("AUT","BRN"), Status = c("Exclude", "Include"))
# Missing data check on the raw data set
ASEM <- checkData(ASEM, dset = "Raw", ind_thresh = 0.85, unit_screen = "byNA", Force = df)
ASEM$Analysis$Raw$RemovedUnits
## [1] "AUT" "LAO" "MMR"
```
Now AUT has been excluded, even though it had sufficient data coverage, and BRN has been manually included, even though it didn’t. The intention is to give some flexibility to make exceptions to hard rules.
The other output of interest from `checkData()` is missing data by groups:
```
head(ASEM$Analysis$Raw$MissDatByGroup[30:40,])
## UnitCode ConEcFin Instit P2P Physical Political Environ Social
## 30 GBR 100 100.00000 100 100.0 100 100 100.00000
## 31 AUS 100 100.00000 100 100.0 100 100 100.00000
## 32 BGD 100 83.33333 75 87.5 100 100 88.88889
## 33 BRN 80 100.00000 75 87.5 100 100 55.55556
## 34 KHM 80 100.00000 75 87.5 100 100 77.77778
## 35 CHN 100 100.00000 100 100.0 100 100 100.00000
## SusEcFin Conn Sust Index
## 30 100 100.00000 100.00000 100.00000
## 31 100 100.00000 100.00000 100.00000
## 32 80 86.66667 89.47368 87.75510
## 33 60 86.66667 68.42105 79.59184
## 34 100 86.66667 89.47368 87.75510
## 35 80 100.00000 94.73684 97.95918
```
This gives the percentage indicator data availability inside each aggregation group. It’s worth mentioning here that on aggregation, it is possible to set a data availability threshold for aggregation groups, below which an aggregate score returns `NA`. This is explained more in the chapter on [Aggregation](appendix-building-a-composite-indicator-example.html#aggregation-1).
6\.3 Imputation in COINr
------------------------
Now we turn to estimating missing data points in COINr. The function of interest is `impute()`, which gives a number of options, corresponding to the types of imputation discussed above. Briefly, COINr can impute using:
* The indicator mean, either across all countries or within a specified group
* The indicator median, either across all countries or within a specified group
* The mean or median, within the aggregation group
* The expectation maximisation algorithm, via the [AMELIA](https://cran.r-project.org/web/packages/Amelia/index.html) package
### 6\.3\.1 By column
The `impute()` function follows a similar logic to other COINr functions. We can enter a COIN or a data frame, and get a COIN or a data frame back. Let’s impute on a COIN first, using the raw ASEM data.
```
# Assemble ASEM data if not done already (uncomment the following)
# ASEM <- assemble(IndData = COINr6::ASEMIndData, IndMeta = COINr6::ASEMIndMeta,
# AggMeta = COINr6::ASEMAggMeta)
# check how many NAs we have
sum(is.na(ASEM$Data$Raw))
## [1] 63
# impute using indicator mean
ASEM <- impute(ASEM, dset = "Raw", imtype = "ind_mean")
## Missing data points detected = 63
## Missing data points imputed = 63, using method = ind_mean
# check how many NAs after imputation
sum(is.na(ASEM$Data$Imputed))
## [1] 0
```
If the input is a COIN, by default the output is an updated COIN, with a new data set `.$Data$Imputed`. We can also set `out2 = "df"` to output the imputed data frame on its own.
The above example replaces `NA` values with the mean of the indicator, and setting `imtype = "ind_median"` will instead use the indicator median. In this respect, we are using information from other units, inside the same indicator, to estimate the missing values.
Imputing by group may be more informative if you have meaningful grouping variables. For example, the ASEM data set has groupings by GDP, population, GDP per capita and whether the country is European or Asian. We might hypothesise that it is better to replace `NA` values with the median inside a group, say by GDP group. This would imply that countries with a similar GDP are likely to have similar indicator values.
```
# impute using GDP group median
ASEM <- impute(ASEM, dset = "Raw", imtype = "indgroup_median", groupvar = "Group_GDP")
## Missing data points detected = 63
## Missing data points imputed = 63, using method = indgroup_median
```
### 6\.3\.2 By row
So far, the imputation has been using values from the same column (indicator). Another possibility is to use values from other indicators, i.e. operate row\-wise. COINr offers two basic options in this respect: either to take the mean or median of the other indicators inside the aggregation group, and this can be done by setting `imtype = "agg_mean"` or `imtype = "agg_median"` respectively.
Both of these imputation methods will, for a given unit, take the mean or median of the *normalised* values of the other indicators in the aggregation group in the level above indicator level. The process is as follows, for each `NA` value:
1. Find indicators in the same aggregation group
2. Normalise these indicators using the min\-max method, so they each have a minimum of zero and a maximum of one.
3. Replace the `NA` with the mean or median of the normalised values, for the unit in question.
4. Reverse the min\-max transformation so that the indicators are returned to their original scale.
It’s important to realise that this is equivalent to aggregating with the mean or median without imputing first. This is because in the aggregation step, if we take the mean of a group of indicators, and there is a `NA` present, this value is excluded from the mean calculation. Doing this is mathematically equivalent to assigning the mean to that missing value. Therefore, one reason to use this imputation method is to see which values are being implicitly assigned as a result of excluding missing values from the aggregation step. This is sometimes known as “shadow imputation”.
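Here is a minimal sketch of steps 1 to 4 above for a single aggregation group, using a small made-up data frame (purely illustrative; `impute()` does all of this internally):

```
# made-up indicator values for one aggregation group (rows = units)
x <- data.frame(ind1 = c(1, 2, NA, 4), ind2 = c(10, 30, 20, 40), ind3 = c(5, 6, 7, 8))

# steps 1-2: min-max normalise each indicator to [0, 1]
mins <- sapply(x, min, na.rm = TRUE)
maxs <- sapply(x, max, na.rm = TRUE)
xn <- as.data.frame(mapply(function(col, mn, mx) (col - mn)/(mx - mn), x, mins, maxs))

# step 3: replace each NA with the mean of the unit's normalised values
rmeans <- rowMeans(xn, na.rm = TRUE)
for (j in seq_along(xn)) xn[[j]][is.na(xn[[j]])] <- rmeans[is.na(xn[[j]])]

# step 4: reverse the min-max transformation to return to the original scales
x_imputed <- as.data.frame(mapply(function(col, mn, mx) col*(mx - mn) + mn, xn, mins, maxs))
```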
### 6\.3\.3 By column and row
Finally, we can use a regression\-based *expectation maximisation* algorithm, imported via the `AMELIA` package. This estimates missing values using the values of the other indicators *and* the other units. This effectively involves regressing each indicator on other indicators, and predicting missing values.
A catch of this approach is that if the number of units is small compared to the number of indicators, there will not be enough observations to estimate the parameters of the model. This is definitely the case if you have more indicators than observations. To use the EM algorithm in this case, imputation is performed on aggregation groups. So, to use the EM approach in `COINr`, you have to specify the aggregation level to group indicators by.
```
# impute using EM, grouping indicators inside their pillars (level 2)
ASEM <- impute(ASEM, dset = "Raw", imtype = "EM", EMaglev = 2)
## Missing data points detected = 63
## Missing data points imputed = 63, using method = EM
```
This works, because the ASEM data set has 51 units and no more than eight indicators per pillar. However, if we were to use `EMaglev = 3`, i.e. try imputing indicators grouped into the two sub\-indexes, it does not work.
Finding the right aggregation level to impute at might involve a bit of trial and error, but the advantage of imputing by groups of indicators is that aggregation groups are usually composed of indicators that are similar in some respect. This makes it more likely that they are good predictors of the other indicators in their group.
The `AMELIA` package offers many more options for imputation that are not available through the `impute()` function, such as bootstrapping, time\-series imputation and more. As with plotting packages, `COINr` aims to provide an easy and quick interface to standard imputation in `AMELIA`, but if you want to have full control you should use the `AMELIA` package directly. Other good imputation options are the [`MICE`](https://cran.r-project.org/web/packages/mice/index.html) package (Multivariate Imputation via Chained Equations) and the [`missForest`](https://cran.rstudio.com/web/packages/missForest/index.html) package (using random forests).
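As a sketch of the latter, imputation with MICE on the raw data might look something like this (the column selection here is simplistic and purely illustrative; in practice you would drop unit codes, names and other metadata columns explicitly):

```
library(mice)
# keep only the numeric columns of the raw data set (a crude way to drop metadata)
ind_data <- Filter(is.numeric, ASEM$Data$Raw)
# single imputation using predictive mean matching
imp <- mice(ind_data, m = 1, method = "pmm", printFlag = FALSE)
ind_imputed <- complete(imp)
```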
Finally, if you are interested in imputing data over time, i.e. using the most recent data point available with panel data, go back to the [Putting everything together](coins-the-currency-of-coinr.html#putting-everything-together) section, which explains how `assemble()` can be used to extract a single time point from panel data and impute using the latest year. This is deliberately done at the assembly stage because once constructed, a COIN represents only a single year of data.
6\.1 Concept
------------
The simplest imputation approach is to use values of other units to estimate the missing point. Typically, this could involve the sample mean or median of the indicator. Here’s some data from the ASEM data set regarding the average connection speed of each country. Towards the end there are some missing values, so let’s view the last few rows:
```
library(COINr6)
Ind1 <- data.frame(Country = ASEMIndData$UnitName, ConSpeed = ASEMIndData$ConSpeed)
Ind1[40:nrow(Ind1),]
## Country ConSpeed
## 40 Korea 28.6
## 41 Lao PDR NA
## 42 Malaysia 8.9
## 43 Mongolia NA
## 44 Myanmar NA
## 45 New Zealand 14.7
## 46 Pakistan NA
## 47 Philippines 5.5
## 48 Russian Federation 11.8
## 49 Singapore 20.3
## 50 Thailand 16.0
## 51 Vietnam 9.5
```
Using our simple imputation method, we just replace the `NA` values with the sample mean.
```
Ind1$ConSpeed <- replace(Ind1$ConSpeed, is.na(Ind1$ConSpeed), mean(Ind1$ConSpeed, na.rm = T))
Ind1[40:nrow(Ind1),]
## Country ConSpeed
## 40 Korea 28.60000
## 41 Lao PDR 14.28605
## 42 Malaysia 8.90000
## 43 Mongolia 14.28605
## 44 Myanmar 14.28605
## 45 New Zealand 14.70000
## 46 Pakistan 14.28605
## 47 Philippines 5.50000
## 48 Russian Federation 11.80000
## 49 Singapore 20.30000
## 50 Thailand 16.00000
## 51 Vietnam 9.50000
```
This approach can be reasonable if countries are somehow similar in that indicator. However, other perhaps more informative ways are available:
* Substituting by the mean or median of the country group (e.g. income group or continent)
* If time series data is available, use the latest known data point
Then there is another class of methods which use data from other indicators to estimate the missing point. The core idea here is that if the indicators are related to one another, it is possible to guess the missing point by using known values of other indicators. This can take the form of:
* Simply substituting the mean or median of the *normalised* values of the other indicators
* Subsituting the mean or median of *normalised* values of the other indicators *within the aggregation group*
* Using a more formal approach, based on regression or more generally on statistical modelling
Let’s explore the options available in COINr
6\.2 Data checks and screening
------------------------------
A first step is to check in detail how much data are missing. The function `checkData()` does this, and also has the option to screen units based on data availability.
```
# Assemble ASEM data first
ASEM <- assemble(IndData = COINr6::ASEMIndData, IndMeta = COINr6::ASEMIndMeta,
AggMeta = COINr6::ASEMAggMeta)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
# Missing data check on the raw data set
ASEM <- checkData(ASEM, dset = "Raw")
head(ASEM$Analysis$Raw$MissDatSummary)
## UnitCode N_missing N_zero N_miss_or_zero PrcDataAll PrcNonZero LowDataAll
## 1 AUT 0 2 2 100 95.91837 FALSE
## 2 BEL 0 2 2 100 95.91837 FALSE
## 3 BGR 0 0 0 100 100.00000 FALSE
## 4 HRV 0 1 1 100 97.95918 FALSE
## 5 CYP 0 3 3 100 93.87755 FALSE
## 6 CZE 0 3 3 100 93.87755 FALSE
## ZeroFlag LowDatOrZeroFlag Included
## 1 FALSE FALSE TRUE
## 2 FALSE FALSE TRUE
## 3 FALSE FALSE TRUE
## 4 FALSE FALSE TRUE
## 5 FALSE FALSE TRUE
## 6 FALSE FALSE TRUE
```
The `MissDataSummary` table shows the unit code, number of missing observations, and overall percentage data availability. Finally, the `LowDataAll` column can be used as a flag for units with data availability lower than a set amount `ind_thresh` which is one of the input arguments to `checkData()`. ASEM indicators were already chosen to fulfill minimum data requirements, for which reason the data availability is all above the default of 2/3\.
We will set the minimum data threshold a bit higher, and in doing so also demonstrate another feature of `checkData()`, which is to automatically screen units based on data availability. Setting `unit_screen = "byNA"` will generate a new data set `.$Data$Screened` which only includes units with data availability above the set threshold. This data set can then be passed on to subsequent operations.
```
# Missing data check on the raw data set
ASEM <- checkData(ASEM, dset = "Raw", ind_thresh = 0.85, unit_screen = "byNA")
ASEM$Analysis$Raw$RemovedUnits
## [1] "BRN" "LAO" "MMR"
```
This generates a new data set `.$Data$Screened` with the removed units recorded in `.$Analysis$Raw$RemovedUnits`, which in this case are Brunei, Laos and Myanmar. It is also possible to screen units based on the number of zero values (`unit_screen = "byzeros"`) or both (`unit_screen = "byNAandzeros"`). Screening by zeros can be useful to exclude units which have zero values for all or most indicators, and can skew distributions.
As a final option to mention, you can manually include or exclude units. So for example, a unit that doesn’t have sufficient data coverage could be manually included anyway, and another unit could be excluded.
```
# Countries to include and exclude
df <- data.frame(UnitCode = c("AUT","BRN"), Status = c("Exclude", "Include"))
# Missing data check on the raw data set
ASEM <- checkData(ASEM, dset = "Raw", ind_thresh = 0.85, unit_screen = "byNA", Force = df)
ASEM$Analysis$Raw$RemovedUnits
## [1] "AUT" "LAO" "MMR"
```
Now AUT has been excluded, even though it had sufficient data coverage, and BRN has been manually included, even though it didn’t. The intention is to give some flexibility to make exceptions for hard rules.
The other output of interest from `checkData()` is missing data by groups:
```
head(ASEM$Analysis$Raw$MissDatByGroup[30:40,])
## UnitCode ConEcFin Instit P2P Physical Political Environ Social
## 30 GBR 100 100.00000 100 100.0 100 100 100.00000
## 31 AUS 100 100.00000 100 100.0 100 100 100.00000
## 32 BGD 100 83.33333 75 87.5 100 100 88.88889
## 33 BRN 80 100.00000 75 87.5 100 100 55.55556
## 34 KHM 80 100.00000 75 87.5 100 100 77.77778
## 35 CHN 100 100.00000 100 100.0 100 100 100.00000
## SusEcFin Conn Sust Index
## 30 100 100.00000 100.00000 100.00000
## 31 100 100.00000 100.00000 100.00000
## 32 80 86.66667 89.47368 87.75510
## 33 60 86.66667 68.42105 79.59184
## 34 100 86.66667 89.47368 87.75510
## 35 80 100.00000 94.73684 97.95918
```
This gives the percentage indicator data availability inside each aggregation group. It’s worth mentioning here that on aggregation, it is possible to set a data availability threshold for aggregation groups, below which an aggregate score returns `NA`. This is explained more in the chapter on [Aggregation](appendix-building-a-composite-indicator-example.html#aggregation-1).
6\.3 Imputation in COINr
------------------------
Now we turn to estimating missing data points in COINr. The function of interest is `impute()`, which gives a number of options, corresponding to the types of imputation discussed above. Briefly, COINr can impute using:
* The indicator mean, either across all countries or within a specified group
* The indicator median, either across all countries or within a specified group
* The mean or median, within the aggregation group
* The expectation maximisation algorithm, via the [AMELIA](https://cran.r-project.org/web/packages/Amelia/index.html) package
### 6\.3\.1 By column
The `impute()` function follows a similar logic to other COINr functions. We can enter a COIN or a data frame, and get a COIN or a data frame back. Let’s impute on a COIN first, using the raw ASEM data.
```
# Assemble ASEM data if not done already (uncomment the following)
# ASEM <- assemble(IndData = COINr6::ASEMIndData, IndMeta = COINr6::ASEMIndMeta,
# AggMeta = COINr6::ASEMAggMeta)
# check how many NAs we have
sum(is.na(ASEM$Data$Raw))
## [1] 63
# impute using indicator mean
ASEM <- impute(ASEM, dset = "Raw", imtype = "ind_mean")
## Missing data points detected = 63
## Missing data points imputed = 63, using method = ind_mean
# check how many NAs after imputation
sum(is.na(ASEM$Data$Imputed))
## [1] 0
```
If the input is a COIN, by default the output is an updated COIN, with a new data set `.$Data$Imputed`. We can also set `out2 = "df"` to output the imputed data frame on its own.
The above example replaces `NA` values with the mean of the indicator, and setting `imtype = ind_median` will instead use the indicator median. In this respect, we are using information from other units, inside the same indicator, to estimate the missing values.
Imputing by group may be more informative if you have meaningful grouping variables. For example, the ASEM data set has groupings by GDP, population, GDP per capita and whether the country is European or Asian. We might hypothesise that it is better to replace `NA` values with the median inside a group, say by GDP group. This would imply that countries with a similar GDP are likely to have similar indicator values.
```
# impute using GDP group median
ASEM <- impute(ASEM, dset = "Raw", imtype = "indgroup_median", groupvar = "Group_GDP")
## Missing data points detected = 63
## Missing data points imputed = 63, using method = indgroup_median
```
### 6\.3\.2 By row
So far, the imputation has been using values from the same column (indicator). Another possibility is to use values from other indicators, i.e operate row\-wise. COINr offers two basic options in this respect: either to take the mean or median of the other indicators inside the aggregation group, and this can be done by setting `imtype = "agg_mean"` or `imtype = "agg_median"` respectively.
Both of these imputation methods will, for a given unit, take the mean or median of the *normalised* values of the other indicators in the aggregation group in the level above indicator level. The process is as follows, for each `NA` value:
1. Find indicators in the same aggregation group
2. Normalise these indicators using the min\-max method, so they each have a minimum of zero and a maximum of one.
3. Replace the `NA` with the mean or median of the normalised values, for the unit in question.
4. Reverse the min\-max transformation so that the indicators are returned to their original scale.
It’s important to realise that this is equivalent to aggregating with the mean or median without imputing first. This is because in the aggregation step, if we take the mean of a group of indicators, and there is a `NA` present, this value is excluded from the mean calculation. Doing this is mathematically equivalent to assigning the mean to that missing value. Therefore, one reason to use this imputation method is to see which values are being implicitly assigned as a result of excluding missing values from the aggregation step. This is sometimes known as “shadow imputation”.
### 6\.3\.3 By column and row
Finally, we can use a regression\-based *expectation maximisation* algorithm, imported via the `AMELIA` package. This estimates missing values using the values of the other indicators *and* the other units. This effectively involves regressing each indicator on other indicators, and predicting missing values.
A catch of this approach is that if the number of units is small compared to the number of indicators, there will not be enough observations to estimate the parameters of the model. This is definitely the case if you have more indicators than observations. To use the EM algorithm in this case, imputation is performed on aggregation groups. So, to use the EM approach in `COINr`, you have to specify the aggregation level to group indicators by.
```
# impute using EM, grouping indicators inside their pillars (level 2)
ASEM <- impute(ASEM, dset = "Raw", imtype = "EM", EMaglev = 2)
## Missing data points detected = 63
## Missing data points imputed = 63, using method = EM
```
This works, because the ASEM data set has 51 units and no more than eight indicators per pillar. However, if we were to use `EMaglev = 3`, i.e. try imputing indicators grouped into the two sub\-indexes, it does not work.
Finding the right aggregation level to impute at might involve a bit of trial and error, but the advantage of imputing by groups of indicators is that aggregation groups are usually composed of indicators that are similar in some respect. This makes it more likely that they are good predictors of the other indicators in their group.
The `AMELIA` package offers many more options for imputation that are not available through the `impute()` function, such as bootstrapping, time\-series imputation and more. As with plotting packages `COINr` aims to provide an easy and quick interface to standard imputation in `AMELIA`, but if you want to have full control you should use the `AMELIA` package directly. Other good imputation options are the [`MICE`](https://cran.r-project.org/web/packages/mice/index.html) package (Multivariate Imputation via Chained Equations) and the [`missForest`](https://cran.rstudio.com/web/packages/missForest/index.html) package (using random forests).
Finally, if you are interested in imputing data over time, i.e. using the most recent data point available with panel data, go back to the [Putting everything together](coins-the-currency-of-coinr.html#putting-everything-together) section, which explains how `assemble()` can be used to extract a single time point from panel data and impute using the latest year. This is deliberately done at the assembly stage because once constructed, a COIN represents only a single year of data.
Chapter 7 Denomination
======================
The first section here gives a bit of introduction to the concept of denomination. If you just want to know how to denominate in COINr, skip to the section on [Denominating in COINr](denomination.html#denominating-in-coinr).
7\.1 Concept
------------
*Denomination* refers to the operation of dividing one indicator by another. But why should we do that?
As anyone who has ever looked at a map will know, countries come in all different sizes. The problem is that many indicators, on their own, are strongly linked to the size of the country. That means that if we compare countries using these indicators directly, we will often get a ranking that has roughly the biggest countries at the top, and the smallest countries at the bottom.
To take an example, let’s examine some indicators in the ASEM data set, and introduce another plotting function in the process.
```
# load COINr package
library(COINr6)
library(magrittr) # for pipe operations
# build ASEM index
ASEM <- build_ASEM()
# plot international trade in goods against GDP
iplotIndDist2(ASEM, dsets = c("Denominators", "Raw"), icodes = c("Den_GDP", "Goods"))
```
\# (note: need to fix labelling and units of denominators)
The function `iplotIndDist2()` is similar to `iplotIndDist()` but allows plotting two indicators against each other. You can pick any indicator from any data set for each, including denominators.
What are these “denominators” anyway? Denominators are indicators that are used to scale other indicators, in order to remove the “size effect”. Typically, they are those related to physical or economic size, including GDP, population, land area and so on.
Anyway, looking at the plot above, it is clear that countries with a higher GDP have higher trade in international goods (e.g. Germany, China, UK, Japan, France), which is not very surprising. The problem comes when we want to include this indicator as a measure of connectivity: on its own, trade in goods simply summarises having a large economy. What is more interesting is to measure international trade *per unit GDP*, and this is done by dividing (i.e. **denominating**) the international trade of each country by its GDP. Let’s do that manually here and see what we get.
```
# divide trade by GDP
tradeperGDP <- ASEM$Data$Raw$Goods/ASEM$Input$Denominators$Den_GDP
# bar chart: add unit names first
iplotBar(data.frame(UnitCode = ASEM$Data$Raw$UnitCode, TradePerGDP = tradeperGDP))
```
Now the picture is completely different \- small countries like Slovakia, Czech Republic and Singapore have the highest values.
The rankings here are completely different because the meanings of these two measures are completely different. Denomination is in fact a nonlinear transformation, because every value is divided by a different value (each country is divided by its own unique GDP, in this case). That doesn’t mean that denominated indicators are suddenly more “right” than they were before denomination, however. Trade per GDP is a useful measure of how much a country’s economy is dependent on international trade, but in terms of economic power, it might not be meaningful to scale by GDP. In summary, it is important to consider the meaning of the indicator compared to what you want to actually measure.
More precisely, indicators can be thought of as either *intensive* or *extensive* variables. Intensive variables are not (or only weakly) related to the size of the country, and allow “fair” comparisons between countries of different sizes. Extensive variables, on the contrary, are strongly related to the size of the country.
This distinction is well known in physics, for example. Mass is related to the size of the object and is an extensive variable. If we take a block of steel, and double its size (volume), we also double its mass. Density, which is mass per unit volume, is an intensive quantity: if we double the size of the block, the density remains the same.
* An example of an extensive variable is population. Bigger countries tend to have bigger populations.
* An example of an intensive variable is population density. This is no longer dependent on the physical size of the country.
The summary here is that an **extensive** variable becomes an **intensive** variable when we divide it by a **denominator**.
An important point to make here is about ordering. In the example above, we have simply divided two data frames by one another. To get the right result, we need to make sure that the units (rows) match properly in both data frames, as well as the columns. The example above is correct because the denominator data and indicator data originally came from the same data frame (`ASEMIndData`). However normally it would be better to use R’s `match()` or `merge()` functions, or else similar tidyverse equivalents, to ensure no errors arise. In COINr, this ordering problem is automatically taken care of.
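As a small illustration of that point, a safer version of the manual division above would align the denominator rows to the indicator rows by unit code before dividing (a sketch using base R’s `match()`; in practice `denominate()`, described below, takes care of this for you).

```
# align denominator rows to indicator rows by unit code, then divide
idx <- match(ASEM$Data$Raw$UnitCode, ASEM$Input$Denominators$UnitCode)
tradeperGDP <- ASEM$Data$Raw$Goods / ASEM$Input$Denominators$Den_GDP[idx]
```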
7\.2 Denominating in COINr
--------------------------
Denomination is fairly simple to do in R, it’s just dividing one vector by another. Nevertheless, COINr has a dedicated function for denominating which makes life easier and helps you to track what you have done.
Before we get to that, it’s worth mentioning that COINr has a few built\-in denominators sourced from the World Bank, stored in the `WorldDenoms` data set, which looks like this:
```
WorldDenoms
## # A tibble: 249 x 7
## UnitName UnitCode Den_GDP Den_Pop Den_Area Den_GDPpc Den_IncomeGroup
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 Afghanistan AFG 1.93e10 3.80e7 652860 507. Low income
## 2 Albania ALB 1.53e10 2.85e6 27400 5353. Upper middle in~
## 3 Algeria DZA 1.71e11 4.31e7 2381740 3974. Lower middle in~
## 4 American Sam~ ASM 6.36e 8 5.53e4 200 11467. Upper middle in~
## 5 Andorra AND 3.15e 9 7.71e4 470 40886. High income
## 6 Angola AGO 8.88e10 3.18e7 1246700 2791. Lower middle in~
## 7 Anguilla AIA NA NA NA NA <NA>
## 8 Antarctica ATA NA NA NA NA <NA>
## 9 Antigua and ~ ATG 1.66e 9 9.71e4 440 17113. High income
## 10 Argentina ARG 4.45e11 4.49e7 2736690 9912. Upper middle in~
## # ... with 239 more rows
```
and the metadata can be found by calling `?WorldDenoms`. Data here is the latest available as of February 2021 and I would recommend using these only for exploration, then updating your data yourself.
To denominate your indicators in COINr, the function to call is `denominate()`. Like other COINr functions, this can be used either independently on a data frame of indicators, or on COINs. Consider that in all cases you need three main ingredients:
1. Some indicator data that should be denominated
2. Some other indicators to use as denominators
3. A mapping to say which denominators (if any) to use for each indicator.
### 7\.2\.1 On COINs
If you are working with a COIN, the indicator data will be present in the `.$Data` folder. If you specified any denominators in `IndData` (i.e. columns beginning with “Den\_”) when calling `assemble()` you will also find them in `.$Input$Denominators`. Finally, if you specified a `Denominator` column in `IndMeta` when calling `assemble()` then the mapping of denominators to indicators will also be present.
```
# The raw indicator data which will be denominated
ASEM$Data$Raw
## # A tibble: 51 x 56
## UnitCode UnitName Year Group_GDP Group_GDPpc Group_Pop Group_EurAsia Goods
## <chr> <chr> <dbl> <chr> <chr> <chr> <chr> <dbl>
## 1 AUT Austria 2018 L XL M Europe 278.
## 2 BEL Belgium 2018 L L L Europe 598.
## 3 BGR Bulgaria 2018 S S M Europe 42.8
## 4 HRV Croatia 2018 S M S Europe 28.4
## 5 CYP Cyprus 2018 S L S Europe 8.77
## 6 CZE Czech Re~ 2018 M L M Europe 274.
## 7 DNK Denmark 2018 L XL M Europe 147.
## 8 EST Estonia 2018 S M S Europe 28.2
## 9 FIN Finland 2018 M XL M Europe 102.
## 10 FRA France 2018 XL L L Europe 849.
## # ... with 41 more rows, and 48 more variables: Services <dbl>, FDI <dbl>,
## # PRemit <dbl>, ForPort <dbl>, CostImpEx <dbl>, Tariff <dbl>, TBTs <dbl>,
## # TIRcon <dbl>, RTAs <dbl>, Visa <dbl>, StMob <dbl>, Research <dbl>,
## # Pat <dbl>, CultServ <dbl>, CultGood <dbl>, Tourist <dbl>, MigStock <dbl>,
## # Lang <dbl>, LPI <dbl>, Flights <dbl>, Ship <dbl>, Bord <dbl>, Elec <dbl>,
## # Gas <dbl>, ConSpeed <dbl>, Cov4G <dbl>, Embs <dbl>, IGOs <dbl>,
## # UNVote <dbl>, Renew <dbl>, PrimEner <dbl>, CO2 <dbl>, MatCon <dbl>, ...
# The denominators
ASEM$Input$Denominators
## # A tibble: 51 x 11
## UnitCode UnitName Year Group_GDP Group_GDPpc Group_Pop Group_EurAsia
## <chr> <chr> <dbl> <chr> <chr> <chr> <chr>
## 1 AUT Austria 2018 L XL M Europe
## 2 BEL Belgium 2018 L L L Europe
## 3 BGR Bulgaria 2018 S S M Europe
## 4 HRV Croatia 2018 S M S Europe
## 5 CYP Cyprus 2018 S L S Europe
## 6 CZE Czech Republic 2018 M L M Europe
## 7 DNK Denmark 2018 L XL M Europe
## 8 EST Estonia 2018 S M S Europe
## 9 FIN Finland 2018 M XL M Europe
## 10 FRA France 2018 XL L L Europe
## # ... with 41 more rows, and 4 more variables: Den_Area <dbl>,
## # Den_Energy <dbl>, Den_GDP <dbl>, Den_Pop <dbl>
# The mapping of denominators to indicators (see Denominator column)
ASEM$Input$IndMeta
## # A tibble: 49 x 10
## IndName IndCode Direction IndWeight Denominator IndUnit Target Agg1 Agg2
## <chr> <chr> <dbl> <dbl> <chr> <chr> <dbl> <chr> <chr>
## 1 Trade in~ Goods 1 1 Den_GDP Trilli~ 1.82e+3 ConE~ Conn
## 2 Trade in~ Servic~ 1 1 Den_GDP Millio~ 6.24e+2 ConE~ Conn
## 3 Foreign ~ FDI 1 1 Den_GDP Billio~ 7.18e+1 ConE~ Conn
## 4 Personal~ PRemit 1 1 Den_GDP Millio~ 2.87e+1 ConE~ Conn
## 5 Foreign ~ ForPort 1 1 Den_GDP Millio~ 1.01e+4 ConE~ Conn
## 6 Cost to ~ CostIm~ -1 1 <NA> Curren~ 4.96e+1 Inst~ Conn
## 7 Mean tar~ Tariff -1 1 <NA> Percent 5.26e-1 Inst~ Conn
## 8 Technica~ TBTs -1 1 <NA> Number~ 8.86e+1 Inst~ Conn
## 9 Signator~ TIRcon 1 1 <NA> (1 (ye~ 9.5 e-1 Inst~ Conn
## 10 Regional~ RTAs 1 1 <NA> Number~ 4.38e+1 Inst~ Conn
## # ... with 39 more rows, and 1 more variable: Agg3 <chr>
```
COINr’s `denominate()` function knows where to look for each of these ingredients, so we can simply call:
```
ASEM <- denominate(ASEM, dset = "Raw")
```
which will return a new data set `.$Data$Denominated`. To return the data set directly, rather than outputting an updated COIN, you can also set `out2 = "df"` (this is a common argument to many functions which can be useful if you want to examine the result directly).
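For example (a small sketch mirroring the call above, assuming the same ASEM COIN):

```
# return the denominated data as a data frame instead of an updated COIN
denomData <- denominate(ASEM, dset = "Raw", out2 = "df")
```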
You can also change which indicators are denominated, and by what.
```
# Get denominator specification from metadata
denomby_meta <- ASEM$Input$IndMeta$Denominator
# Example: we want to change the denominator of flights from population to GDP
denomby_meta[ASEM$Input$IndMeta$IndCode == "Flights"] <- "Den_GDP"
# Now re-denominate. Return data frame for inspection
ASEM <- denominate(ASEM, dset = "Raw", specby = "user", denomby = denomby_meta)
```
Here we have changed the denominator of one of the indicators, “Flights”, to GDP. This is done by creating a character vector `denomby_meta` (copied from the original denominator specification) which has an entry for each indicator, specifying which denominator to use, if any. We then changed the entry corresponding to “Flights”. This overwrites any previous denomination. If you want to keep and compare alternative specifications, see the chapter on [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons).
Let’s compare the raw Flights data with the Flights per GDP data:
```
# plot raw flights against denominated
iplotIndDist2(ASEM, dsets = c("Raw", "Denominated"), icodes = "Flights")
```
Clearly, the denominated and raw indicators are very different from one another, reflecting the completely different meaning.
### 7\.2\.2 On data frames
If you are just working with data frames, you need to supply the three ingredients directly to the function. Here we will take some of the ASEM data for illustration (recalling that both indicator and denominator data is specified in `ASEMIndMeta`).
```
# a small data frame of indicator data
IndData <- ASEMIndData[c("UnitCode", "Goods", "Services", "FDI")]
IndData
## # A tibble: 51 x 4
## UnitCode Goods Services FDI
## <chr> <dbl> <dbl> <dbl>
## 1 AUT 278. 108. 5
## 2 BEL 598. 216. 5.71
## 3 BGR 42.8 13.0 1.35
## 4 HRV 28.4 17.4 0.387
## 5 CYP 8.77 15.2 1.23
## 6 CZE 274. 43.5 3.88
## 7 DNK 147. 114. 9.1
## 8 EST 28.2 10.2 0.58
## 9 FIN 102. 53.8 6.03
## 10 FRA 849. 471. 30.9
## # ... with 41 more rows
# two selected denominators
Denoms <- ASEMIndData[c("UnitCode", "Den_Pop", "Den_GDP")]
# denominate the data
IndDataDenom <- denominate(IndData, denomby = c("Den_GDP", NA, "Den_Pop"), denominators = Denoms)
IndDataDenom
## # A tibble: 51 x 4
## UnitCode Goods Services FDI
## <chr> <dbl> <dbl> <dbl>
## 1 AUT 0.712 108. 0.000572
## 2 BEL 1.28 216. 0.000500
## 3 BGR 0.804 13.0 0.000191
## 4 HRV 0.554 17.4 0.0000924
## 5 CYP 0.437 15.2 0.00104
## 6 CZE 1.40 43.5 0.000365
## 7 DNK 0.478 114. 0.00159
## 8 EST 1.21 10.2 0.000443
## 9 FIN 0.426 53.8 0.00109
## 10 FRA 0.344 471. 0.000476
## # ... with 41 more rows
```
Since the input is recognised as a data frame, you don’t need to specify any other arguments, and the output is automatically a data frame. Note how `denomby` works: here it specifies that “Goods” should be denominated by “Den\_GDP”, that “Services” should not be denominated, and that “FDI” should be denominated by “Den\_Pop”.
7\.3 When to denominate, and by what?
-------------------------------------
Denomination is mathematically very simple, but from a conceptual point of view it needs to be handled with care. As we have shown, denominating an indicator will usually completely change it, and will have a corresponding impact on the results.
There are two ways of looking at the problem. The first is from the conceptual point of view: consider each indicator and whether it fits with the aim of your index. Some indicators are intensive anyway, such as the percentage of tertiary graduates. Others, such as trade, will be strongly linked to the size of the country. In those cases, consider whether countries with high trade values should have higher scores in your index or not. Or should it be trade as a percentage of GDP? Or trade per capita? Each of these will have a different meaning.
Sometimes extensive variables will anyway be the right choice. The [Lowy Asia Power Index](https://power.lowyinstitute.org/) measures the power of each country in an absolute sense: in this case, the size of the country is all\-important and e.g. trade or military capabilities per capita would not make much sense.
The second (complementary) way to approach denomination is from a statistical point of view. We can check which indicators are strongly related to the size of a country by correlating them with some typical denominators, such as the ones used here. The `getStats()` function does just this, checking the correlations between each indicator and each denominator, and flagging any possible high correlations.
```
# get statistics on raw data set
ASEM <- getStats(ASEM, dset = "Raw", out2 = "COIN", t_denom = 0.8)
## Number of collinear indicators = 3
## Number of signficant negative indicator correlations = 322
## Number of indicators with high denominator correlations = 4
# get the result
ctable <- ASEM$Analysis$Raw$DenomCorrelations
# remove first column
ctable <- ctable[-1]
# return only columns that have entries with correlation above 0.8
ctable[,colSums((abs(ctable) > 0.8))>0]
## Goods FDI StMob CultGood
## 1 0.2473558 0.3668625 0.5228032 0.2696675
## 2 0.6276219 0.7405170 0.7345030 0.7120445
## 3 0.8026770 0.8482453 0.8379575 0.8510374
## 4 0.4158811 0.6529376 0.5523545 0.5028121
```
The matrix that is displayed above only includes columns where there is a correlation value (between an indicator and any denominator) of greater than 0\.8\. The results show some interesting patterns:
* Many indicators have a high positive correlation with GDP, including flights, trade, foreign direct investment (very high), personal remittances, research, stock of migrants, and so on.
* Bigger countries, in terms of land area, tend to have *fewer* trade agreements (RTAs) and a more restrictive visa policy
* Larger populations are associated with higher levels of poverty
The purpose here is not to draw causal links between these quantities, although they might reveal interesting patterns. Rather, these might suggest which quantities to denominate by, if the choices also work on a conceptual level.
Chapter 8 Data Treatment
========================
Data treatment is the process of altering indicators to improve their statistical properties, mainly for the purposes of aggregation.
Data treatment is a delicate subject, because it essentially involves changing the values of certain observations, or transforming an entire distribution. This entails balancing two opposing considerations:
* On the one hand, treatment should be used as sparingly as possible, because you are altering one or more known data points.
* On the other hand, *this is only done for the purposes of aggregation* (i.e. creating a composite score), and since composite indicators are normally presented with the index scores and original data accessible underneath, the underlying data would normally be presented in its original form.
Therefore, be careful, but also realise that data treatment is not unnerving or unethical; it is simply another assumption in a statistical process. Like any other step or assumption though, any data treatment should be carefully recorded and its implications understood.
8\.1 Why treat data?
--------------------
There can be many reasons to treat data, but in composite indicators the main reason is to remove outliers or adjust heavily\-skewed distributions. Outliers can exist because of errors in measurement and data processing, and should always be double\-checked. But often, they are simply a reflection of reality. Outliers and skewed distributions are common in economics. One has to look no further than the in\-built ASEM data set, and the denominating variables of population, GDP, energy consumption and country area:
```
plotIndDist(WorldDenoms)
```
To illustrate why this can be a problem, consider the following artificial example.
```
library(plotly)
# normally-distributed data
outlierdata <- rnorm(100)
# two outliers
outlierdata <- c(outlierdata, 10, 25)
# plot
plot_ly(x = outlierdata, type = "histogram")
```
Here we have the large majority of observations with values which are crammed into the bottom fifth of the scale, and two observations that are much higher, i.e. they are outliers. If we were to normalise this distribution using a min\-max method scaled to \[0, 100], for example, this would produce an indicator where the large majority of units have a very low score.
This might not reflect the narrative that you want to convey. It may be, for example, that values above the median are “good”, and in this case, values in the 1\.5\-2\.5 range are “very good”. Two units have truly exceptional values, but that shouldn’t take away from the fact that other units have values that are considered to be good.
The problem is that by normalising to \[0, 100], these units with “very good” values will have normalised scores of around 20 (out of 100\), which when compared to other indicators, is a “bad” score. And this will be reflected in the aggregated score. In summary, the outlying values are “dominating” the scale, which reduces the discriminatory power of the indicator.
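To see this numerically, we can apply a quick min\-max rescaling to the artificial data from above (a small sketch continuing that example):

```
# min-max rescale to [0, 100] and inspect the distribution of scores
minmax100 <- function(x) 100 * (x - min(x)) / (max(x) - min(x))
summary(minmax100(outlierdata))
# most scores sit in the bottom fifth of the scale because the two outliers stretch it
```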
A few things to consider here are that:
1. The outlier problem in composite indicators is mostly linked to the fact that you are aggregating indicators. If you are not aggregating, the outlying values may not be so much of a problem.
2. It might be that you *want* to keep the distribution as it is, and let the indicator be defined by its outliers. This will depend on the objectives of the index.
3. If you do wish to do something about the outlying values, there are two possibilities. One is to treat the data, and this is described in the rest of this chapter. The second is to use a normalisation method that is less sensitive to outliers \- this is dealt with in the [Normalisation](appendix-building-a-composite-indicator-example.html#normalisation-1) chapter.
8\.2 How to treat data
----------------------
If you decide to treat the data before normalising, there are a two main options.
### 8\.2\.1 Winsorisation
The first is to *Winsorise* the data. Winsorisation involves reassigning outlying points to the next highest point, or to a percentile value. To do this manually, using the data from the previous section, it would look like this:
```
# get position of maximum value
imax <- which(outlierdata==max(outlierdata))
# reassign with next highest value
outlierdata[imax] <- max(outlierdata[-imax])
plot_ly(x = outlierdata, type = "histogram")
```
```
# and let's do it again
imax <- which(outlierdata==max(outlierdata))
outlierdata[imax] <- max(outlierdata[-imax])
plot_ly(x = outlierdata, type = "histogram")
```
And now we have arrived back to a normal distribution with no outliers, which is well\-spread over the scale. Of course, this has come at the cost of actually moving data points. However keep in mind that this is only done for the purposes of aggregation, and the original indicator data would still be retained. A helpful way of looking at Winsorisation is that it is like capping the scale: it is enough to know that certain units have the highest score, without needing to know that they are ten times higher than the other units.
Clearly, outliers could also occur from the lower end of the scale, in which case Winsorisation would use the minimum values.
Notice that in general, Winsorisation does not change the shape of the distribution, apart from the outlying values. Therefore it is suited to distributions like the example, which are well\-spread except for some few outlying values.
### 8\.2\.2 Transformation
The second option is to transform the distribution, by applying a transformation that changes all of the data points and thus the overall shape of the distribution.
The most obvious transformation in this respect is to take the logarithm. Denoting \\(x\\) as the original indicator and \\(x'\\) as the transformed indicator, this would simply be:
\\\[ x' \= \\ln(x) \\]
This is a sensible choice because skewed distributions are often roughly log\-normal, and taking the log of a log\-normal distribution results in a normal distribution. However, this will not work for zero or negative values. In that case, an alternative is:
\\\[ x' \= \\ln(x\- \\text{min}(x)\+a), \\; \\; \\text{where}\\; \\; a \= 0\.01(\\text{max}(x)\-\\text{min}(x)) \\]
The logic being that by subtracting the minimum and adding something, all values will be positive, so they can be safely log\-transformed. This formula is similar to that used in the COIN Tool, but with an adjustment. In the COIN Tool, \\(a\=1\\), which, depending on the range of the indicator, can give only a gentle and sometimes near\-linear transformation. By setting \\(a\\) to be a set percentage of the range, it is ensured that the shape of the transformation is consistent.
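Written as a function, the adjusted log transform above might look like this (a sketch, not the COINr implementation):

```
# shifted log transform: handles indicators containing zero or negative values
shifted_log <- function(x){
  a <- 0.01 * (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
  log(x - min(x, na.rm = TRUE) + a)
}
```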
A more general family of transformations is the *Box\-Cox* family. These transformations are given as:
\\\[ x'(\\lambda) \=
\\begin{cases}
\\frac{x^\\lambda\-1}{\\lambda} \& \\text{if}\\ \\lambda \\neq 0 \\\\
\\ln(x) \& \\text{if}\\ \\lambda \= 0
\\end{cases} \\]
In other words, a log transformation or a power transformation. The Box Cox transformation is often accompanied by an optimisation which chooses the \\(\\lambda\\) value to best approximate the normal distribution.
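For illustration, the Box\-Cox transformation can be written as a short function with a user\-chosen \\(\\lambda\\) (a sketch; COINr’s own implementation is controlled through the `boxlam` argument mentioned later in this chapter):

```
# Box-Cox transformation for a chosen lambda
boxcox <- function(x, lambda){
  if (lambda == 0) log(x) else (x^lambda - 1) / lambda
}
```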
Compared to Winsorisation, log transforms and Box\-Cox transforms perform the transformation on every data point. To see the effect of this, we can take the log of one of the denominator indicators in `WorldDenoms`, the national GDPs of over 200 countries in 2019\.
```
# make df with original indicator and log-transformed version
df <- data.frame(GDP = WorldDenoms$Den_GDP,
LogGDP = log(WorldDenoms$Den_GDP))
plotIndDist(df, type = "Histogram")
```
And there we see a beautiful normal distribution as a result.
8\.3 Data treatment in COINr
----------------------------
COINr data treatment has a number of options and gives the user a fairly high degree of control in terms of which kind of treatment should be applied, either for all indicators at once, or for individual indicators. We will first explore the default data treatment options that are applied to all indicators, then see how to make exceptions and specific treatment requests.
### 8\.3\.1 Global treatment
The “default” treatment in COINr is similar to that applied in the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en), and widely used in composite indicator construction. It follows a basic process of trying to Winsorise up to a specified limit, followed by a transformation if the Winsorisation does not sufficiently “correct” the distribution. In short, for each indicator it:
1. Checks skew and kurtosis values
2. If absolute skew is greater than a threshold (default 2\) AND kurtosis is greater than a threshold (default 3\.5\):
1. Successively Winsorise up to a specified maximum number of points. If either skew or kurtosis goes below thresholds, stop. If after reaching the maximum number of points, both thresholds are still exceeded, then:
2. Perform a transformation
This is the default process, although individual specifications can be made for individual indicators \- see the next section.
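To make the logic above explicit, here is a rough sketch of the default process for a single indicator vector `x` (an illustration only, not the COINr code; it assumes the `moments` package for skewness and kurtosis, a hypothetical vector `x`, and positive skew so that the highest points are capped):

```
# rough sketch of the default treatment logic for one indicator x (hypothetical vector)
library(moments)   # skewness() and kurtosis()
winmax <- 5; t_skew <- 2; t_kurt <- 3.5
n_winz <- 0
while (abs(skewness(x)) > t_skew && kurtosis(x) > t_kurt && n_winz < winmax) {
  imax <- which.max(x)        # assuming positive skew: Winsorise the highest point
  x[imax] <- max(x[-imax])
  n_winz <- n_winz + 1
}
# if thresholds are still exceeded after winmax points, fall back to a transformation
if (abs(skewness(x)) > t_skew && kurtosis(x) > t_kurt) x <- log(x)
```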
The data treatment function in COINr is called `treat()`. To perform a default data treatment, we can simply use the “COIN in, COIN out” approach as usual.
```
# build ASEM data set, up to denomination
ASEM <- assemble(IndData = COINr6::ASEMIndData, IndMeta = COINr6::ASEMIndMeta,
AggMeta = COINr6::ASEMAggMeta)
ASEM <- denominate(ASEM, dset = "Raw")
# treat at defaults
ASEM <- treat(ASEM, dset = "Denominated", winmax = 5)
# Check which indicators were treated, and how
library(dplyr, quietly = T, verbose = F)
ASEM$Analysis$Treated$TreatSummary %>% filter(Treatment != "None")
## IndCode Low High TreatSpec Treatment
## V2 Services 0 4 Default, winmax = 5 Winsorised 4 points
## V3 FDI 0 2 Default, winmax = 5 Winsorised 2 points
## V5 ForPort 0 3 Default, winmax = 5 Winsorised 3 points
## V6 CostImpEx 0 1 Default, winmax = 5 Winsorised 1 points
## V7 Tariff 0 3 Default, winmax = 5 Winsorised 3 points
## V12 StMob 0 2 Default, winmax = 5 Winsorised 2 points
## V15 CultServ 0 2 Default, winmax = 5 Winsorised 2 points
## V21 Flights 0 1 Default, winmax = 5 Winsorised 1 points
## V23 Bord 0 3 Default, winmax = 5 Winsorised 3 points
## V25 Gas 0 3 Default, winmax = 5 Winsorised 3 points
## V35 Forest 0 2 Default, winmax = 5 Winsorised 2 points
## V36 Poverty 0 3 Default, winmax = 5 Winsorised 3 points
## V41 NGOs 0 3 Default, winmax = 5 Winsorised 3 points
## V43 FemLab 1 0 Default, winmax = 5 Winsorised 1 points
```
The results of `treat()` are a new data set `.Data$Treated` with the treated data in it, and a details of the data treatment in `$Analysis$Treated`, including `TreatSummary` (shown above), which gives a summary of the data treatment specified and actually applied for each indicator. In this case, all indicators were successfully brought within the specified skew and kurtosis limits by Winsorisation, within the specified Winsorisation limit `winmax` of five points.
To see what happens if `winmax` is exceeded, we will lower the threshold to `winmax = 3`. Additionally, we can control what type of log transform should be applied when `winmax` is exceeded, using the `deflog` argument. We will set `deflog = "CTlog"` which ensures that negative values will not cause errors.
```
# treat at defaults
ASEM <- treat(ASEM, dset = "Denominated", winmax = 3, deflog = "CTlog")
# Check which indicators were treated, and how
ASEM$Analysis$Treated$TreatSummary %>% filter(Treatment != "None")
## IndCode Low High TreatSpec Treatment
## V2 Services 0 0 Default, winmax = 3 CTLog (exceeded winmax)
## V3 FDI 0 2 Default, winmax = 3 Winsorised 2 points
## V5 ForPort 0 3 Default, winmax = 3 Winsorised 3 points
## V6 CostImpEx 0 1 Default, winmax = 3 Winsorised 1 points
## V7 Tariff 0 3 Default, winmax = 3 Winsorised 3 points
## V12 StMob 0 2 Default, winmax = 3 Winsorised 2 points
## V15 CultServ 0 2 Default, winmax = 3 Winsorised 2 points
## V21 Flights 0 1 Default, winmax = 3 Winsorised 1 points
## V23 Bord 0 3 Default, winmax = 3 Winsorised 3 points
## V25 Gas 0 3 Default, winmax = 3 Winsorised 3 points
## V35 Forest 0 2 Default, winmax = 3 Winsorised 2 points
## V36 Poverty 0 3 Default, winmax = 3 Winsorised 3 points
## V41 NGOs 0 3 Default, winmax = 3 Winsorised 3 points
## V43 FemLab 1 0 Default, winmax = 3 Winsorised 1 points
# Compare before and after treatment
iplotIndDist2(ASEM, dsets = c("Denominated", "Treated"), icodes = "Services",
ptype = "Histogram")
```
Now we can see from the table that the Services indicator has had a log transformation applied, and from the histograms that the skewness has been treated. The range of the indicator is now different and includes negative values, but this will be fixed anyway in the normalisation step. The important thing here is the shape of the distribution.
Other options for `deflog` are “log” (a standard log transform), “GIIlog”, which is a scaled log transformation used by the [Global Innovation Index](https://www.globalinnovationindex.org/), and “boxcox”, which uses a Box\-Cox transformation with the \\(\\lambda\\) parameter set by the `boxlam` argument to `treat()`. Optimised \\(\\lambda\\) values are not currently available in COINr, though this may be considered for future versions, along with custom transformation functions.
A further point to note is that Winsorisation can work in two ways in COINr, specified by the `winchange` argument. If set to `FALSE`, then Winsorisation works in the following way:
1. Perform an initial check of skew and kurtosis (call them \\(s\_0\\) and \\(k\_0\\)). Only if **both** exceed thresholds, continue to 2\. Otherwise no treatment is applied.
2. If \\(s\_0\\) is positive (usually the case), begin by Winsorising the highest value (assign the value of the second\-highest point).
3. If \\(s\_0\\) is negative, begin by Winsorising the lowest value (assign the value of the second\-lowest point).
4. If either skew or kurtosis falls below its threshold, stop; otherwise
5. Go to 2\. if \\(s\_0\\) was positive, or 3\. if \\(s\_0\\) was negative.
Notably, the *direction* of the Winsorisation here is always the same, i.e. if \\(s\_0\\) was positive, Winsorisation will always be applied to the highest values. The only drawback here is that the sign of the skew can conceivably change during the Winsorisation process. For that reason, if you set `winchange = TRUE` (which is the default anyway), the function will check after every Winsorised point whether to Winsorise high values or low values.
It might seem unlikely that the skew could change sign from one iteration to the next, *and* still remain outside the absolute threshold. But this has been observed to happen in some cases.
Finally, you can set the skewness and kurtosis thresholds to your preferred values using the `t_skew` and `t_kurt` arguments to `treat()`.
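For example, a call with stricter thresholds and a larger Winsorisation cap might look like this (the values are arbitrary and only for illustration):

```
# stricter skew/kurtosis thresholds and a larger Winsorisation cap
ASEM <- treat(ASEM, dset = "Denominated", winmax = 8, t_skew = 1.5, t_kurt = 3)
```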
### 8\.3\.2 Individual treatment
If you want more control over the treatment of individual indicators, this can be done by specifying the optional `individual` and `indiv_only` arguments. `individual` is a data frame which specifies the treatment to apply to specific indicators. It looks like this:
```
# Example of "individual" data frame spec
individual = data.frame(
IndCode = c("Bord", "Services", "Flights", "Tariff"),
Treat = c("win", "log", "none", "boxcox"),
Winmax = c(10, NA, NA, NA),
Thresh = c("thresh", NA, NA, NA),
Boxlam = c(NA, NA, NA, 2)
)
individual
## IndCode Treat Winmax Thresh Boxlam
## 1 Bord win 10 thresh NA
## 2 Services log NA <NA> NA
## 3 Flights none NA <NA> NA
## 4 Tariff boxcox NA <NA> 2
```
Each column works as follows:

* “IndCode” lists the indicators to which a specific treatment should be applied.
* “Treat” dictates which treatment to apply: options are either Winsorisation (“win”), or any of the `deflog` options described previously.
* “Winmax” gives the maximum number of points to Winsorise for that indicator. This value is only used if the corresponding row of “Treat” is set to “win”.
* “Thresh” specifies whether to use skew/kurtosis thresholds or not. If “thresh”, it uses the thresholds specified in the `thresh` argument of `treat()`. If instead it is set to `NA`, the thresholds are ignored and the Winsorisation is forced to Winsorise exactly `winmax` points.
* “Boxlam” allows an individual setting of the `boxlam` parameter if the corresponding entry of the “Treat” column is set to “boxcox”.

If `individual` is specified, all of these columns need to be present regardless of the individual treatments specified. Where entries are not needed, you can simply put `NA`.
An important point is that the treatment specified in the “Treat” column is forced on the indicator: for example, if “CTlog” is specified, it will be applied directly, without any Winsorisation first. Conversely, if “win” is specified, only Winsorisation will be applied, without any subsequent log transform even if it fails to correct the distribution.
Finally, the `indiv_only` argument: if `TRUE`, ONLY the indicators listed in `individual` are treated, and the rest get no treatment. If `FALSE`, the remaining indicators are subject to the default treatment process.
Let’s now put this into practice.
```
ASEM <- treat(ASEM, dset = "Denominated", individual = individual,
indiv_only = FALSE)
# Check which indicators were treated, and how
ASEM$Analysis$Treated$TreatSummary %>% filter(Treatment != "None")
## IndCode Low High TreatSpec
## V2 Services 0 0 Default, winmax = unspecified
## V3 FDI 0 2 Default, winmax = 6
## V5 ForPort 0 3 Default, winmax = 6
## V6 CostImpEx 0 1 Default, winmax = 6
## V7 Tariff 0 0 Default, winmax = unspecified
## V12 StMob 0 2 Default, winmax = 6
## V15 CultServ 0 2 Default, winmax = 6
## V23 Bord 0 3 Forced Win, winmax = 10
## V25 Gas 0 3 Default, winmax = 6
## V35 Forest 0 2 Default, winmax = 6
## V36 Poverty 0 3 Default, winmax = 6
## V41 NGOs 0 3 Default, winmax = 6
## V43 FemLab 1 0 Default, winmax = 6
## Treatment
## V2 Log (exceeded winmax)
## V3 Winsorised 2 points
## V5 Winsorised 3 points
## V6 Winsorised 1 points
## V7 Box Cox with lambda = 2 (exceeded winmax)
## V12 Winsorised 2 points
## V15 Winsorised 2 points
## V23 Winsorised 3 points
## V25 Winsorised 3 points
## V35 Winsorised 2 points
## V36 Winsorised 3 points
## V41 Winsorised 3 points
## V43 Winsorised 1 points
```
And this shows how some indicators have had certain treatments applied. Of course here, the individual treatments have been chosen arbitrarily, and the default treatment would have sufficed. In practice, you would apply individual treatment for specific indicators for good reasons, such as excessive skew, or not wanting to treat certain indicators for conceptual reasons.
8\.4 Interactive visualisation
------------------------------
As we have seen in this chapter, it’s essential to visualise your indicator distributions and check what treatment has been applied, and what the main differences are. This can be achieved with the plotting functions previously described, but could become cumbersome if you want to check many indicators.
COINr has a built\-in Shiny app which lets you compare indicator distributions interactively. Shiny is an R package which allows you to build interactive apps, dashboards and gadgets that can run R code. Shiny is a powerful tool for interactively exploring, communicating and presenting results, and can be used to host apps on the web. If you would like to know more about Shiny, I would recommend the book [Mastering Shiny](https://mastering-shiny.org/index.html).
To run the indicator visualisation app, simply run `indDash(ASEM)`. This will open a window which should look like this:
Figure 8\.1: indDash screenshot
The `indDash()` app is purely illustrative \- it does not return anything back to R, but simply displays the distributions of the existing indicators.
In the side panel, all data sets that exist in the `.$Data` folder are accessible, as well as any denominators, if they exist. Any of the indicators from any of these data sets can be plotted against each other.
In the context of data treatment, this gives a fast way to check the effects of treating indicators. In the ASEM example, we can set the data set of indicator 1 to “Denominated”, and the data set of indicator 2 to “Treated”. This will compare denominated against treated indicators, which is a sensible comparison since `treat()` was applied to `.Data$Denominated` in this case.
When a treated data set is selected, a table of treated indicators appears in the side panel \- this lists all indicators that had treatment applied to them, and gives the details of the treatments applied. You can click on column headings to sort by values, and this can give an easy way to see which indicators were treated. You can then select an indicator of interest using the dropdown menus. In the screenshot above, we are comparing “Gas” in the denominated and treated data sets.
This gives several comparison plots: histograms of each indicator, violin plots, and a scatter plot of one indicator against the other, as well as overlaid histograms. As with any Plotly plot, any of these can be instantly downloaded using the camera button that appears when you hover over the plot. This might be useful for generating quick reports, for example.
Below the violin plots are details of the skew and kurtosis of the indicator, and details of the treatment applied, if any.
Finally, when comparing a treated indicator against its un\-treated version, it can be helpful to check or un\-check the “Plot indicators on the same axis range” box. Checking this box gives a like\-for\-like comparison which shows how the data points have moved, particularly useful in the case of Winsorisation. Unchecking it may be more helpful to compare the shapes of the two distributions.
While `indDash()` has specific features for treated data, it is also useful as a general purpose tool for fast indicator visualisation and exploration. It can help, for example, to observe trends and relationships between indicators and denominators, and will work with the aggregated data, i.e. to see how the index may depend on its indicators.
8\.1 Why treat data?
--------------------
There can be many reasons to treat data, but in composite indicators the main reason is to remove outliers or adjust heavily\-skewed distributions. Outliers can exist because of errors in measurement and data processing, and should always be double\-checked. But often, they are simply a reflection of reality. Outliers and skewed distributions are common in economics. One need look no further than the in\-built ASEM data set, and the denominating variables of population, GDP, energy consumption and country area:
```
plotIndDist(WorldDenoms)
```
To illustrate why this can be a problem, consider the following artificial example.
```
library(plotly)
# normally-distributed data
outlierdata <- rnorm(100)
# two outliers
outlierdata <- c(outlierdata, 10, 25)
# plot
plot_ly(x = outlierdata, type = "histogram")
```
Here we have the large majority of observations with values which are crammed into the bottom fifth of the scale, and two observations that are much higher, i.e. they are outliers. If we were to normalise this distribution using a min\-max method scaled to \[0, 100], for example, this would produce an indicator where the large majority of units have a very low score.
This might not reflect the narrative that you want to convey. It may be, for example, that values above the median are “good”, and in this case, values in the 1\.5\-2\.5 range are “very good”. Two units have truly exceptional values, but that shouldn’t take away from the fact that other units have values that are considered to be good.
The problem is that by normalising to \[0, 100], these units with “very good” values will have normalised scores of around 20 (out of 100\), which when compared to other indicators, is a “bad” score. And this will be reflected in the aggregated score. In summary, the outlying values are “dominating” the scale, which reduces the discriminatory power of the indicator.
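As a quick numerical check, min-max scaling the `outlierdata` vector from above to \[0, 100] shows how compressed the non-outlying scores become (a base R sketch; COINr's own normalisation functions are covered in the next chapter):

```
# min-max scale to [0, 100] and inspect the spread
mm <- (outlierdata - min(outlierdata)) / (max(outlierdata) - min(outlierdata)) * 100
summary(mm) # most values sit in a narrow band near the bottom of the scale
```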
A few things to consider here are that:
1. The outlier problem in composite indicators is mostly linked to the fact that you are aggregating indicators. If you are not aggregating, the outlying values may not be so much of a problem.
2. It might be that you *want* to keep the distribution as it is, and let the indicator be defined by its outliers. This will depend on the objectives of the index.
3. If you do wish to do something about the outlying values, there are two possibilities. One is to treat the data, which is described in the rest of this chapter. The second is to use a normalisation method that is less sensitive to outliers \- this is dealt with in the [Normalisation](appendix-building-a-composite-indicator-example.html#normalisation-1) chapter.
8\.2 How to treat data
----------------------
If you decide to treat the data before normalising, there are two main options.
### 8\.2\.1 Winsorisation
The first is to *Winsorise* the data. Winsorisation involves reassigning outlying points to the next highest point, or to a percentile value. To do this manually, using the data from the previous section, it would look like this:
```
# get position of maximum value
imax <- which(outlierdata==max(outlierdata))
# reassign with next highest value
outlierdata[imax] <- max(outlierdata[-imax])
plot_ly(x = outlierdata, type = "histogram")
```
```
# and let's do it again
imax <- which(outlierdata==max(outlierdata))
outlierdata[imax] <- max(outlierdata[-imax])
plot_ly(x = outlierdata, type = "histogram")
```
And now we have arrived back at a normal distribution with no outliers, which is well\-spread over the scale. Of course, this has come at the cost of actually moving data points. However, keep in mind that this is only done for the purposes of aggregation, and the original indicator data would still be retained. A helpful way of looking at Winsorisation is that it is like capping the scale: it is enough to know that certain units have the highest score, without needing to know that they are ten times higher than the other units.
Clearly, outliers could also occur from the lower end of the scale, in which case Winsorisation would use the minimum values.
Notice that in general, Winsorisation does not change the shape of the distribution, apart from the outlying values. Therefore it is suited to distributions like the example, which are well\-spread except for some few outlying values.
### 8\.2\.2 Transformation
The second option is to transform the distribution, by applying a transformation that changes all of the data points and thus the overall shape of the distribution.
The most obvious transformation in this respect is to take the logarithm. Denoting \\(x\\) as the original indicator and \\(x'\\) as the transformed indicator, this would simply be:
\\\[ x' \= \\ln(x) \\]
This is a sensible choice because skewed distributions are often roughly log\-normal, and taking the log of a log\-normal distribution results in a normal distribution. However, this will not work for negative values. In that case, an alternative is:
\\\[ x' \= \\ln(x\- \\text{min}(x)\+a), \\; \\; \\text{where}\\; \\; a \= 0\.01(\\text{max}(x)\-\\text{min}(x)) \\]
The logic being that by subtracting the minimum and adding something, all values will be positive, so they can be safely log\-transformed. This formula is similar to that used in the COIN Tool, but with an adjustment. In the COIN Tool, \\(a\=1\\), which, depending on the range of the indicator, can give only a gentle and sometimes near\-linear transformation. By setting \\(a\\) to be a set percentage of the range, it is ensured that the shape of the transformation is consistent.
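Written as a small R helper, the shifted log transformation above might look like this (an illustrative sketch, not COINr's implementation):

```
# shifted log transform: safe for indicators containing zero or negative values
shift_log <- function(x, a_frac = 0.01) {
  a <- a_frac * (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
  log(x - min(x, na.rm = TRUE) + a)
}
```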
A general family of transformations are called the *Box\-Cox* transformations. These are given as:
\\\[ x'(\\lambda) \=
\\begin{cases}
\\frac{x^\\lambda\-1}{\\lambda} \& \\text{if}\\ \\lambda \\neq 0 \\\\
\\ln(x) \& \\text{if}\\ \\lambda \= 0
\\end{cases} \\]
In other words, a log transformation or a power transformation. The Box Cox transformation is often accompanied by an optimisation which chooses the \\(\\lambda\\) value to best approximate the normal distribution.
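As a sketch, the Box-Cox transformation can be written as follows (assuming positive \\(x\\), and without any optimisation of \\(\\lambda\\)):

```
# Box-Cox transformation for a given lambda (x assumed positive)
box_cox <- function(x, lambda) {
  if (lambda == 0) log(x) else (x^lambda - 1) / lambda
}
```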
Compared to Winsorisation, log transforms and Box\-Cox transforms change every data point. To see the effect of this, we can take the log of one of the denominator indicators in `WorldDenoms`, the national GDPs of over 200 countries in 2019\.
```
# make df with original indicator and log-transformed version
df <- data.frame(GDP = WorldDenoms$Den_GDP,
LogGDP = log(WorldDenoms$Den_GDP))
plotIndDist(df, type = "Histogram")
```
And there we see a beautiful normal distribution as a result.
Chapter 9 Normalisation
=======================
Normalisation is the operation of bringing indicators onto comparable scales so that they can be aggregated more fairly. To see why this is necessary, consider aggregating GDP values (billions or trillions of dollars) with percentage tertiary graduates (tens of percent). Average values here would make no sense because one is on a completely different scale to the other.
9\.1 Approaches
---------------
### 9\.1\.1 First: adjust direction
Indicators can either be positively or negatively related to the concept that you are trying to measure. For example, in an index of quality of life, median income would probably be a positive indicator. Prevalence of malnourishment would be a negative indicator (higher values should give lower scores in quality of life).
Accounting for these differences is considered part of the normalisation step. Indicators loaded into COINr should have a “Direction” column in the `IndMeta` input to `assemble()`, which is 1 for positive indicators, and \-1 for negative indicators. With this information, normalisation is a two step procedure:
1. Multiply the values of each indicator by their corresponding direction value (either 1 or \-1\).
2. Apply one of the normalisation methods described below.
It’s that simple. COINr has this built in, so you don’t need to do anything other than specify the directions.
### 9\.1\.2 Linear transformations
Normalisation is relatively simple but there are still a number of different approaches which have different properties.
Perhaps the most straightforward and intuitive option (and therefore probably the most widely used) is called the *min\-max* transformation. This is a simple linear function which rescales the indicator to have a minimum value \\(l\\), a maximum value \\(u\\), and consequently a range \\(u\-l\\), and is as follows:
\\\[ \\tilde{x} \= \\frac{ x \- x\_{\\text{min}} }{ x\_{\\text{max}} \- x\_{\\text{min}} } \\times (u\-l) \+ l\\]
where \\(\\tilde{x}\\) is the normalised indicator value. For example, if \\(l\=0\\) and \\(u\=100\\) this will rescale the indicator to lie exactly onto the interval \\(\[0, 100]\\). The transformation is linear because it does not change the *shape* of the distribution, it simply shrinks or expands it, and moves it.
A similar transformation is to take *z\-scores*, which instead use the mean and standard deviation as reference points:
\\\[ \\tilde{x} \= \\frac{ x \- \\mu\_x }{ \\sigma\_x } \\times a \+ b\\]
where \\(\\mu\_x\\) and \\(\\sigma\_x\\) are the mean and standard deviation of \\(x\\). The indicator is first re\-scaled to have mean zero and standard deviation of one. Then it is scaled by a factor \\(a\\) and moved by a distance \\(b\\). This is very similar to the min\-max transformation in that it can be reduced to multiplying by a factor and adding a constant, which is the definition of a linear transformation. However, the two approaches have different implications. One is that Z\-scores will generally be less sensitive to outliers, because the standard deviation is less dependent on an outlying value than the minimum or maximum.
Following the min\-max and z\-score, the general linear transformation is defined as:
\\\[ \\tilde{x} \= \\frac{ x \- p }{ q } \\times a \+ b\\]
and it is fairly straightforward to see how z\-scores and the min\-max transformations are special cases of this.
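To make the connection between these three transformations concrete, here they are written as plain R functions operating on a numeric vector (illustrative sketches; COINr's `normalise()` function, described below, handles this for whole data sets):

```
# linear normalisation sketches for a numeric vector x
minmax <- function(x, l = 0, u = 100) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE)) * (u - l) + l
}
zscore <- function(x, a = 1, b = 0) {
  (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE) * a + b
}
genlin <- function(x, p, q, a = 1, b = 0) (x - p) / q * a + b
```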
### 9\.1\.3 Nonlinear transformations
A simple nonlinear transformation is the rank transformation.
\\\[ \\tilde{x} \= \\text{rank}(x)\\]
where the ranks should be defined so that the lowest indicator value has a rank of 1, the second lowest a rank of 2, and so on. The rank transformation is attractive because it automatically eliminates any outliers. Therefore there would not usually be any need to treat the data previously. However, it converts detailed indicator scores to simple ranks, which might be too reductionist for some.
It’s worth pointing out that there are different ways to rank values, because of how ties (units with the same score) are handled. To read about this, just call `?rank` in R.
Similar approaches to simple ranks include Borda scores, which are simply the ranks described above but minus 1 (so the lowest score is 0 instead of 1\), and percentile ranks.
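A quick base R illustration of these rank-based options (the percentile-rank formula here is just one common convention):

```
x <- c(3, 1, 4, 1, 5)
rank(x) # ranks: lowest value gets rank 1, ties take average rank
rank(x) - 1 # Borda scores
(rank(x) - 1) / (length(x) - 1) * 100 # a simple percentile-rank scaling to [0, 100]
```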
### 9\.1\.4 Distances
Another approach is to use the distance of each score to some reference value. Possibilities here are the (normalised) distance to the maximum value of the indicator:
\\\[ \\tilde{x} \= 1 \- \\frac{\\text{max}(x) \- x}{\\text{max}(x) \- \\text{min}(x)}\\]
the fraction of the maximum value of the indicator:
\\\[ \\tilde{x} \= \\frac{x}{\\text{max}(x)}\\]
the distance to a specified unit value in the indicator:
\\\[ \\tilde{x} \= 1 \- \\frac{x\_u \- x}{\\text{max}(x) \- \\text{min}(x)}\\]
where \\(x\_u\\) is the value of unit \\(u\\) in the indicator. This is useful for benchmarking against a reference country, for example. Another possibility is to normalise against indicator targets. This approach is used for example in the [EU2020 Index](https://www.sciencedirect.com/science/article/pii/S2665972720300593), which used European targets on environment, education and employment issues (among others).
\\\[ \\tilde{x} \= \\text{min} \\left\[1, \\ 1 \- \\frac{\\text{targ}(x) \- x}{\\text{max}(x) \- \\text{min}(x)} \\right]\\]
where \\(\\text{targ}(x)\\) is the target for the indicator. In this case, any value that exceeds the target is set to 1, i.e. exceeding the target is counted the same as exactly meeting it. There is also an issue of what to use to scale the distance: here the range of the indicator is used, but one could also use \\(\\text{targ}(x) \- \\text{min}(x)\\) or perhaps some other range.
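As a one-line sketch (illustrative only), the distance-to-target formula above could be written as:

```
# distance to target: values at or above the target are capped at 1
dist2targ <- function(x, targ) {
  pmin(1, 1 - (targ - x) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE)))
}
```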
A similar approach, sometimes called the “goalposts” or “distance to frontier” method, normalises the indicator by specified upper and lower bounds. This may be used, for example, when there are clear boundaries to the indicator (e.g. [Likert scores](https://en.wikipedia.org/wiki/Likert_scale)), or when you wish to keep the boundaries fixed across possibly different years of data. The goalposts method looks like this:
\\\[ \\tilde{x} \= \\text{max} \\left(0, \\ \\text{min} \\left\[1, \\ \\frac{x \- x\_L}{x\_U \- x\_L} \\right] \\right)\\]
where \\(x\_U\\) and \\(x\_L\\) are the upper and lower bounds, respectively, for the indicator. Any values that are below the lower bound are assigned a score of zero, and any above the upper bound are assigned a score of one. This approach is used by the [European Skills Index](https://www.cedefop.europa.eu/en/events-and-projects/projects/european-skills-index-esi), for example.
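In R, the goalposts formula is simply a capped linear rescaling, which might be sketched as:

```
# goalposts / distance to frontier: scores outside [xL, xU] are capped at 0 or 1
goalposts <- function(x, xL, xU) {
  pmax(0, pmin(1, (x - xL) / (xU - xL)))
}
```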
9\.2 Normalisation in COINr
---------------------------
The normalisation function in COINr is imaginatively named `normalise()`. It has the following main features:
* A wide range of normalisation methods, including the possibility to pass custom functions
* Customisable parameters for normalisation
* Possibility to specify detailed individual treatment for each indicator
The function looks like this:
```
# don't run this chunk, just for illustration (will throw error if run)
normalise <- function(COIN, ntype = NULL, npara = NULL,
dset = NULL, directions = NULL, individual = NULL,
indiv_only = FALSE, out2 = NULL){
```
As an input, it takes a COIN or data frame, as usual. It outputs a new dataset to the COIN `.$Data$Normalised`, or a normalised data frame if `out2 = "df"`. Default normalisation (min\-max, scaled between 0 and 100\) can be achieved by simply calling:
```
library(COINr6)
# Build ASEM index up to denomination
ASEM <- assemble(IndData = COINr6::ASEMIndData, IndMeta = COINr6::ASEMIndMeta, AggMeta = COINr6::ASEMAggMeta)
ASEM <- denominate(ASEM, dset = "Raw")
# Default normalisation (min max scaled between 0 and 100) on denominated data set
ASEM <- normalise(ASEM, dset = "Denominated")
# compare one of the indicators
iplotIndDist2(ASEM, dsets = c("Denominated", "Normalised"),
icodes = "TertGrad", ptype = "Scatter")
```
This plot also illustrates the linear nature of the min\-max transformation.
You can select the normalisation type of the `normalise()` function using the `ntype` and accompanying `npara` arguments, where the latter is an object which specifies any parameters for the type of normalisation selected.
* `ntype = "minmax"` yields a min\-max transformation that scales each indicator onto an interval specified by `npara`, e.g. if `npara$minmax = c(0,10)` the indicators will scale to \[0, 10].
* `ntype = "zscore"` scales the indicator to have a mean and standard deviation specified by `npara`, e.g. if `npara$zscore = c(0,1)` the indicator will have mean zero and standard deviation 1\.
* `ntype = "scaled"` is a general linear transformation defined by `npara$scaled = c(a,b)` which subtracts `a` and divides by `b`.
* `ntype = "goalposts"` is a capped linear transformation (see the “goalposts” method mentioned previously). Here `npara$goalposts = c(l, u, a)`, where `l` is the lower bound, `u` is the upper bound, and `a` is a scaling parameter.
* `ntype = "rank"` replaces indicator scores with their corresponding ranks, such that the highest scores have the largest rank values. Ties take average rank values. Here `npara` is not used.
* `ntype = "borda"` is similar to `ntype = "rank"` but uses Borda scores, which are simply rank values minus one. `npara` is not used.
* `ntype = "prank"` gives percentile ranks. `npara` is not used.
* `ntype = "fracmax"` scales each indicator by dividing the value by the maximum value of the indicator. `npara` is not used.
* `ntype = "dist2ref"` gives the distance to a reference unit, defined by `npara`. For example, if `npara$dist2ref = "AUT"` then all scores will be normalised as the distance to the score of unit “AUT” in each indicator (divided by the range of the indicator). This is useful for benchmarking against a set unit, e.g. a reference country. Scores
* `ntype = "dist2max"` gives the normalised distance to the maximum of each indicator.
* `ntype = "dist2targ"` gives the normalised distance to indicator targets. Targets are specified, in the `IndMeta` argument of `assemble()`, and will be in the COIN at `.$Input$IndMeta`. Any scores that exceed the target will be capped at a normalised value of one.
* `ntype = "custom"` allows to pass a custom function to apply to every indicator. For example, `npara$custom = function(x) {x/max(x, na.rm = T)}` would give the “fracmax” normalisation method described above.
* `ntype = "none"` the indicator is not normalised (this is mainly useful for adjustments and sensitivity analysis).
The `normalise()` function also allows you to specify the directions of the indicators using the argument `directions`. If this is not specified, the directions will be taken from the indicator metadata input to `assemble()`. If they are not present here, all directions will default to positive.
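As a quick illustration of these arguments (a hedged sketch building on the ASEM objects created above, not a recommended methodology), the following applies a z\-score normalisation with mean 0 and standard deviation 1 to the denominated data, and returns a data frame rather than writing a new data set into the COIN. The name `normZ` is just illustrative.
```
# z-score normalisation (mean 0, sd 1) of the denominated data set,
# returned as a stand-alone data frame via out2 = "df"
normZ <- normalise(ASEM, dset = "Denominated",
                   ntype = "zscore", npara = list(zscore = c(0, 1)),
                   out2 = "df")
```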
9\.3 Individual normalisation
-----------------------------
Indicators can be normalised using individual specifications in a similar way to the `treat()` function. This is specified by the following two arguments to `normalise()`:
* `individual` A list of named lists specifying individual normalisation to apply to specific indicators. It should be structured so that the name of each sublist is the indicator code. The list elements are:
+ `.$ntype` is the type of normalisation to apply (any of the options mentioned above)
+ `.$npara` is a corresponding object of parameters that is used by `ntype`
* `indiv_only` Logical. As with `treat()`, if this is set to `FALSE` (default), then the indicators that are *not* specified in `individual` will be normalised according to the `ntype` and `npara` arguments specified in the function argument. Otherwise if `TRUE`, only the indicators in `individual` will be normalised, and the others will be unaffected.
The easiest way to clarify this is with an example. In the following, we will apply min\-max, scaled to \[0, 1] to all indicators, except for “Flights” which will be normalised using Borda scores, “Renew” which will remain un\-normalised, and “Ship” which will be scaled as a distance to the value of Singapore.
```
indiv = list(
Flights = list(ntype = "borda"),
Renew = list(ntype = "none"),
Ship = list(ntype = "dist2ref", npara = list(dist2ref = "SGP"))
)
# Minmax in [0,1] for all indicators, except custom individual normalisation
# for those described above
ASEM <- normalise(ASEM, dset = "Denominated", ntype = "minmax", npara = list(minmax = c(0,1)),
individual = indiv, indiv_only = FALSE)
```
We can visualise the new ranges of the data.
```
plotIndDist(ASEM, dset = "Normalised",
icodes = c("Flights", "Renew", "Ship", "Goods"), type = "Dot")
## Bin width defaults to 1/30 of the range of the data. Pick better value with `binwidth`.
```
This example is meant to be illustrative of the functionality of `normalise()`, rather than being a sensible normalisation strategy, because the indicators are now on very different ranges, from the default minmax range of \[0, 1] (“Goods” in this case), to the unchanged scale of “Renew” and the Borda rank scale of “Flights”. Notice also the scaling of “Ship”, which has values above one because the reference country, Singapore, does not have the maximum value of the indicator.
In practice, if different normalisation strategies are selected, it is a good idea to keep the indicators on similar ranges, otherwise the effects will be very unequal in the aggregation step.
Chapter 10 Aggregation
======================
Aggregation is the operation of combining multiple indicators into one value. Many composite indicators have a hierarchical structure, so in practice this often involves multiple aggregations, for example aggregating groups of indicators into aggregate values, then aggregating those values into higher\-level aggregates, and so on, until the final index value.
Aggregating should almost always be done on normalised data, unless the indicators are already on very similar scales. Otherwise the relative influence of indicators will be very uneven.
Of course you don’t *have* to aggregate indicators at all, and you might be content with a scoreboard, or perhaps aggregating into several aggregate values rather than a single index. However, consider that aggregation should not substitute for the underlying indicator data, but rather complement it.
Overall, aggregating indicators is a form of information compression \- you are trying to combine many indicator values into one, and inevitably information will be lost[3](#fn3). As long as this is kept in mind, and indicator data is presented and made available alongside aggregate values, then aggregate (index) values can complement indicators and be used as a useful tool for summarising the underlying data, and identifying overall trends and patterns.
10\.1 Weighting
---------------
Many aggregation methods involve some kind of weighting, i.e. coefficients that define the relative weight of the indicators/aggregates in the aggregation. In order to aggregate, weights need to first be specified, but to effectively adjust weights it is necessary to aggregate.
This chicken and egg conundrum is best solved by aggregating initially with a trial set of weights, perhaps equal weights, then seeing the effects of the weighting, and making any weight adjustments necessary. For this reason, weighting is dealt with in the following chapter on [Weighting](weighting-1.html#weighting-2).
10\.2 Approaches
----------------
### 10\.2\.1 Means
The most straightforward and widely\-used approach to aggregation is the **weighted arithmetic mean**. Denoting the indicators as \\(x\_i \\in \\{x\_1, x\_2, ... , x\_d \\}\\), a weighted arithmetic mean is calculated as:
\\\[ y \= \\frac{1}{\\sum\_{i\=1}^d w\_i} \\sum\_{i\=1}^d x\_iw\_i \\]
where the \\(w\_i\\) are the weights corresponding to each \\(x\_i\\). Here, if the weights are chosen to sum to 1, it will simplify to the weighted sum of the indicators. In any case, the weighted mean is scaled by the sum of the weights, so weights operate relative to each other.
Clearly, if the index has more than two levels, then there will be multiple aggregations. For example, there may be three groups of indicators which give three separate aggregate scores. These aggregate scores would then be fed back into the weighted arithmetic mean above to calculate the overall index.
The arithmetic mean has “perfect compensability”, which means that a high score in one indicator will perfectly compensate a low score in another. In a simple example with two indicators scaled between 0 and 10 and equal weighting, a unit with scores (0, 10\) would be given the same score as a unit with scores (5, 5\) – both have a score of 5\.
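To make this concrete, here is a tiny base\-R check (illustrative only) showing that the two profiles above receive the same weighted arithmetic mean under equal weights:
```
# perfect compensability: (0, 10) and (5, 5) get the same arithmetic mean
w <- c(1, 1)
weighted.mean(c(0, 10), w)  # 5
weighted.mean(c(5, 5), w)   # 5
```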
An alternative is the **weighted geometric mean**, which uses the product of the indicators rather than the sum.
\\\[ y \= \\left( \\prod\_{i\=1}^d x\_i^{w\_i} \\right)^{1 / \\sum\_{i\=1}^d w\_i} \\]
This is simply the product of each indicator to the power of its weight, all raised to the power of the inverse of the sum of the weights.
The geometric mean is less compensatory than the arithmetic mean – low values in one indicator only partially substitute high values in others. For this reason, the geometric mean may sometimes be preferred when indicators represent “essentials”. An example might be quality of life: a longer life expectancy perhaps should not compensate severe restrictions on personal freedoms.
A third type of mean, in fact the third of the so\-called [Pythagorean means](https://en.wikipedia.org/wiki/Pythagorean_means) is the **weighted harmonic mean**. This uses the mean of the reciprocals of the indicators:
\\\[ y \= \\frac{\\sum\_{i\=1}^d w\_i}{\\sum\_{i\=1}^d w\_i/x\_i} \\]
The harmonic mean is the least compensatory of the three means, even less so than the geometric mean. It is often used for taking the mean of rates and ratios.
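The following sketch simply transcribes the three formulas above into base R (it is not COINr code), to show how the geometric and harmonic means penalise the low value in a (1, 9) profile more heavily than the arithmetic mean:
```
x <- c(1, 9); w <- c(1, 1)
# weighted arithmetic, geometric and harmonic means, as defined above
sum(w * x) / sum(w)     # 5
prod(x^w)^(1 / sum(w))  # 3
sum(w) / sum(w / x)     # 1.8
```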
### 10\.2\.2 Other methods
The *weighted median* is also a simple alternative candidate. It is defined by ordering indicator values, then picking the value which has half of the assigned weight above it, and half below it. For *ordered* indicators \\(x\_1, x\_2, ..., x\_d\\) and corresponding weights \\(w\_1, w\_2, ..., w\_d\\) the weighted median is the indicator value \\(x\_m\\) that satisfies:
\\\[ \\sum\_{i\=1}^{m\-1} w\_i \\leq \\frac{1}{2}, \\: \\: \\text{and} \\sum\_{i\=m\+1}^{d} w\_i \\leq \\frac{1}{2} \\]
The median is known to be robust to outliers, and this may be of interest if the distribution of scores across indicators is skewed.
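One simple way to compute a weighted median consistent with this definition (order the values, then take the first one whose cumulative normalised weight reaches one half) is sketched below; this is not COINr's internal implementation:
```
# weighted median: first ordered value whose cumulative weight reaches 0.5
weightedMedian <- function(x, w){
  ord <- order(x)
  x <- x[ord]
  w <- w[ord] / sum(w)
  x[which(cumsum(w) >= 0.5)[1]]
}
weightedMedian(c(2, 5, 100), c(1, 1, 1))  # 5 - unaffected by the outlying 100
```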
Another somewhat different approach to aggregation is to use the [Copeland method](https://en.wikipedia.org/wiki/Copeland%27s_method). This approach is based on pairwise comparisons between units and proceeds as follows. First, an *outranking matrix* is constructed, which is a square matrix with \(N\) columns and \(N\) rows, where \(N\) is the number of units.
The element in the \(p\)th row and \(q\)th column of the matrix is calculated by summing all the indicator weights where unit \(p\) has a higher value in those indicators than unit \(q\). Similarly, the cell in the \(q\)th row and \(p\)th column (which is the cell opposite on the other side of the diagonal), is calculated as the sum of the weights where unit \(q\) has a higher value than unit \(p\). If the indicator weights sum to one over all indicators, then these two scores will also sum to 1 by definition. The outranking matrix effectively summarises to what extent each unit scores better or worse than all other units, for all unit pairs.
The Copeland score for each unit is calculated by taking the sum of the row values in the outranking matrix. This can be seen as an average measure of to what extent that unit performs above other units.
Clearly, this can be applied at any level of aggregation and used hierarchically like the other aggregation methods presented here.
In some cases, one unit may score higher than the other in all indicators. This is called a *dominance pair*, and corresponds to any pair scores equal to one (equivalent to any pair scores equal to zero).
The percentage of dominance pairs is an indication of robustness. Under dominance, there is no way methodological choices (weighting, normalisation, etc.) can affect the relative standing of the pair in the ranking. One will always be ranked higher than the other. The greater the number of dominance (or robust) pairs in a classification, the less sensitive country ranks will be to methodological assumptions. COINr allows you to calculate the percentage of dominance pairs with an inbuilt function.
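As a rough sketch of the mechanics just described (again, not COINr's implementation), an outranking matrix, Copeland scores and a dominance\-pair count can be built directly from a small matrix of indicator values:
```
# X: 3 units (rows) by 3 indicators (columns); w: indicator weights summing to 1
X <- matrix(c(1, 4, 2,
              3, 2, 5,
              2, 5, 1), nrow = 3, byrow = TRUE)
w <- c(0.5, 0.3, 0.2)
N <- nrow(X)
outrank <- matrix(0, N, N)
for (p in 1:N) for (q in 1:N) if (p != q) {
  # total weight of the indicators in which unit p scores higher than unit q
  outrank[p, q] <- sum(w[X[p, ] > X[q, ]])
}
copeland <- rowSums(outrank)     # Copeland score for each unit
ndominance <- sum(outrank == 1)  # number of dominance pairs
```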
10\.3 Aggregation in COINr
--------------------------
Many will be amazed to learn that the function to aggregate in COINr is called `aggregate()`. First, let’s build the ASEM data set up to the point of normalisation, then aggregate it using the default approaches.
```
library(COINr6)
# Build ASEM data up to the normalisation step
# Assemble data and metadata into a COIN
ASEM <- assemble(IndData = COINr6::ASEMIndData, IndMeta = COINr6::ASEMIndMeta, AggMeta = COINr6::ASEMAggMeta)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
# Denominate as specified in metadata
ASEM <- denominate(ASEM, dset = "Raw")
# Normalise using default method: min-max scaled between 0 and 100.
ASEM <- normalise(ASEM, dset = "Denominated")
# Now aggregate using default method: arithmetic mean, using weights found in the COIN
ASEM <- aggregate(ASEM, dset = "Normalised")
# show last few columns of aggregated data set
ASEM$Data$Aggregated[(ncol(ASEM$Data$Aggregated)-5): ncol(ASEM$Data$Aggregated)]
## # A tibble: 51 x 6
## Environ Social SusEcFin Conn Sust Index
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 71.0 67.0 65.7 50.4 67.9 59.1
## 2 55.9 86.3 53.5 54.8 65.3 60.0
## 3 56.4 54.0 62.5 42.4 57.6 50.0
## 4 76.0 54.8 55.5 45.9 62.1 54.0
## 5 61.9 59.5 31.1 45.5 50.8 48.2
## 6 53.7 61.1 72.0 48.9 62.3 55.6
## 7 72.5 79.8 62.7 51.0 71.7 61.4
## 8 35.7 68.3 70.1 49.5 58.0 53.8
## 9 52.9 82.1 63.8 48.2 66.3 57.2
## 10 66.5 60.8 54.2 47.0 60.5 53.7
## # ... with 41 more rows
```
By default, the aggregation function performs the following steps:
* Uses the weights that were attached to `IndMeta` and `AggMeta` in `assemble()`
* Aggregates hierarchically (with default method of weighted arithmetic mean), following the index structure specified in `IndMeta` and using the data specified in `dset`
* Creates a new data set `.$Data$Aggregated`, which consists of the data in `dset`, plus extra columns with scores for each aggregation group, at each aggregation level.
Like other COINr functions, `aggregate()` has the arguments `dset`, which specifies which data set to aggregate, and `out2`, which specifies whether to output an updated COIN or a data frame. Unlike other COINr functions however, `aggregate()` only works on COINs, because it requires a number of inputs, including data, weights, and the index structure.
The data set outputted by `aggregate()` simply adds the aggregate columns onto the end of the data set that was input. This means that aggregate columns will always appear at the end. To more easily see the results of the index, COINr’s `getResults()` function rearranges the aggregated data set into a better format.
```
# generate simple results table, attach to COIN
ASEM <- getResults(ASEM, tab_type = "Summary", out2 = "COIN")
# view the table
ASEM$Results$SummaryScores |>
head(10)
## NULL
```
This function, which is explained more in [Visualising results](visualising-results.html#visualising-results), sorts the aggregated data and extracts the most relevant columns. It has options to control sorting and which columns to display.
The types of aggregation supported by `aggregate()` are specified by `agtype` from the following options:
* `agtype = arith_mean` uses the arithmetic mean, and is the default.
* `agtype = geom_mean` uses the geometric mean. This only works if all data values are positive, otherwise it will throw an error.
* `agtype = harm_mean` uses the harmonic mean. This only works if all data values are non\-zero, otherwise it will throw an error.
* `agtype = median` uses the weighted median
* `agtype = copeland` uses the Copeland method. This may take a few seconds to process depending on the number of units, because it involves pairwise comparisons across units.
* `agtype = mixed` uses a different aggregation method for each level. In this case, aggregation methods are specified as any of the previous options using the `agtype_bylevel` argument.
* `agtype = custom` allows you to pass a custom aggregation function.
For the last option here, if `agtype = custom` then you need to also specify the custom function via the `agfunc` argument. As an example:
```
# define an aggregation function which is the weighted minimum
weightedMin <- function(x,w) min(x*w, na.rm = TRUE)
# pass to aggregate() and aggregate
ASEM <- aggregate(ASEM, dset = "Normalised", agtype = "custom",
agfunc = weightedMin, out2 = "df")
```
Any custom function should have the form `functionName <- function(x,w)`, i.e. it accepts `x` and `w` as vectors of indicator values and weights respectively, and returns a scalar aggregated value. Ensure that `NA`s are handled (e.g. set `na.rm = T`) if your data has missing values, otherwise `NA`s will be passed through to higher levels.
A number of sophisticated aggregation approaches and linked weighting methods are available in the [compind package](https://cran.r-project.org/web/packages/Compind/index.html). These can nicely complement the features in COINr and may be of interest to those looking for more technical approaches to aggregation, such as those based on benefit of the doubt methods.
A further argument, `agtype_bylevel`, allows specification of different aggregation methods at different aggregation levels. For example, `agtype_bylevel = c("arith_mean", "geom_mean", "median")` would use the arithmetic mean at the indicator level, the geometric mean at the pillar level, and the median at the sub\-index level (for the ASEM data structure).
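A hedged sketch of that mixed aggregation, using exactly the by\-level vector mentioned above (note that the geometric mean level requires strictly positive values, as discussed earlier):
```
# arithmetic mean at the indicator level, geometric mean at the pillar level,
# weighted median at the sub-index level (for the ASEM structure)
ASEM <- aggregate(ASEM, dset = "Normalised", agtype = "mixed",
                  agtype_bylevel = c("arith_mean", "geom_mean", "median"))
```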
Alternative weights can also be used for aggregation by specifying the `agweights` argument. This can either be:
* `NULL`, in which case it will use the weights that were attached to `IndMeta` and `AggMeta` in `assemble()` (if they exist), or
* A character string which corresponds to a named list of weights stored in `.$Parameters$Weights`. You can either add these manually or through `rew8r` (see chapter on [Weighting](weighting-1.html#weighting-2)). E.g. entering `agweights = "Original"` will use the original weights read in on assembly. This is equivalent to `agweights = NULL` (see the sketch after this list).
* A data frame of weights to use in the aggregation. See [Weighting](weighting-1.html#weighting-2) for the format of weight data frames.
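For example, a minimal sketch of the second option, explicitly selecting the weight set named “Original” (equivalent, per the list above, to leaving `agweights = NULL`):
```
# aggregate using the named weight set "Original" from .$Parameters$Weights
ASEM <- aggregate(ASEM, dset = "Normalised", agweights = "Original")
```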
A final option of `aggregate()` is the possibility to set missing data limits for aggregation. By default, `aggregate()` will produce an aggregate score as long as at least one value is available within the aggregation group. For example, for an aggregation group we might have five indicators. A unit might have the following:
```
## Ind1 Ind2 Ind3 Ind4 Ind5
## 1 NA 3 NA NA NA
```
The arithmetic mean of this would still be 3\. But the unit has 80% missing data for this aggregation group, so calculating an aggregate score may seem a bit disingenuous. The `avail_limit` argument helps here: it gives the possibility to set a minimum data availability threshold for each level of the index, such that if data availability is below the threshold (for a particular unit and aggregation group), it returns `NA` (for that unit and group) instead of an aggregated score. This is probably good practice in any composite indicator where data availability is an issue: some units/countries may have fairly high data availability overall, but have low data availability for particular groups.
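A hedged sketch of this option, assuming `avail_limit` accepts one threshold per aggregation level of the ASEM structure (three levels above the indicator level), requiring at least two\-thirds data availability before an aggregate score is returned:
```
# assumed format: one data availability threshold per aggregation level;
# units/groups below the threshold get NA instead of an aggregate score
ASEM <- aggregate(ASEM, dset = "Normalised",
                  avail_limit = c(2/3, 2/3, 2/3))
```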
Now that the data has been aggregated, a natural next step is to explore the results. This is dealt with in the chapter on [Visualising results](visualising-results.html#visualising-results).
Chapter 11 Weighting
====================
Strictly speaking, weighting comes before aggregation. However, in order to understand the *effects* of weights, we need to aggregate the index first.
Weighting in composite indicators is a thorny issue, which attracts considerable attention and is often one of the main focuses of critics. Weighting openly expresses a subjective opinion on the relative importance of each indicator relative to the others, and this opinion can easily be contested.
While this criticism definitely has some basis, weights can be viewed as a type of model parameter, and any model (e.g. engineering models, climate models, economic models) is full of uncertain parameters. In large models, these parameters are less evident since they are inside rather complex model code. Weights in composite indicators are easy to criticise since the model of a composite indicator is quite simple, usually using simple averages of indicator values.
That said, weights do need to be carefully considered, taking into account at least:
* The relative conceptual importance of indicators: is indicator A more, less, or equally important to indicator B?
* The statistical implications of the weights
* How to communicate the weights to end users
The last point is important if the index is meant for advocacy/awareness. Weights may be fine\-tuned to account for statistical considerations, but the result may make little sense to the public or the media.
Overall, weighting is a large topic which cannot be covered in detail here. Nevertheless, this chapter gives some introduction and points to further references.
11\.1 Approaches to weighting
-----------------------------
Broadly, weighting approaches can be divided into those that are based on expert judgment, and those that are based on statistical considerations. Then there are approaches that combine the two.
### 11\.1\.1 Equal weighting
The most common approach to weighting is simply to make all weights equal to one another. This may seem like an unjustifiable simplification, but in practice, experts often give near\-equal weights when asked, and statistical approaches may give weights that are so unequal that it may be hard to justify them to stakeholders. This is why many composite indicators use equal weights. A further consideration is that the results of a composite indicator are often less sensitive to their weights than you might think.
That said, a number of other methods exist, some of which are discussed in the following sections.
### 11\.1\.2 Budget allocation
Fundamentally, composite indicators involve breaking down a complex multidimensional concept into sub\-concepts, and possibly breaking those sub\-concepts down into sub\-sub\-concepts, and so on. Generally, you keep doing this until you arrive at the point where the (sub\-)^n\-concepts are sufficiently specific that they can be directly measured with a small group of indicators.
Regardless of the aggregation level that you are at, however, some of the indicators (or aggregates) in the group are likely to be more or less important to the concept you are trying to measure than others.
Take the example of university rankings. One might consider the following indicators to be relevant to the notion of “excellence” for a university:
* Number of publications in top\-tier journals (research output)
* Teaching reputation, e.g. through a survey (teaching quality)
* Research funding from private companies (industry links)
* Percentage of international students (global renown)
* Mean earning of graduates (future prospects of students)
These may all be relevant for the concept, but the relative importance is definitely debatable. For example, if you are a prospective student, you might prioritise the teaching quality and graduate earnings. If you are a researcher, you might prioritise publications and research funding. And so on. The two points to make here are:
1. Indicators are often not equally important to the concept, and
2. Different people have different opinions on what should be more or less important.
Bringing this back to the issue of weights, it is important to make sure that the relative contributions of the indicators to the index scores reflect the intended importance. This can be achieved by changing the weights attached to each indicator, and the weights of higher aggregation levels.
How then do we understand which indicators should be the most important to the concept? One way is to simply use our own opinion. But this does not reflect a diversity of viewpoints.
The *budget allocation method* is a simple way of eliciting weights from a group of experts. The idea is to gather a group of experts on the concept, as well as stakeholders and end\-users, ideally from different backgrounds. Each member of the group is then given 100 “coins” which they can “spend” on the indicators, allocating more of the budget to important indicators and less to less important ones.
This is a simple way of distributing weight to indicators, in a way that is easy for people to understand and that makes it easy for them to give their opinions. You can then take the average weight for each indicator across the group. You can even infer a weight\-distribution over each indicator which could feed into an uncertainty analysis.
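As a toy sketch of this (all numbers hypothetical), three experts each allocate 100 coins over the five university indicators listed above, and the averaged budgets are rescaled to give one candidate set of weights:

```
# Hypothetical budget allocation: rows are experts, columns are indicators
budgets <- rbind(
  expert1 = c(30, 25, 15, 10, 20),
  expert2 = c(20, 30, 20, 10, 20),
  expert3 = c(25, 25, 20, 15, 15)
)
colnames(budgets) <- c("Publications", "Teaching", "IndustryFunding",
                       "IntlStudents", "GradEarnings")

w <- colMeans(budgets)  # average budget per indicator
w / sum(w)              # rescaled to relative weights
```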
### 11\.1\.3 Principal component analysis
A very different approach is to use principal component analysis (PCA). PCA is a statistical approach which rotates your indicator data into a new set of coordinates with special properties.
One way of looking at indicator data is as data points in a multidimensional space. If we only have two indicators, then any unit can be plotted as a point on a two\-dimensional plot, where the x\-axis is the first indicator, and the y\-axis is the second.
```
Ind1 <- 35
Ind2 <- 60
plot(Ind1, Ind2, main = "World's most boring plot")
```
Each point on this beautiful base\-R plot represents a unit (here we only have one).
If we have three indicators, then we will have three axes (i.e. a 3D plot). If we have more than three, then the point lives in a *hyperspace* which we can’t really visualise.
Anyway, the main point here is that indicator data can be plotted as points in a space, where each axis (dimension) is an indicator.
What PCA does is rotate the data so that the axes are no longer indicators, but instead are *linear combinations of indicators*. This is done so that the first axis is the linear combination of indicators that explains the most variance. If this is not making much sense (and if this is the first time you have encountered PCA then it probably won’t), I would recommend looking at one of the many visualisations of PCA on the internet, e.g. [this one](https://setosa.io/ev/principal-component-analysis/).
PCA is useful for composite indicators because if you use an arithmetic mean, then you are using a linear combination of indicators. And the first principal component gives the weights that maximise the variance of the units. In other words, if you use PCA weights, you will have the best weights for discriminating between units, and for capturing as much information as possible from the underlying indicators.
This sounds perfect, but the downside is that PCA weights do not care about the relative importance of indicators. Typically, PCA assigns the highest weights to indicators with the highest correlations with other indicators, and this can result in a very unbalanced combination of indicators. Still, PCA has a number of nice properties, and has the advantage of being a purely statistical approach.
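To make the link concrete, here is a rough base\-R sketch of PCA weights “by hand”, using a placeholder matrix of normalised indicator data (rows are units, columns are indicators). This is just an illustration of the idea, not how COINr does it internally:

```
# Placeholder indicator data: 20 units, 5 indicators
set.seed(1)
X <- matrix(rnorm(100), nrow = 20, ncol = 5)

pca <- prcomp(X, scale. = TRUE)
w_pca <- pca$rotation[, 1]   # loadings of the first principal component
w_pca / sum(abs(w_pca))      # one way to rescale them as relative weights
```

Note that loadings can be negative, which is one reason PCA weights sometimes need careful interpretation.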
### 11\.1\.4 Weight optimisation
If you choose to go for the budget allocation approach, to match weights to the opinions of experts, or indeed your own opinion, there is a catch that is not very obvious. Put simply, weights do not directly translate into importance.
To understand why, we must first define what “importance” means. Actually there is more than one way to look at this, but one possible measure is to use the (nonlinear) correlation between each indicator and the overall index. If the correlation is high, the indicator is well\-reflected in the index scores, and vice versa.
If we accept this definition of importance, then it’s important to realise that this correlation is affected not only by the weights attached to each indicator, but also by the correlations *between indicators*. This means that these correlations must be accounted for in choosing weights that agree with the budgets assigned by the group of experts.
In fact, it is possible to reverse\-engineer the weights either [analytically using a linear solution](https://doi.org/10.1111/j.1467-985X.2012.01059.x) or [numerically using a nonlinear solution](https://doi.org/10.1016/j.ecolind.2017.03.056). While the former method is far quicker than a nonlinear optimisation, it is only applicable in the case of a single level of aggregation, with an arithmetic mean, and using linear correlation as a measure. Therefore in COINr, the second method is used.
11\.2 Weighting tools in COINr
------------------------------
As seen in the chapter on [Aggregation](appendix-building-a-composite-indicator-example.html#aggregation-1), weights are used to calculate the aggregated data set, and are one of the inputs to the `aggregate()` function. These can either be directly specified as a list, or as a character string which points to one of the sets of weights stored under `.$Parameters$Weights`. There is no limit to the number of sets of weights that can be stored here. So how do we create new sets of weights?
### 11\.2\.1 Manual weighting
The simplest approach is to make manual adjustments. First, make a copy of the existing weights.
```
library(COINr6)
# quietly build data set
ASEM <- build_ASEM()
NewWeights <- ASEM$Parameters$Weights$Original
head(NewWeights)
## AgLevel Code Weight
## 1 1 Goods 1
## 2 1 Services 1
## 3 1 FDI 1
## 4 1 PRemit 1
## 5 1 ForPort 1
## 6 1 CostImpEx 1
```
The structure of the weights is a data frame, with each row being either an indicator or aggregation. The `AgLevel` column points to the aggregation level, and the `Weight` column is the weight, *which is relative inside its aggregation group*.
This last point is important so let’s clarify. When aggregating, the weights within each group are normalised to sum to one. So if we have three indicators in a group, and we specify the weights as all ones, the actual weights will each be 1/3\. This means that actually, we can put any number we like if we are aiming for equal weighting, as long as it is the same for each indicator.
Another implication of relative weighting is that the number of indicators within a group also matters. If we have two indicators in a group, equally weighted, their weight is 0\.5\. If we have ten, they are each weighted 0\.1\. Moreover, an indicator’s final weight in the index is also affected by its parent group weights (e.g. sub\-pillar and pillar weights). This is why it is important to be aware of the so\-called “effective weights”, which are the final weights of the indicators (and each aggregate in the index). COINr has a function to calculate this: see [Effective weights](weighting-1.html#effective-weights).
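As a small worked sketch of the idea (with illustrative numbers, not the ASEM values): under arithmetic aggregation, an indicator’s effective weight is the product of its normalised weight within its group and the normalised weights of each parent group up the hierarchy.

```
# Illustrative effective weight under arithmetic aggregation
w_indicator_in_pillar <- 1/5   # one of five equally weighted indicators in a pillar
w_pillar_in_subindex  <- 1/2   # one of two equally weighted pillars in a sub-index
w_subindex_in_index   <- 1/2   # one of two equally weighted sub-indexes

w_indicator_in_pillar * w_pillar_in_subindex * w_subindex_in_index
## [1] 0.05
```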
Anyway, to change the weights we can just directly modify the data frame. Let’s say we want to set the weight of the connectivity sub\-index to 0\.5\.
```
NewWeights$Weight[NewWeights$Code == "Conn"] <- 0.5
tail(NewWeights)
## AgLevel Code Weight
## 55 2 Environ 1.0
## 56 2 Social 1.0
## 57 2 SusEcFin 1.0
## 58 3 Conn 0.5
## 59 3 Sust 1.0
## 60 4 Index 1.0
```
Now, to actually use these weights in the aggregation, we can either (a) input them directly in the `aggregate()` function, or (b) first attach them to the COIN and then call `aggregate()`, pointing to the new set of weights. The latter option seems more sensible because that way, all our sets of weights are neatly stored. Also, we can access them in the reweighting app as explained in the next section. Sets of weights need to be stored as a named data frame in `.$Parameters$Weights` to be accessible to `aggregate()`.
```
# put new weights in the COIN
ASEM$Parameters$Weights$NewWeights <- NewWeights
# Aggregate again to get new results
ASEM <- aggregate(ASEM, dset = "Normalised", agweights = "NewWeights")
```
Now the `ASEM` COIN contains the updated results using the new weights.
### 11\.2\.2 Interactive re\-weighting with ReW8R
Re\-weighting manually, as we have just seen, is quite simple. However, depending on your objectives, weighting can be an iterative process along the lines of:
1. Try a set of weights
2. Examine correlations
3. Check results, rankings
4. Adjust weights
5. Return to 1\.
Doing this at the command line can be time consuming, so COINr includes a “re\-weighting app” called `rew8r()` which lets you interactively adjust weights and explore the effects of doing so.
To run `rew8r()` you must have already aggregated your data using `aggregate()`. This is because every time weights are adjusted, `rew8r()` re\-aggregates to find new results, and it needs to know *which* aggregation method you are using. So, with your pre\-aggregated COIN at hand, simply run:
```
ASEM <- rew8r(ASEM)
```
At this point, a window/tab should open in your browser with the `rew8r()` app, which looks something like this:
Figure 11\.1: rew8r() screenshot
This may look a little over\-complicated to begin with, but let’s go through it step by step.
#### 11\.2\.2\.1 Correlations
First of all, `rew8r()` is based around correlations. Why are correlations important? Because they give an idea of how each indicator is represented in the final index, and at other aggregation levels (see earlier discussion in this chapter).
The `rew8r()` app allows you to check the correlations of anything against anything else. The two dropdown menus in the top left allow you to select which aggregation level to correlate against which other. In the screenshot above, the indicators (Level 1\) are being correlated against the pillars (Level 2\), but this can be changed to any of the levels in the index (the first level should always be below the second level otherwise it can cause errors). You can also select which type of correlation to examine, either Pearson (i.e. “standard” correlation), Spearman rank correlation, or Kendall’s tau (an alternative rank correlation).
The correlations themselves are shown in two plots \- the first is the scatter plot in the upper right part of the window, which plots, for each indicator in Level 1, its *correlation* with its parent in Level 2 on the horizontal axis against its *weight* (in Level 1\) on the vertical axis. We will come back to this shortly.
Figure 11\.2: Heatmap screenshot (in rew8r)
The second plot is a heatmap of the correlation matrix (between Level 1 and Level 2 in this case), which is shown in the lower right corner. This gives the correlation values between all the indicators/groups of each level as colours according to a discrete colour map. This colour map is intended to highlight highly correlated indicators (green), as well as negatively or weakly correlated indicators (red). These thresholds can be changed using the low and high correlation threshold dropdown menus in the side panel (on the left). The aggregation groups are also shown as overlaid rectangles, and correlation values can be turned on and off using the checkbox below the heatmap. The point of this plot is to show at a glance where there are very high or low correlations, possibly within aggregation groups. For example, indicators that are negatively correlated with their aggregation group can be problematic. Note that correlation heatmaps can also be generated independently of the app using the `plotCorr()` function \- this is described more in the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) section.
Highly\-correlated indicators are also listed in a table at the lower middle of the window. The “Ind\-Ind” tab flags any indicators that are above the high correlation threshold and within the same aggregation group (the aggregation level immediately above the indicator level). Note that this is *not* dependent on the weights, since it regards correlations between indicators only. The “Cross\-level” tab instead gives a table which flags any correlations across the two levels selected in the side panel which are above the threshold.
#### 11\.2\.2\.2 Weighting
Let us now turn back to the scatter plot. The scatter plot shows the correlation values on the horizontal axis, but also shows the weight of each indicator or aggregate in the first level selected in the dropdown menu. The purpose here is to show at a glance the weighting, and to make adjustments. To adjust a weight, either click on one of the points in the plot (hovering will show the names of the indicators or aggregates), or select the indicator/aggregate in the dropdown menu in the side panel under the “Weighting” subsection. Then adjust the weight using the slider. Doing this will automatically update all of the plots and tables, so you can see interactively how the correlations change as a result.
Figure 11\.3: Correlation scatter plot screenshots in rew8r().
Figure 11\.4: Correlation scatter plot screenshots in rew8r().
Importantly, the scatter plot only shows correlations between each indicator/aggregate in the first level *with its parent* in the second selected level. When everything is plotted on one plot, this may not be so clear, so the “Separate groups” checkbox in the side panel changes the plot to multiple sub\-plots, one for each group in the second selected level. This helps to show how correlations are distributed within each group.
The results of the index, in terms of ranks and scores of units, can be found by switching to the “Results” tab in the main window.
Other than adjusting individual weights, any set of weights that is stored in `.$Parameters$Weights` can be selected, and this will automatically show the correlations. The button “Equal weights” sets weights to be equal at the current level.
Finally, if you have adjusted the weights and you wish to save the new set of weights back to the COIN, you can do this in the “Weights output” subsection of the side panel. The current set of weights is saved by entering a name and clicking the “Save” button. This saves the set of weights to a list, and when the app is closed (using the “Close app” button), it will return them to the updated COIN under `.$Parameters$Weights`, with the specified name. You can save multiple sets of weights in the same session, by making adjustments, and assigning a name, and clicking “Save”. When you close the app, all the weight sets will be attached to the COIN. If you don’t click “Close app”, any new weight sets created in the session will be lost.
#### 11\.2\.2\.3 Remarks
The `rew8r()` app is something of a work in progress \- it is difficult to strike a balance between conveying the relevant information and conveying too much information. In future versions, it may be updated to become more streamlined.
### 11\.2\.3 Automatic re\-weighting
As discussed earlier, weighting can also be performed statistically, by optimising or targeting some statistical criteria of interest. COINr includes a few possibilities in this respect.
#### 11\.2\.3\.1 With PCA
The `getPCA()` function is used as a quick way to perform PCA on COINs. It does two things: first, it outputs PCA results for any aggregation level; and second, it provides PCA weights corresponding to the “loadings” of the first principal component. As discussed above, this is the linear combination that results in the most explained variance.
The other outputs of `getPCA()` are left to the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) section. Here, we will focus on weighting. To obtain PCA weights, simply run something like:
```
ASEM <- getPCA(ASEM, dset = "Aggregated", aglev = 2)
```
This uses the aggregated data set, and performs PCA on aggregation level 2, i.e. the pillars in the case of ASEM. The result is an updated COIN, with a new set of weights called `.$Parameters$Weights$PCA_AggregatedL2`. This is automatically named according to the data set used and the aggregation level.
An important point here is that this weighting represents optimal properties only if the components (pillars here) are aggregated directly into a single value. That is not strictly the case in this example, because above the pillar level there is another aggregation (the sub\-index) before reaching the final index, which these weights do not account for. However, depending on the aggregation structure and weighting (i.e. if it is fairly even), it may still give a fairly good approximation.
*Note: PCA weights have not been fully tested at the time of writing, treat with a healthy dose of skepticism.*
#### 11\.2\.3\.2 Weight optimisation
An alternative approach is to optimise weights iteratively. This is in a way less elegant (and slower) because it uses numerical optimisation, but it allows much more flexibility.
The `weightOpt()` function gives several options in this respect. In short, it allows weights either (a) to be optimised to agree with some pre\-specified “importances” (perhaps obtained by the budget allocation method, or perhaps simply equal importance), or (b) to maximise the information conveyed by the composite indicator. Starting with the first option, this is how it can be applied to our example data:
```
ASEM <- weightOpt(ASEM, itarg = "equal", aglev = 3, out2 = "COIN")
## iterating... squared difference = 0.000814548399872482
## iterating... squared difference = 0.00116234608542398
## iterating... squared difference = 0.00052279688155717
## iterating... squared difference = 0.000271428850497901
## iterating... squared difference = 4.19003122660241e-05
## iterating... squared difference = 1.66726715407502e-06
## iterating... squared difference = 0.000166401854209433
## iterating... squared difference = 0.000357199714112592
## iterating... squared difference = 4.91262049731255e-05
## iterating... squared difference = 0.000189074238231704
## iterating... squared difference = 2.15217457991648e-06
## iterating... squared difference = 4.02429806062667e-05
## iterating... squared difference = 1.04404840785835e-05
## iterating... squared difference = 1.00833908145386e-05
## iterating... squared difference = 2.40420217478906e-06
## Optimisation successful!
```
This targets aggregation level 3 (the sub\-index level), and the `itarg` argument here is set to “equal”, which means that the objective is to make the correlations between each sub\-index and the index equal to one another. More details of the results of this optimisation are found under `.$Analysis$Weights` in the object. In particular, we can see the final correlations and the optimised weights:
```
ASEM$Analysis$Weights$OptimsedLev3$CorrResults
## Desired Obtained OptWeight
## 1 0.5 0.8971336 0.400
## 2 0.5 0.8925119 0.625
```
In this case, the weights are slightly uneven but seem reasonable. The correlations are also almost exactly equal. The full set of weights is found as a new weight set under `.$Parameters$Weights`.
Other possibilities with the `weightOpt()` function include:
* Specifying an unequal vector of target importances using the `itarg` argument. E.g. taking the sub\-pillar level, specifying `itarg = c(0.5, 1)` would optimise the weights so that the first sub\-pillar has half the correlation of the second sub\-pillar with the index (see the sketch after this list).
* Optimising the weights at other aggregation levels using the `aglev` argument
* Optimising over other correlation types, e.g. rank correlations, using the `cortype` argument
* Aiming to maximise overall correlations by setting `optype = "infomax"`
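As a sketch of these options, using only the argument names described above (the targets are purely illustrative, and the specific option strings in the commented lines are assumptions to be checked against the function documentation). With two aggregates at level 3, `itarg = c(0.5, 1)` asks the first sub\-index to have half the correlation of the second with the index:

```
# Sketch: optimise sub-index weights towards unequal target importances
ASEM <- weightOpt(ASEM, itarg = c(0.5, 1), aglev = 3, out2 = "COIN")

# Other variants mentioned above (not run; option strings are assumptions):
# ASEM <- weightOpt(ASEM, itarg = "equal", aglev = 2, cortype = "spearman", out2 = "COIN")
# ASEM <- weightOpt(ASEM, aglev = 2, optype = "infomax", out2 = "COIN")
```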
It’s important to point out that only one level can be optimised at a time, and the optimisation is “conditional” on the weights at other levels. For example, optimising at the sub\-index level produces optimal correlations for the sub\-indexes, but not for any other level. If we then optimise at lower levels, this will change the correlations of the sub\-indexes! So probably the most sensible strategy is to optimise for the last aggregation level (before the index).
That said, an advantage of `weightOpt()` is that since it is numerical, it can be used for any type of aggregation method, any correlation type, and with any index structure. It calls the aggregation method that you last used to aggregate the index, so it rebuilds the index according to your specifications.
Other correlation types may be of interest when linear correlation (i.e. Pearson correlation) is not a sufficient measure. This is the case for example, when distributions are highly skewed. Rank correlations are robust to outliers and can handle some nonlinearity (but not non\-monotonicity). Fully nonlinear correlation methods [also exist](https://www.sciencedirect.com/science/article/pii/S1470160X17301759) but most of the time, relationships between indicators are fairly linear. Nonlinear correlation may be included in a future version of COINr.
Setting `optype = "infomax"` changes the optimisation criterion to instead *maximise* the sum of the correlations, which is equivalent to maximising the information transferred to the index, and is similar to PCA. Like PCA, however, this can result in fairly unequal weights, and not infrequently in negative weights.
The `weightOpt()` function uses a numerical optimisation algorithm which does not always succeed in finding a good solution. Optimisation is a large field in its own right, and is more difficult the more variables you optimise over. If `weightOpt()` fails to find a good solution, try:
* Decreasing the `toler` argument \- this decreases the acceptable error between the target importance and the importance with the final set of weights.
* Increasing the `maxiter` argument \- this is the number of iterations allowed to find a solution within the specified tolerance. By default it is set at 500\.
Even then, the algorithm may not always converge, depending on the problem.
11\.3 Effective weights
-----------------------
By now, it should be clear that the final influence of an indicator is affected not only by its own weight, but also by the weights of any aggregate groups to which it belongs. If the aggregation method is the arithmetic mean, the “effective weight”, i.e. the final weight of each indicator including its parent weights, can be easily obtained by COINr’s `effectiveWeight()` function:
```
# get effective weight info
EffWeights <- effectiveWeight(ASEM)
EffWeights$EffectiveWeightsList
## AgLevel Code EffectiveWeight
## 1 1 Goods 0.02000000
## 2 1 Services 0.02000000
## 3 1 FDI 0.02000000
## 4 1 PRemit 0.02000000
## 5 1 ForPort 0.02000000
## 6 1 CostImpEx 0.01666667
## 7 1 Tariff 0.01666667
## 8 1 TBTs 0.01666667
## 9 1 TIRcon 0.01666667
## 10 1 RTAs 0.01666667
## 11 1 Visa 0.01666667
## 12 1 StMob 0.01250000
## 13 1 Research 0.01250000
## 14 1 Pat 0.01250000
## 15 1 CultServ 0.01250000
## 16 1 CultGood 0.01250000
## 17 1 Tourist 0.01250000
## 18 1 MigStock 0.01250000
## 19 1 Lang 0.01250000
## 20 1 LPI 0.01250000
## 21 1 Flights 0.01250000
## 22 1 Ship 0.01250000
## 23 1 Bord 0.01250000
## 24 1 Elec 0.01250000
## 25 1 Gas 0.01250000
## 26 1 ConSpeed 0.01250000
## 27 1 Cov4G 0.01250000
## 28 1 Embs 0.03333333
## 29 1 IGOs 0.03333333
## 30 1 UNVote 0.03333333
## 31 1 Renew 0.03333333
## 32 1 PrimEner 0.03333333
## 33 1 CO2 0.03333333
## 34 1 MatCon 0.03333333
## 35 1 Forest 0.03333333
## 36 1 Poverty 0.01851852
## 37 1 Palma 0.01851852
## 38 1 TertGrad 0.01851852
## 39 1 FreePress 0.01851852
## 40 1 TolMin 0.01851852
## 41 1 NGOs 0.01851852
## 42 1 CPI 0.01851852
## 43 1 FemLab 0.01851852
## 44 1 WomParl 0.01851852
## 45 1 PubDebt 0.03333333
## 46 1 PrivDebt 0.03333333
## 47 1 GDPGrow 0.03333333
## 48 1 RDExp 0.03333333
## 49 1 NEET 0.03333333
## 50 2 Physical 0.10000000
## 51 2 ConEcFin 0.10000000
## 52 2 Political 0.10000000
## 53 2 Instit 0.10000000
## 54 2 P2P 0.10000000
## 55 2 Environ 0.16666667
## 56 2 Social 0.16666667
## 57 2 SusEcFin 0.16666667
## 58 3 Conn 0.50000000
## 59 3 Sust 0.50000000
## 60 4 Index 1.00000000
```
A nice way to visualise this is to call `plotframework()` (see [Initial visualisation and analysis](initial-visualisation-and-analysis.html#initial-visualisation-and-analysis)). `effectiveWeight()` also gives other info on the parents of each indicator/aggregate which may be useful for some purposes (e.g. some types of plots external to COINr).
11\.4 Final points
------------------
Weighting and correlations form a complex topic which is important to explore and address. On the other hand, weighting and correlations are just one part of a composite indicator, and balancing and optimising weights should probably not be pursued at the expense of building a confusing composite indicator that makes no sense to most people (depending on the context and application).
## 4 1 PRemit 1
## 5 1 ForPort 1
## 6 1 CostImpEx 1
```
The structure of the weights is a data frame, with each row being either an indicator or aggregation. The `AgLevel` column points to the aggregation level, and the `Weight` column is the weight, *which is relative inside its aggregation group*.
This last point is important, so let's clarify. When aggregating, the weights within each group are normalised to sum to one. So if we have three indicators in a group and we specify the weights as all ones, the actual weights will each be 1/3\. This means that if we are aiming for equal weighting, we can put any number we like, as long as it is the same for each indicator in the group.
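To see what this normalisation amounts to, here is a tiny sketch in base R (the numbers are purely illustrative):

```
# Weights within a group are rescaled to sum to one, so only their ratios matter
w <- c(1, 1, 1)   # three indicators in a group, nominally weighted 1 each
w / sum(w)        # actual weights used: 0.333, 0.333, 0.333
w2 <- c(5, 5, 5)  # any other equal numbers give exactly the same result
w2 / sum(w2)
```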
Another implication of relative weighting is that the number of indicators within a group also matters. If we have two indicators in a group, equally weighted, their weight is 0\.5\. If we have ten, they are each weighted 0\.1\. Moreover, an indicator's final weight in the index is also affected by its parent group weights (e.g. sub\-pillar and pillar weights). This is why it is important to be aware of the so\-called “effective weights”, which are the final weights of the indicators (and of each aggregate) in the index. COINr has a function to calculate this: see [Effective weights](weighting-1.html#effective-weights).
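As a rough sketch of the effective weight idea (illustrative numbers, not taken from a COIN): an indicator's effective weight is the product of its own normalised weight and the normalised weights of its parents at each level above.

```
# Effective weight of a single indicator under equal weighting (illustrative):
# 5 indicators in its pillar, 5 pillars in its sub-index, 2 sub-indexes in the index
w_indicator <- 1 / 5   # normalised weight within its pillar
w_pillar    <- 1 / 5   # pillar's normalised weight within its sub-index
w_subindex  <- 1 / 2   # sub-index's normalised weight within the index
w_indicator * w_pillar * w_subindex   # effective weight = 0.02
```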
Anyway, to change the weights we can just directly modify the data frame. Let’s say we want to set the weight of the connectivity sub\-index to 0\.5\.
```
NewWeights$Weight[NewWeights$Code == "Conn"] <- 0.5
tail(NewWeights)
## AgLevel Code Weight
## 55 2 Environ 1.0
## 56 2 Social 1.0
## 57 2 SusEcFin 1.0
## 58 3 Conn 0.5
## 59 3 Sust 1.0
## 60 4 Index 1.0
```
Now, to actually use these weights in the aggregation, we can either (a) input them directly in the `aggregate()` function, or (b) first attach them to the COIN and then call `aggregate()`, pointing to the new set of weights. The latter option seems more sensible because that way, all our sets of weights are neatly stored. Also, we can access them in the reweighting app as explained in the next section. Sets of weights need to be stored as a named data frame in `.$Parameters$Weights` to be accessible to `aggregate()`.
```
# put new weights in the COIN
ASEM$Parameters$Weights$NewWeights <- NewWeights
# Aggregate again to get new results
ASEM <- aggregate(ASEM, dset = "Normalised", agweights = "NewWeights")
```
Now the `ASEM` COIN contains the updated results using the new weights.
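If you want to see how much the new weights move the results, one option is to keep a copy of the previously aggregated COIN and compare index scores. This is only a sketch: `ASEM_orig` is a hypothetical copy made before re-aggregating, and it assumes both aggregated data sets contain `UnitCode` and `Index` columns.

```
# Compare index scores and ranks before and after re-weighting (sketch)
old_res <- ASEM_orig$Data$Aggregated[, c("UnitCode", "Index")]
new_res <- ASEM$Data$Aggregated[, c("UnitCode", "Index")]
comp <- merge(old_res, new_res, by = "UnitCode", suffixes = c("_old", "_new"))
comp$RankChange <- rank(-comp$Index_old) - rank(-comp$Index_new)
head(comp[order(-abs(comp$RankChange)), ])  # units most affected by the change
```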
### 11\.2\.2 Interactive re\-weighting with ReW8R
Re\-weighting manually, as we have just seen, is quite simple. However, depending on your objectives, weighting can be an iterative process along the lines of:
1. Try a set of weights
2. Examine correlations
3. Check results, rankings
4. Adjust weights
5. Return to 1\.
Doing this at the command line can be time consuming, so COINr includes a “re\-weighting app” called `rew8r()` which lets you interactively adjust weights and explore the effects of doing so.
To run `rew8r()` you must have already aggregated your data using `aggregate()`. This is because every time weights are adjusted, `rew8r()` re\-aggregates to find new results, and it needs to know *which* aggregation method you are using. So, with your pre\-aggregated COIN at hand, simply run:
```
ASEM <- rew8r(ASEM)
```
At this point, a window/tab should open in your browser with the `rew8r()` app, which looks something like this:
Figure 11\.1: rew8r() screenshot
This may look a little over\-complicated to begin with, but let’s go through it step by step.
#### 11\.2\.2\.1 Correlations
First of all, `rew8r()` is based around correlations. Why are correlations important? Because they give an idea of how each indicator is represented in the final index, and at other aggregation levels (see earlier discussion in this chapter).
The `rew8r()` app allows you to check the correlations of anything against anything else. The two dropdown menus in the top left allow you to select which aggregation level to correlate against which other. In the screenshot above, the indicators (Level 1\) are being correlated against the pillars (Level 2\), but this can be changed to any of the levels in the index (the first level should always be below the second level otherwise it can cause errors). You can also select which type of correlation to examine, either Pearson (i.e. “standard” correlation), Spearman rank correlation, or Kendall’s tau (an alternative rank correlation).
The correlations themselves are shown in two plots \- the first is the scatter plot in the upper right part of the window, which plots the *correlation* of each indicator in Level 1 on the horizontal axis with its parent in Level 2, against its *weight* (in Level 1\) on the vertical axis. We will come back to this shortly.
Figure 11\.2: Heatmap screenshot (in rew8r)
The second plot is a heatmap of the correlation matrix (between Level 1 and Level 2 in this case) which is shown in the lower right corner. This gives the correlation values between all the indicators/groups of each level as colours according to a discrete colour map. This colour map is intended to highlight highly correlated indicators (green), as well as negatively or weakly correlated indicators (red). These thresholds can be adjusted using the low and high correlation threshold dropdown menus in the side panel (on the left). The aggregation groups are also shown as overlaid rectangles, and correlation values can be turned on and off using the checkbox below the heatmap. The point of this plot is to show at a glance where there are very high or low correlations, possibly within aggregation groups. For example, indicators that are negatively correlated with their aggregation group can be problematic. Note that correlation heatmaps can also be generated independently of the app using the `plotCorr()` function \- this is described more in the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) section.
Highly\-correlated indicators are also listed in a table at the lower middle of the window. The “Ind\-Ind” tab flags any indicators that are above the high correlation threshold and within the same aggregation group (the aggregation level immediately above the indicator level). Note that this is *not* dependent on the weights since it regards correlations between indicators only. The “Cross\-level” tab instead gives a table which flags any correlations across the two levels selected in the side panel which are above the threshold.
#### 11\.2\.2\.2 Weighting
Let us now turn back to the scatter plot. The scatter plot shows the correlation values on the horizontal axis, but also shows the weight of each indicator or aggregate in the first level selected in the dropdown menu. The purpose here is to show at a glance the weighting, and to make adjustments. To adjust a weight, either click on one of the points in the plot (hovering will show the names of the indicators or aggregates), or select the indicator/aggregate in the dropdown menu in the side panel under the “Weighting” subsection. Then adjust the weight using the slider. Doing this will automatically update all of the plots and tables, so you can see interactively how the correlations change as a result.
Figure 11\.3: Correlation scatter plot screenshots in rew8r().
Figure 11\.4: Correlation scatter plot screenshots in rew8r().
Importantly, the scatter plot only shows correlations between each indicator/aggregate in the first level *with its parent* in the second selected level. When everything is plotted on one plot, this may not be so clear, so the “Separate groups” checkbox in the side panel changes the plot to multiple sub\-plots, one for each group in the second selected level. This helps to show how correlations are distributed within each group.
The results of the index, in terms of ranks and scores of units, can be found by switching to the “Results” tab in the main window.
Other than adjusting individual weights, any set of weights that is stored in `.$Parameters$Weights` can be selected, and this will automatically show the correlations. The button “Equal weights” sets weights to be equal at the current level.
Finally, if you have adjusted the weights and you wish to save the new set of weights back to the COIN, you can do this in the “Weights output” subsection of the side panel. The current set of weights is saved by entering a name and clicking the “Save” button. This saves the set of weights to a list, and when the app is closed (using the “Close app” button), it will return them to the updated COIN under `.$Parameters$Weights`, with the specified name. You can save multiple sets of weights in the same session by making adjustments, assigning a name, and clicking “Save”. When you close the app, all the weight sets will be attached to the COIN. If you don’t click “Close app”, any new weight sets created in the session will be lost.
#### 11\.2\.2\.3 Remarks
The `rew8r()` app is something of a work in progress \- it is difficult to strike a balance between conveying the relevant information and conveying too much information. In future versions, it may be updated to become more streamlined.
### 11\.2\.3 Automatic re\-weighting
As discussed earlier, weighting can also be performed statistically, by optimising or targeting some statistical criteria of interest. COINr includes a few possibilities in this respect.
#### 11\.2\.3\.1 With PCA
The `getPCA()` function is used as a quick way to perform PCA on COINs. It does two things: first, it outputs PCA results for any aggregation level; and second, it provides PCA weights corresponding to the “loadings” of the first principal component. As discussed above, this is the linear combination that results in the most explained variance.
The other outputs of `getPCA()` are left to the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) section. Here, we will focus on weighting. To obtain PCA weights, simply run something like:
```
ASEM <- getPCA(ASEM, dset = "Aggregated", aglev = 2)
```
This calls the aggregated data set, and performs PCA on aggregation level 2, i.e. the pillars in the case of ASEM. The result is an updated COIN, with a new set of weights called `.$Parameters$Weights$PCA_AggregatedL2`. This is automatically named according to the data set used and the aggregation level.
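To give a feel for where such weights come from, here is a rough base R sketch of the idea for one group of components (the Connectivity pillars). This is purely illustrative and not necessarily identical to what `getPCA()` does internally (for example regarding grouping, missing values or the sign of the loadings):

```
# Illustrative only: PCA on the Connectivity pillar scores, weights from PC1 loadings
pillars <- ASEM$Data$Aggregated[, c("Physical", "ConEcFin", "Political", "Instit", "P2P")]
pca <- prcomp(na.omit(pillars), scale. = TRUE)
loadings1 <- abs(pca$rotation[, 1])   # loadings of the first principal component
loadings1 / sum(loadings1)            # rescaled so they sum to one
```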
An important point here is that these weights only have the optimal properties described above if the components (the pillars here) are aggregated directly into a single value. In this example that is not the case: above the pillar level there is another aggregation (the sub\-indexes) before reaching the index, and the PCA weights do not account for this intermediate level, so they are not strictly correct. However, depending on the aggregation structure and weighting (i.e. if it is fairly even), they may still give a reasonable approximation.
*Note: PCA weights have not been fully tested at the time of writing; treat them with a healthy dose of skepticism.*
#### 11\.2\.3\.2 Weight optimisation
An alternative approach is to optimise weights iteratively. This is in some ways less elegant (and slower) because it uses numerical optimisation, but it allows much more flexibility.
The `weightOpt()` function gives several options in this respect. In short, it allows weights to be optimised either (a) to agree with some pre\-specified “importances” (perhaps obtained by the budget allocation method, or perhaps simply equal importance), or (b) to maximise the information conveyed by the composite indicator. Starting with the first option, this is how it can be applied to our example data:
```
ASEM <- weightOpt(ASEM, itarg = "equal", aglev = 3, out2 = "COIN")
## iterating... squared difference = 0.000814548399872482
## iterating... squared difference = 0.00116234608542398
## iterating... squared difference = 0.00052279688155717
## iterating... squared difference = 0.000271428850497901
## iterating... squared difference = 4.19003122660241e-05
## iterating... squared difference = 1.66726715407502e-06
## iterating... squared difference = 0.000166401854209433
## iterating... squared difference = 0.000357199714112592
## iterating... squared difference = 4.91262049731255e-05
## iterating... squared difference = 0.000189074238231704
## iterating... squared difference = 2.15217457991648e-06
## iterating... squared difference = 4.02429806062667e-05
## iterating... squared difference = 1.04404840785835e-05
## iterating... squared difference = 1.00833908145386e-05
## iterating... squared difference = 2.40420217478906e-06
## Optimisation successful!
```
This targets aggregation level 3 (the sub\-indexes), and the `itarg` argument here is set to “equal”, which means that the objective is to make the correlations between each sub\-index and the index equal to one another. More details of the results of this optimisation are found in the `.$Analysis$Weights` folder of the object. In particular, we can see the final correlations and the optimised weights:
```
ASEM$Analysis$Weights$OptimsedLev3$CorrResults
## Desired Obtained OptWeight
## 1 0.5 0.8971336 0.400
## 2 0.5 0.8925119 0.625
```
In this case, the weights are slightly uneven but seem reasonable. The correlations are also almost exactly equal. The full set of weights is found as a new weight set under `.$Parameters$Weights`.
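As a quick sanity check, the reported correlations can also be reproduced directly from the aggregated data. This is a sketch which assumes that the aggregated data set reflects the optimised weights and contains the `Conn`, `Sust` and `Index` columns:

```
# Correlation of each sub-index with the index (sketch)
agg <- ASEM$Data$Aggregated
cor(agg[, c("Conn", "Sust")], agg$Index, use = "pairwise.complete.obs")
```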
Other possibilities with the `weightOpt()` function include the following (a combined example call is sketched after this list):
* Specifying an unequal vector of target importances using the `itarg` argument. E.g. taking the sub\-pillar level, specifying `itarg = c(0.5, 1)` would optimise the weights so that the first sub\-pillar has half the correlation of the second sub\-pillar with the index.
* Optimising the weights at other aggregation levels using the `aglev` argument
* Optimising over other correlation types, e.g. rank correlations, using the `cortype` argument
* Aiming to maximise overall correlations by setting `optype = "infomax"`
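For example, a call combining some of these options might look like the following. This is a sketch: the target importances are illustrative, and `"spearman"` is assumed here to be an accepted value of `cortype`.

```
# Optimise sub-index weights against unequal targets using rank correlation (sketch)
ASEM <- weightOpt(ASEM, itarg = c(0.5, 1), aglev = 3, cortype = "spearman", out2 = "COIN")
```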
It’s important to point out that only one level can be optimised at a time, and the optimisation is “conditional” on the weights at other levels. For example, optimising at the sub\-index level produces optimal correlations for the sub\-indexes, but not for any other level. If we then optimise at lower levels, this will change the correlations of the sub\-indexes! So probably the most sensible strategy is to optimise for the last aggregation level (before the index).
That said, an advantage of `weightOpt()` is that since it is numerical, it can be used for any type of aggregation method, any correlation type, and with any index structure. It calls the aggregation method that you last used to aggregate the index, so it rebuilds the index according to your specifications.
Other correlation types may be of interest when linear correlation (i.e. Pearson correlation) is not a sufficient measure. This is the case for example, when distributions are highly skewed. Rank correlations are robust to outliers and can handle some nonlinearity (but not non\-monotonicity). Fully nonlinear correlation methods [also exist](https://www.sciencedirect.com/science/article/pii/S1470160X17301759) but most of the time, relationships between indicators are fairly linear. Nonlinear correlation may be included in a future version of COINr.
Setting `optype = "infomax"` changes the optimisation criterion, to instead *maximise* the sum of the correlations, which is equivalent to maximising the information transferred to the index, and is similar to PCA. Like PCA, this can however result in fairly unequal weights, and not infrequently in negative weights.
The `weightOpt()` function uses a numerical optimisation algorithm which does not always succeed in finding a good solution. Optimisation is a large field in its own right, and is more difficult the more variables you optimise over. If `weightOpt()` fails to find a good solution, try:
* Adjusting the `toler` argument \- this is the acceptable error between the target importances and those obtained with the final set of weights. Relaxing (increasing) it makes it easier for the optimisation to terminate successfully, while tightening (decreasing) it demands a closer match.
* Increasing the `maxiter` argument \- this is the number of iterations allowed to find a solution within the specified tolerance. By default it is set at 500\.
Even then, the algorithm may not always converge, depending on the problem.
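For example, a retry with a more generous tolerance and iteration budget might look like this (the values are purely illustrative):

```
# Retry the optimisation with adjusted tolerance and more iterations (sketch)
ASEM <- weightOpt(ASEM, itarg = "equal", aglev = 3, toler = 0.05, maxiter = 1000, out2 = "COIN")
```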
11\.3 Effective weights
-----------------------
By now, it should be clear that the final weight of an indicator is affected not only by its own weight, but also by the weights of any aggregate groups to which it belongs. If the aggregation method is the arithmetic mean, the “effective weight”, i.e. the final weight of each indicator accounting for its parent weights, can be easily obtained by COINr’s `effectiveWeight()` function:
```
# get effective weight info
EffWeights <- effectiveWeight(ASEM)
EffWeights$EffectiveWeightsList
## AgLevel Code EffectiveWeight
## 1 1 Goods 0.02000000
## 2 1 Services 0.02000000
## 3 1 FDI 0.02000000
## 4 1 PRemit 0.02000000
## 5 1 ForPort 0.02000000
## 6 1 CostImpEx 0.01666667
## 7 1 Tariff 0.01666667
## 8 1 TBTs 0.01666667
## 9 1 TIRcon 0.01666667
## 10 1 RTAs 0.01666667
## 11 1 Visa 0.01666667
## 12 1 StMob 0.01250000
## 13 1 Research 0.01250000
## 14 1 Pat 0.01250000
## 15 1 CultServ 0.01250000
## 16 1 CultGood 0.01250000
## 17 1 Tourist 0.01250000
## 18 1 MigStock 0.01250000
## 19 1 Lang 0.01250000
## 20 1 LPI 0.01250000
## 21 1 Flights 0.01250000
## 22 1 Ship 0.01250000
## 23 1 Bord 0.01250000
## 24 1 Elec 0.01250000
## 25 1 Gas 0.01250000
## 26 1 ConSpeed 0.01250000
## 27 1 Cov4G 0.01250000
## 28 1 Embs 0.03333333
## 29 1 IGOs 0.03333333
## 30 1 UNVote 0.03333333
## 31 1 Renew 0.03333333
## 32 1 PrimEner 0.03333333
## 33 1 CO2 0.03333333
## 34 1 MatCon 0.03333333
## 35 1 Forest 0.03333333
## 36 1 Poverty 0.01851852
## 37 1 Palma 0.01851852
## 38 1 TertGrad 0.01851852
## 39 1 FreePress 0.01851852
## 40 1 TolMin 0.01851852
## 41 1 NGOs 0.01851852
## 42 1 CPI 0.01851852
## 43 1 FemLab 0.01851852
## 44 1 WomParl 0.01851852
## 45 1 PubDebt 0.03333333
## 46 1 PrivDebt 0.03333333
## 47 1 GDPGrow 0.03333333
## 48 1 RDExp 0.03333333
## 49 1 NEET 0.03333333
## 50 2 Physical 0.10000000
## 51 2 ConEcFin 0.10000000
## 52 2 Political 0.10000000
## 53 2 Instit 0.10000000
## 54 2 P2P 0.10000000
## 55 2 Environ 0.16666667
## 56 2 Social 0.16666667
## 57 2 SusEcFin 0.16666667
## 58 3 Conn 0.50000000
## 59 3 Sust 0.50000000
## 60 4 Index 1.00000000
```
A nice way to visualise this is to call `plotframework()` (see [Initial visualisation and analysis](initial-visualisation-and-analysis.html#initial-visualisation-and-analysis)). `effectiveWeight()` also gives other info on the parents of each indicator/aggregate which may be useful for some purposes (e.g. some types of plots external to COINr).
11\.4 Final points
------------------
Weighting and correlations form a complex topic which is important to explore and address. On the other hand, weighting and correlations are just one part of a composite indicator, and balancing and optimising weights should probably not be pursued at the expense of building a confusing composite indicator that makes no sense to most people (depending on the context and application).
Chapter 12 Visualising results
==============================
The main results of a composite indicator are the aggregated scores and ranks. These can be explored and visualised in different ways, and the way you do this will depend on the context. For example, in building the index, it is useful to check the results at an initial stage, again after methodological adjustments, and so on through the construction process. Short static plots and tables are useful for concisely communicating results in reports or briefs.
When the index is complete, it can be presented using a dedicated web platform, which allows users to explore the results in depth. Some interesting examples of this include:
* The [Lowy Asia Power Index](https://power.lowyinstitute.org/)
* The [Cultural and Creative Cities Monitor](https://composite-indicators.jrc.ec.europa.eu/cultural-creative-cities-monitor/performance-map)
* The [SDG Index and Dashboards](https://dashboards.sdgindex.org/map)
These latter platforms are not usually built in R, but in Javascript. However, COINr does have a simple interactive results viewer that can be used for prototyping.
12\.1 Tables
------------
To see the aggregated results, one way is to look in `.$Data$Aggregated`, which contains the aggregated scores, along with any grouping variables that were attached with the original data. However, this is not always convenient because the table can be quite large, and also the index itself will be the last column. You can of course manually select columns, but to make life easier COINr has a dedicated function, `getResults()`.
```
library(COINr6)
# build example data set
ASEM <- build_ASEM() |>
suppressMessages()
# get results table, top 10
getResults(ASEM) |>
head(10)
## UnitCode UnitName Index Rank
## 1 CHE Switzerland 68.38 1
## 2 NLD Netherlands 64.82 2
## 3 DNK Denmark 64.80 3
## 4 NOR Norway 64.47 4
## 5 BEL Belgium 63.54 5
## 6 SWE Sweden 63.00 6
## 7 AUT Austria 61.90 7
## 8 LUX Luxembourg 61.58 8
## 9 DEU Germany 60.75 9
## 10 MLT Malta 60.36 10
```
By default, `getResults()` gives a table of the highest level of aggregation (i.e. typically the index), along with the scores, and the ranks. Scores are rounded to two decimal places, and this can be controlled by the `nround` argument. To see all aggregate scores, we can change the `tab_type` argument:
```
getResults(ASEM, tab_type = "Aggregates") |>
reactable::reactable()
```
Here, we have used the reactable package to display the full table interactively. Notice that the aggregation levels are ordered from the highest level downwards, which is a more intuitive way to read the table. Other options for `tab_type` are “Full”, which shows all columns in `.$Data$Aggregated` (but rearranged to make them easier to read), and “FullWithDenoms”, which also attaches the denominators. These latter two are probably mostly useful for further data processing, or passing the results on to someone else.
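For example, if you want to pass the full results on to someone else, one simple option (a sketch using base R; the file name is arbitrary) is to export the table to CSV:

```
# Export the full results table for sharing (sketch)
FullResults <- getResults(ASEM, tab_type = "Full")
write.csv(FullResults, "ASEM_results_full.csv", row.names = FALSE)
```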
Often it may be more interesting to look at ranks rather than scores. We can do this by setting `use = "ranks"`:
```
getResults(ASEM, tab_type = "Aggregates", use = "ranks") |>
reactable::reactable()
```
In addition to the `getResults()` function, the `iplotTable()` function also plots tables, but interactively (and indeed using the reactable package underneath). Like many other COINr functions it takes the `isel` and `aglev` arguments to select subsets of the indicator data. Moreover, it colours table cells, similar to conditional formatting in Excel.
As an example, we can request to see all the pillar scores inside the Connectivity sub\-index. The function sorts the units by the first column by default. This function is useful for generating quick tables inside HTML reports.
```
iplotTable(ASEM, dset = "Aggregated", isel = "Conn", aglev = 2)
```
Recall that since the output of this function is a reactable table, we can also edit it further by using reactable functions.
If you want a conditionally\-formatted table like the one above, but of any data frame (e.g. the output of `getResults()`), use COINr’s `colourTable()` function which is a short cut for reactable tables, with a few options thrown in for which colours to use, sorting, and so on.
```
getResults(ASEM, tab_type = "Aggregates") |>
head(10) |>
colourTable(cell_colours = c("#99d594", "#ffffbf", "#fc8d59"), sortcol = "none")
```
For colouring, there are many websites to help select colours. [Colourbrewer](https://colorbrewer2.org/) is one useful resource for maps and heatmaps.
If an individual unit is of interest, you can also call `getUnitSummary()`, which is a function which gives a summary of the scores for an individual unit:
```
getUnitSummary(ASEM, usel = "GBR", aglevs = c(4, 3, 2)) |>
knitr::kable()
```
| Indicator | Score | Rank |
| --- | --- | --- |
| Sustainable Connectivity | 57\.76 | 15 |
| Connectivity | 52\.81 | 14 |
| Sustainability | 62\.72 | 14 |
| Physical | 51\.75 | 10 |
| Economic and Financial (Con) | 21\.17 | 32 |
| Political | 77\.08 | 13 |
| Institutional | 76\.61 | 15 |
| People to People | 37\.44 | 20 |
| Environmental | 64\.88 | 20 |
| Social | 73\.10 | 10 |
| Economic and Financial (Sus) | 50\.17 | 43 |
Finally, the strengths and weaknesses of an individual unit can be found by calling `getStrengthNWeak()`:
```
SAW <- getStrengthNWeak(ASEM, usel = "GBR")
SAW$Strengths |> knitr::kable()
```
| Code | Name | Dimension | Rank | Value | Unit |
| --- | --- | --- | --- | --- | --- |
| TIRcon | Signatory of TIR Convention | Instit | 1 | 1\.00 | (1 (yes)/0 (no)) |
| Research | Research outputs with international collaborations | P2P | 1 | 96300\.00 | Number |
| CultServ | Trade in cultural services | P2P | 1 | 9\.57 | Million USD |
| Flights | International flights passenger capacity | Physical | 1 | 211\.00 | Thousand seats |
| Embs | Embassies network | Political | 1 | 100\.00 | Number |
```
SAW$Weaknesses |> knitr::kable()
```
| Code | Name | Dimension | Rank | Value | Unit |
| --- | --- | --- | --- | --- | --- |
| TolMin | Tolerance for minorities | Social | 34 | 6\.40 | Score |
| TBTs | Technical barriers to trade | Instit | 38 | 1190\.00 | Number of measures (initiated, in force) |
| Renew | Renewable energy in total final energy consumption | Environ | 41 | 8\.71 | Percent |
| PubDebt | Public debt as a percentage of GDP | SusEcFin | 41 | 89\.00 | Percent |
| GDPGrow | GDP per capita growth | SusEcFin | 42 | 1\.13 | Percent |
These are selected as the top\-ranked and bottom\-ranked indicators for each unit, respectively, where by default the top and bottom five are selected (but this can be changed using the `topN` and `bottomN` arguments).
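For example, to return only the top and bottom three indicators for each unit (a sketch using the `topN` and `bottomN` arguments mentioned above):

```
# Top and bottom three indicators for the UK (sketch)
SAW3 <- getStrengthNWeak(ASEM, usel = "GBR", topN = 3, bottomN = 3)
SAW3$Strengths
SAW3$Weaknesses
```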
12\.2 Plots
-----------
Tables are useful but can be a bit dry, so you can spice this up using the bar chart and map options already mentioned in [Initial visualisation and analysis](initial-visualisation-and-analysis.html#initial-visualisation-and-analysis).
```
iplotBar(ASEM, dset = "Aggregated", isel = "Index", usel = "GBR", aglev = 4)
```
Even better, if we are plotting aggregated data we can also break down the bars into the underlying scores. For example, to see Sustainability scores with underlying sustainability pillar scores visible:
```
iplotBar(ASEM, dset = "Aggregated", isel = "Sust", aglev = 3, stack_children = TRUE)
```
And here’s a map:
```
iplotMap(ASEM, dset = "Aggregated", isel = "Index")
```
Remember that maps only work if both (a) your units are countries, and (b) you use ISO alpha\-3 codes as unit codes.
The map and bar chart options here are very useful for quickly presenting results. However, if you want to make a really beautiful map, you should consider one of the many mapping packages in R, such as [leaflet](https://rstudio.github.io/leaflet/).
Apart from plotting individual indicators, we can inspect and compare unit scores using a radar chart. The function `iplotRadar()` allows the scores of one or more units, in a set of indicators, to be plotted on the same chart. It follows a similar syntax to `iplotTable()` and others, since it calls groups of indicators, so the `isel` and `aglev` arguments should be specified. Additionally, it requires a set of units to plot.
```
iplotRadar(ASEM, dset = "Aggregated", usel = c("CHN", "DEU"), isel = "Conn", aglev = 2)
```
Radar charts, which look exciting to newbies but are often grumbled about by data\-vis veterans, should be used in the right context. In the first place, they are mainly good for *comparisons* between units \- if you simply want to see the scores of one unit, a bar chart may be a better choice because it is easier to see the scores and to compare scores between one indicator and another. Second, they are useful for a smallish number of indicators. A radar chart with two or three indicators looks silly, and with ten indicators or more is hard to read. The chart above, with five indicators, is a clear comparison between two countries (in my opinion), showing that one country scores higher in all dimensions of connectivity than the other, as well as the relative differences.
The `iplotRadar()` function is powered by Plotly, therefore it can be edited using Plotly commands. It is an interactive graphic, so you can add and remove units by clicking on the legend.
A useful feature of this function is the possibility to easily add (group) means or medians. For example, we can add a trace representing the overall median values for each indicator:
```
iplotRadar(ASEM, dset = "Aggregated", usel = c("CHN", "DEU"), isel = "Conn", aglev = 2, addstat = "median")
```
We can also add the mean or median of a grouping variable that is present in the selected data set. Grouping variables are those that start with “Group\_”. Here we can add the GDP group mean (both the selected countries are in the “XL” group).
```
iplotRadar(ASEM, dset = "Aggregated", usel = c("CHN", "DEU"), isel = "Conn", aglev = 2, addstat = "groupmean",
statgroup = "Group_GDP", statgroup_name = "GDP")
```
12\.3 Interactive exploration
-----------------------------
COINr also offers a Shiny app called `resultsDash()` for quickly and interactively exploring the results. It features the plots described above, but in an interactive app format. The point of this is to give a tool which can be used to explore the results during development, and to demonstrate the results of the index in validation sessions with experts and stakeholders. It is not meant to replace a dedicated interactive platform. It could however be a starting point for a Shiny\-based web platform.
To call the app, simply run:
```
resultsDash(ASEM)
```
This will open a browser window or tab with the results dashboard app. The app allows you to select any of the data sets present in `.$Data`, and to plot any of the indicators inside either on a bar chart or map (see the toggle map/bar box in the bottom right corner). Units (i.e. countries here) can be compared by clicking on the bar chart or the map to select one or more countries \- this plots them on the radar chart in the bottom left. The indicator groups plotted in the radar chart are selected by the “Aggregation level” and “Aggregation group” dropdowns. Finally, the “Table” tab switches to a table view, which uses the `iplotTable()` function mentioned above.
Figure 12\.1: resultsDash screenshot
This app is useful for quickly visualising and presenting results. It might even be possible to host it online using [shinyapps](https://www.shinyapps.io/), although you might need the source code to do this. I am planning to enable online hosting more easily soon.
12\.4 Unit reports
------------------
Many composite indicators produce reports for each unit, for example the “economy profiles” in the Global Innovation Index. Typically this is a summary of the scores and ranks of the unit, possibly some figures, and maybe strengths and weaknesses.
COINr allows country reports to be generated automatically. This is done via a *parameterised R markdown document*, which includes several of the tables and figures shown in this section[4](#fn4). The COINr function `getUnitReport()` generates a report for any specified unit using an example scorecard which is found in the `/UnitReport` directory where COINr is installed on your computer. To easily find this, call:
```
system.file("UnitReport", "unit_report_source.Rmd", package = "COINr")
## [1] "C:/R/RLib/COINr/UnitReport/unit_report_source.Rmd"
```
where the directory will be different on your computer.
Opening this document in R will give you an idea of how the parameterised document works. In short, the COIN object is passed into the R Markdown document, so that anything that is inside the COIN can be accessed and used for generating tables, figures and text. The example included is just that, an example. To see how it looks, we can run:
```
getUnitReport(ASEM, usel = "NZL", out_type = ".html")
```
Calling this should create an html document in your current working directory, which will look something like this:
Figure 12\.2: Unit profile screenshot 1/2
Figure 12\.3: Unit profile screenshot 2/2
This is simply an example of the sort of thing that is possible, and the output can be either an html document, a Word document, or a pdf. It’s important to note that if the output is pdf or a Word doc, this is a *static document* which cannot include interactive plots such as those from plotly, COINr’s iplot functions and any other javascript or html widget figures. If you want to use interactive plots, but still be able to create static documents, you must install the webshot package. To do this, run:
```
install.packages("webshot")
webshot::install_phantomjs()
```
This will effectively render the figures to html, then take a static snapshot of each figure and paste it into your static document. Of course, this loses the interactive element of these plots, but can still render nice plots depending on what you are doing. If you do not have webshot installed, and try to render interactive plots to pdf or docx, you will encounter an error.
That aside, why use a parameterised document, rather than a normal R Markdown doc? Because by parameterising it, we can automatically create unit profiles for all units at once, or a subset, by inserting the relevant unit codes:
```
getUnitReport(ASEM, usel = c("AUS", "AUT", "CHN"), out_type = ".html")
```
This generates reports for the three countries specified, but of course you can do it for all units present if you want to.
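As a sketch of generating a report for every unit at once (this assumes unit codes are available under `.$Data$Aggregated$UnitCode`):

```
# Generate a report for every unit in the index (sketch)
all_units <- unique(ASEM$Data$Aggregated$UnitCode)
getUnitReport(ASEM, usel = all_units, out_type = ".html")
```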
Most likely if you want to use this function you will want to create a customised template that fits your requirements. To do this, copy the existing template found in the COINr directory as described above. Then make your own modifications, and set the `rmd_template` argument to point to the new template. If the output is a Word doc, you can also customise the formatting of the document by additionally including a Word template. To do this, see the [R Markdown Guide](https://bookdown.org/yihui/rmarkdown/word-document.html). In fact, browsing this book is probably a good idea in general if you are planning to generate many unit reports or other markdown reports. You may also wish to create your own function for rendering rather than using `getUnitReport()`. The code for this function can be found on [Github](https://github.com/bluefoxr/COINr/blob/master/R/coin_resultstable.R) and this may help for inspiration.
12\.1 Tables
------------
To see the aggregated results, one way is to look in `.$Data$Aggregated`, which contains the aggregated scores, along with any grouping variables that were attached with the original data. However, this is not always convenient because the table can be quite large, and also the index itself will be the last column. You can of course manually select columns, but to make life easier COINr has a dedicated function, `getResults()`.
```
library(COINr6)
# build example data set
ASEM <- build_ASEM() |>
suppressMessages()
# get results table, top 10
getResults(ASEM) |>
head(10)
## UnitCode UnitName Index Rank
## 1 CHE Switzerland 68.38 1
## 2 NLD Netherlands 64.82 2
## 3 DNK Denmark 64.80 3
## 4 NOR Norway 64.47 4
## 5 BEL Belgium 63.54 5
## 6 SWE Sweden 63.00 6
## 7 AUT Austria 61.90 7
## 8 LUX Luxembourg 61.58 8
## 9 DEU Germany 60.75 9
## 10 MLT Malta 60.36 10
```
By default, `getResults()` gives a table of the highest level of aggregation (i.e. typically the index), along with the scores, and the ranks. Scores are rounded to two decimal places, and this can be controlled by the `nround` argument. To see all aggregate scores, we can change the `tab_type` argument:
```
getResults(ASEM, tab_type = "Aggregates") |>
reactable::reactable()
```
Here, we have used the reactable package to display the full table interactively. Notice that the aggregation levels are ordered from the highest level downwards, which is a more intuitive way to read the table. Other options for `tab_type` are “Full”, which shows all columns in `.$Data$Aggregated` (but rearranged to make them easier to read), and “FullWithDenoms”, which also attaches the denominators. These latter two are probably mostly useful for further data processing, or passing the results on to someone else.
Often it may be more interesting to look at ranks rather than scores We can do this by setting `use = "ranks"`:
```
getResults(ASEM, tab_type = "Aggregates", use = "ranks") |>
reactable::reactable()
```
Additionally to the `getResults()` function, the `iplotTable()` function also plots tables, but interactively (and indeed using the reactable package underneath). Like many other COINr functions it takes the `isel` and `aglev` arguments to select subsets of the indicator data. Moreover, it colours table cells, similar to conditional formatting in Excel.
As an example, we can request to see all the pillar scores inside the Connectivity sub\-index. The function sorts the units by the first column by default. This function is useful for generating quick tables inside HTML reports.
```
iplotTable(ASEM, dset = "Aggregated", isel = "Conn", aglev = 2)
```
Recall that since the output of this function is a reactable table, we can also edit it further by using reactable functions.
If you want a conditionally\-formatted table like the one above, but of any data frame (e.g. the output of `getResults()`), use COINr’s `colourTable()` function which is a short cut for reactable tables, with a few options thrown in for which colours to use, sorting, and so on.
```
getResults(ASEM, tab_type = "Aggregates") |>
head(10) |>
colourTable(cell_colours = c("#99d594", "#ffffbf", "#fc8d59"), sortcol = "none")
```
For colouring, there are many websites to help select colours. [Colourbrewer](https://colorbrewer2.org/) is one useful resource for maps and heatmaps.
If an individual unit is of interest, you can also call `getUnitSummary()`, which is a function which gives a summary of the scores for an individual unit:
```
getUnitSummary(ASEM, usel = "GBR", aglevs = c(4, 3, 2)) |>
knitr::kable()
```
| Indicator | Score | Rank |
| --- | --- | --- |
| Sustainable Connectivity | 57\.76 | 15 |
| Connectivity | 52\.81 | 14 |
| Sustainability | 62\.72 | 14 |
| Physical | 51\.75 | 10 |
| Economic and Financial (Con) | 21\.17 | 32 |
| Political | 77\.08 | 13 |
| Institutional | 76\.61 | 15 |
| People to People | 37\.44 | 20 |
| Environmental | 64\.88 | 20 |
| Social | 73\.10 | 10 |
| Economic and Financial (Sus) | 50\.17 | 43 |
Finally, the strengths and weaknesses of an individual unit can be found by calling `getStrengthNWeak()`:
```
SAW <- getStrengthNWeak(ASEM, usel = "GBR")
SAW$Strengths |> knitr::kable()
```
| Code | Name | Dimension | Rank | Value | Unit |
| --- | --- | --- | --- | --- | --- |
| TIRcon | Signatory of TIR Convention | Instit | 1 | 1\.00 | (1 (yes)/0 (no)) |
| Research | Research outputs with international collaborations | P2P | 1 | 96300\.00 | Number |
| CultServ | Trade in cultural services | P2P | 1 | 9\.57 | Million USD |
| Flights | International flights passenger capacity | Physical | 1 | 211\.00 | Thousand seats |
| Embs | Embassies network | Political | 1 | 100\.00 | Number |
```
SAW$Weaknesses |> knitr::kable()
```
| Code | Name | Dimension | Rank | Value | Unit |
| --- | --- | --- | --- | --- | --- |
| TolMin | Tolerance for minorities | Social | 34 | 6\.40 | Score |
| TBTs | Technical barriers to trade | Instit | 38 | 1190\.00 | Number of measures (initiated, in force) |
| Renew | Renewable energy in total final energy consumption | Environ | 41 | 8\.71 | Percent |
| PubDebt | Public debt as a percentage of GDP | SusEcFin | 41 | 89\.00 | Percent |
| GDPGrow | GDP per capita growth | SusEcFin | 42 | 1\.13 | Percent |
These are selected as the top\-ranked and bottom\-ranked indicators for each unit, respectively, where by default the top and bottom five are selected (but this can be changed using the `topN` and `bottomN` arguments).
12\.2 Plots
-----------
Tables are useful but can be a bit dry, so you can spice this up using the bar chart and map options already mentioned in [Initial visualisation and analysis](initial-visualisation-and-analysis.html#initial-visualisation-and-analysis).
```
iplotBar(ASEM, dset = "Aggregated", isel = "Index", usel = "GBR", aglev = 4)
```
Even better, if we are plotting aggregated data we can also break down the bars into the underlying scores. For example, to see Sustainability scores with underlying sustainability pillar scores visible:
```
iplotBar(ASEM, dset = "Aggregated", isel = "Sust", aglev = 3, stack_children = TRUE)
```
And here’s a map:
```
iplotMap(ASEM, dset = "Aggregated", isel = "Index")
```
Remember that maps only work if both (a) your units are countries, and (b) you use ISO alpha\-3 codes as unit codes.
The map and bar chart options here are very useful for quickly presenting results. However, if you want to make a really beautiful map, you should consider one of the many mapping packages in R, such as [leaflet](https://rstudio.github.io/leaflet/).
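If you go that way, a small sketch of extracting the aggregated results as a plain data frame, ready to be joined onto spatial data (e.g. by ISO alpha-3 code) in leaflet, sf or similar:
```
# pull out the aggregated scores/ranks as an ordinary data frame
res <- getResults(ASEM, tab_type = "Aggregates")
str(res)
```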
Apart from plotting individual indicators, we can inspect and compare unit scores using a radar chart. The function `iplotRadar()` allows the scores of one or more units, in a set of indicators, to be plotted on the same chart. It follows a similar syntax to `iplotTable()` and others, since it calls groups of indicators, so the `isel` and `aglev` arguments should be specified. Additionally, it requires a set of units to plot.
```
iplotRadar(ASEM, dset = "Aggregated", usel = c("CHN", "DEU"), isel = "Conn", aglev = 2)
```
Radar charts, which look exciting to newbies but are often grumbled about by data\-vis veterans, should be used in the right context. First, they are mainly good for *comparisons* between units \- if you simply want to see the scores of one unit, a bar chart may be a better choice because it is easier to read the scores and to compare one indicator against another. Second, they work best with a smallish number of indicators: a radar chart with two or three indicators looks silly, and one with ten or more is hard to read. The chart above, with five indicators, is a clear comparison between two countries (in my opinion), showing that one country scores higher in all dimensions of connectivity than the other, as well as the relative differences.
The `iplotRadar()` function is powered by Plotly, therefore it can be edited using Plotly commands. It is an interactive graphic, so you can add and remove units by clicking on the legend.
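As a small sketch, you could store the chart and tweak it with a standard Plotly command such as `layout()` (the title text here is just illustrative):
```
library(plotly)
# store the radar chart and adjust its title with plotly::layout()
rad <- iplotRadar(ASEM, dset = "Aggregated", usel = c("CHN", "DEU"), isel = "Conn", aglev = 2)
rad |> layout(title = "Connectivity: China vs Germany")
```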
A useful feature of this function is the possibility to easily add (group) means or medians. For example, we can add a trace representing the overall median values for each indicator:
```
iplotRadar(ASEM, dset = "Aggregated", usel = c("CHN", "DEU"), isel = "Conn", aglev = 2, addstat = "median")
```
We can also add the mean or median of a grouping variable that is present in the selected data set. Grouping variables are those that start with “Group\_”. Here we can add the GDP group mean (both the selected countries are in the “XL” group).
```
iplotRadar(ASEM, dset = "Aggregated", usel = c("CHN", "DEU"), isel = "Conn", aglev = 2, addstat = "groupmean",
statgroup = "Group_GDP", statgroup_name = "GDP")
```
12\.3 Interactive exploration
-----------------------------
COINr also offers a Shiny app called `resultsDash()` for quickly and interactively exploring the results. It features the plots described above, but in an interactive app format. The point of this is to give a tool which can be used to explore the results during development, and to demonstrate the results of the index in validation sessions with experts and stakeholders. It is not meant to replace a dedicated interactive platform. It could however be a starting point for a Shiny\-based web platform.
To call the app, simply run:
```
resultsDash(ASEM)
```
This will open a browser window or tab with the results dashboard app. The app allows you to select any of the data sets present in `.$Data`, and to plot any of the indicators inside either on a bar chart or map (see the toggle map/bar box in the bottom right corner). Units (i.e. countries here) can be compared by clicking on the bar chart or the map to select one or more countries \- this plots them on the radar chart in the bottom left. The indicator groups plotted in the radar chart are selected by the “Aggregation level” and “Aggregation group” dropdowns. Finally, the “Table” tab switches to a table view, which uses the `iplotTable()` function mentioned above.
Figure 12\.1: resultsDash screenshot
This app is useful for quickly visualising and presenting results. It might even be possible to host it online using [shinyapps](https://www.shinyapps.io/), although you might need the source code to do this. I am planning to enable online hosting more easily soon.
12\.4 Unit reports
------------------
Many composite indicators produce reports for each unit, for example the “economy profiles” in the Global Innovation Index. Typically this is a summary of the scores and ranks of the unit, possibly some figures, and maybe strengths and weaknesses.
COINr allows country reports to be generated automatically. This is done via a *parameterised R markdown document*, which includes several of the tables and figures shown in this section[4](#fn4). The COINr function `getUnitReport()` generates a report for any specified unit using an example scorecard which is found in the `/UnitReport` directory where COINr is installed on your computer. To easily find this, call:
```
system.file("UnitReport", "unit_report_source.Rmd", package = "COINr")
## [1] "C:/R/RLib/COINr/UnitReport/unit_report_source.Rmd"
```
where the directory will be different on your computer.
Opening this document in R will give you an idea of how the parameterised document works. In short, the COIN object is passed into the R Markdown document, so that anything that is inside the COIN can be accessed and used for generating tables, figures and text. The example included is just that, an example. To see how it looks, we can run:
```
getUnitReport(ASEM, usel = "NZL", out_type = ".html")
```
Calling this should create an html document in your current working directory, which will look something like this:
Figure 12\.2: Unit profile screenshot 1/2
Figure 12\.3: Unit profile screenshot 2/2
This is simply an example of the sort of thing that is possible, and the output can be either an html document, a Word document, or a pdf. It’s important to note that if the output is pdf or a Word doc, this is a *static document* which cannot include interactive plots such as those from plotly, COINr’s iplot functions and any other javascript or html widget figures. If you want to use interactive plots, but still be able to create static documents, you must install the webshot package. To do this, run:
```
install.packages("webshot")
webshot::install_phantomjs()
```
This will effectively render the figures to html, then take a static snapshot of each figure and paste it into your static document. Of course, this loses the interactive element of these plots, but can still render nice plots depending on what you are doing. If you do not have webshot installed, and try to render interactive plots to pdf or docx, you will encounter an error.
That aside, why use a parameterised document, rather than a normal R Markdown doc? Because by parameterising it, we can automatically create unit profiles for all units at once, or a subset, by inserting the relevant unit codes:
```
getUnitReport(ASEM, usel = c("AUS", "AUT", "CHN"), out_type = ".html")
```
This generates reports for the three countries specified, but of course you can do it for all units present if you want to.
Most likely if you want to use this function you will want to create a customised template that fits your requirements. To do this, copy the existing template found in the COINr directory as described above. Then make your own modifications, and set the `rmd_template` argument to point to the new template. If the output is a Word doc, you can also customise the formatting of the document by additionally including a Word template. To do this, see the [R Markdown Guide](https://bookdown.org/yihui/rmarkdown/word-document.html). In fact, browsing this book is probably a good idea in general if you are planning to generate many unit reports or other markdown reports. You may also wish to create your own function for rendering rather than using `getUnitReport()`. The code for this function can be found on [Github](https://github.com/bluefoxr/COINr/blob/master/R/coin_resultstable.R) and this may help for inspiration.
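To give a flavour, here is a hedged sketch of pointing `getUnitReport()` at a customised template. The template path is hypothetical, and the `.docx` value for `out_type` is an assumption; check `?getUnitReport` for the exact strings accepted:
```
# render a Word report from a customised template (path is hypothetical)
getUnitReport(ASEM, usel = "NZL", out_type = ".docx",
              rmd_template = "templates/my_unit_report.Rmd")
```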
Chapter 13 Adjustments and comparisons
======================================
It’s fairly common to make adjustments to the index, perhaps in terms of alternative data sets, indicators, methodological decisions, and so on. COINr allows you to (a) make fast adjustments, and (b) to compare alternative versions of the index relatively easily.
13\.1 Regeneration
------------------
One of the key advantages of working within the “COINrverse” is that (nearly) all the methodology that is applied when building a composite indicator (COIN) is stored automatically in a folder of the COIN called `.$Method`. To see what this looks like:
```
# load COINr if not loaded
library(COINr6)
# build example COIN
ASEM <- build_ASEM()
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
# look in ASEM$Method folder in R Studio...
```
Figure 13\.1: Method folder in ASEM example
The easiest way to view it is by looking in the viewer of R Studio as in the screenshot above. Essentially, the `.$Method` folder has one entry for each COINr function that was used to build the COIN, and inside each of these folders are the arguments to the function that were input. Notice that the names inside `.$Method$denominate` correspond exactly to arguments to `denominate()`, for example.
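If you are not using the R Studio viewer, a quick way to get the same overview from the console is to list the entries and inspect one of them:
```
# list which construction functions have their arguments recorded
names(ASEM$Method)
# inspect the arguments stored for one step, e.g. denominate()
str(ASEM$Method$denominate)
```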
Every time a COINr “construction function” is run, the inputs to the function are automatically recorded inside the COIN. Construction functions are any of the following seven:
| Function | Description |
| --- | --- |
| `assemble()` | Assembles indicator data/metadata into a COIN |
| `checkData()` | Data availability check and unit screening |
| `denominate()` | Denominate (divide) indicators by other indicators |
| `impute()` | Impute missing data using various methods |
| `treat()` | Treat outliers with Winsorisation and transformations |
| `normalise()` | Normalise data using various methods |
| `aggregate()` | Aggregate indicators into hierarchical levels, up to index |
These are the core functions that are used to build a composite indicator, from assembling from the original data, up to aggregation.
One reason to do this is simply to have a record of what you did to arrive at the results. However, this is not the main reason (and in fact, it would anyway be good practice to make your own record by creating a script or markdown doc which records the steps). The real advantage is that results can be automatically regenerated with a handy function called `regen()`.
To regenerate the results, simply run e.g.:
```
ASEM2 <- regen(ASEM)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
```
The `regen()` function reruns all functions that are recorded in the `.$Method` folder *in the order they appear in the folder*. In this example, it runs, in order, `assemble()`, `denominate()`, `impute()`, `treat()`, `normalise()` and `aggregate()`. This replicates exactly the results. But what is the point of that? Well, it means that we can make changes to the index, generally by altering parameters in the `.$Method` folder, and then rerun everything very quickly with a single command. This will be demonstrated in the following section. After that, we will also see how to compare between alternative versions of the index.
Before that, there is one extra feature of `regen()` which is worth mentioning. The `regen()` function only runs the seven construction functions as listed above. These functions are very flexible and should encompass most needs for constructing a composite indicator. But what happens if you want to build in an extra operation, or operations, that are outside of these seven functions?
It is possible to also include “custom” chunks of code inside the COIN. Custom chunks should be written manually using the `quote()` function, to a special folder `.$Method$Custom`. These chunks can be any type of code, but the important thing is to also know *when* to run the code, i.e. at what point in the construction process.
More specifically, custom code chunks are written as a named list. The **name** of the item in the list specifies *when* to perform the operation. For example, “after\_treat” means to perform this immediately after the treatment step. Other options are e.g. “after\_normalise” or “after\_impute” – in general it should be “after\_” followed by the name of one of the construction functions.
The corresponding **value** in the list should be an operation which *must* be enclosed in the `quote()` function. Clearly, the list can feature multiple operations at different points in construction. The COIN is referred to by the generic name `COIN`, rather than the name you have assigned to your COIN.
This is slightly complicated, but can be clarified a bit with an example. Let’s imagine that after Winsorisation, we would like to “reset” one of the Winsorised points to its original value. This is currently not possible inside the `treat()` function so has to be done manually. But we would like this operation to be kept when we regenerate the index with variations in methodology (e.g. trying different aggregation functions, weights, etc.).
We create a list with one operation: resetting the point after the treatment step.
```
# Create list. NOTE the use of the quote() function!
custlist = list(after_treat = quote(COIN$Data$Treated$Bord[COIN$Data$Treated$UnitCode=="BEL"] <- COIN$Data$Imputed$Bord[COIN$Data$Imputed$UnitCode=="BEL"]))
# Add the list to the $Method$Custom folder
ASEM$Method$Custom <- custlist
```
Specifically, we have replaced the Winsorised value of Belgium, for the “Bord” indicator, with its imputed value (i.e. the value it had before it was treated).
Now, when we regenerate the COIN using `regen()`, it will insert this extra line of code immediately after the treatment step. Clearly, anything we refer to in this custom code must be available at that step in the construction. For example, we can’t refer to the aggregated data set if the data has not yet been aggregated.
Using custom code may seem a bit confusing but it adds an extra layer of flexibility. It is however intended for small snippets of custom code, rather than large blocks of custom operations. If you are doing a lot of operations outside COINr, it may be better to do this in your own dedicated script or function, rather than trying to encode absolutely everything inside the COIN. However, also consider that some features of COINr, such as global sensitivity analysis, require everything to be encoded inside the COIN.
Before moving on, it’s worth reiterating that regenerating a COIN runs the construction functions in the order they appear in the Method folder. That means that if you build an index with no imputation, for example, and then later decide to make a copy which now includes imputation, you would have to insert the `.$Method$impute` entry in the place where it should occur. An easy way to do this is using R’s `append()` function.
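As a hedged sketch, assuming (as for the other construction functions) that the entry is keyed by the function name, inserting an `impute()` entry immediately after the denominate step might look like this:
```
# find the position of the denominate entry
pos <- which(names(ASEM$Method) == "denominate")
# insert an impute() entry straight after it (imtype value as used in this chapter)
ASEM$Method <- append(ASEM$Method,
                      list(impute = list(imtype = "indgroup_mean")),
                      after = pos)
```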
13\.2 Adjustments
-----------------
Let us now explore how the COIN can be adjusted and regenerated. This will (hopefully) clarify why regenerating is a useful thing.
The general steps for adjustments are:
1. Copy the object
2. Adjust the index methodology or data by editing the `.$Method` folder and/or the underlying data
3. Regenerate the results
4. Compare alternatives
Copying the object is straightforward, and regeneration has been dealt with in the previous section. Comparison is also addressed in the following section. Here we will focus on adjustments and changing methodology.
### 13\.2\.1 Adding/removing indicators
First of all, let’s consider an alternative index where we decide to add or remove one or more indicators. There are different ways that we could consider doing this.
A first possibility would be to manually create a new data frame of indicator data and indicator metadata, i.e. the `IndData` and `IndMeta` inputs for `assemble()`. This is fine, but we would have to start the index again from scratch and rebuild it.
A better idea is that when we first supply the set of indicators to `assemble()`, we include all indicators that we might possibly want to include in the index, including e.g. alternative indicators with alternative data sources. Then we build different versions of the index using subsets of these indicators. This allows us to use `regen()` and therefore to make fast copies of the index.
To illustrate, consider again the ASEM example. We can rebuild the ASEM index using a subset of the indicators by using the `include` or `exclude` arguments of the `assemble()` function. Because these are stored in `.$Method`, we can easily regenerate the new results.
```
# Make a copy
ASEM_NoLPIShip <- ASEM
# Edit method: exclude two indicators
ASEM_NoLPIShip$Method$assemble$exclude <- c("LPI", "Ship")
# Regenerate results (suppress any messages)
ASEM_NoLPIShip <- regen(ASEM_NoLPIShip, quietly = TRUE)
```
Note that, in the ASEM example, by default, `ASEM$Method$assemble` doesn’t exist because the `include` or `exclude` arguments of `assemble()` are empty, and these are the only arguments to `assemble()` that are recorded in `.$Method$assemble`.
In summary, we have removed two indicators, then regenerated the results using exactly the same methodology as used before. Importantly, `include` and `exclude` operate *relative to the original data* input to `assemble()`, i.e. the data found in `.$Input$Original`. This means that if we now were to make a copy of the version excluding the two indicators above, and exclude another different indicator:
```
# Make a copy
ASEM_NoBord <- ASEM_NoLPIShip
# Edit method: exclude two indicators
ASEM_NoBord$Method$assemble$exclude <- "Bord"
# Regenerate results
ASEM_NoBord <- regen(ASEM_NoBord, quietly = TRUE)
ASEM_NoBord$Parameters$IndCodes
## [1] "Goods" "Services" "FDI" "PRemit" "ForPort" "CostImpEx"
## [7] "Tariff" "TBTs" "TIRcon" "RTAs" "Visa" "StMob"
## [13] "Research" "Pat" "CultServ" "CultGood" "Tourist" "MigStock"
## [19] "Lang" "LPI" "Flights" "Ship" "Elec" "Gas"
## [25] "ConSpeed" "Cov4G" "Embs" "IGOs" "UNVote" "Renew"
## [31] "PrimEner" "CO2" "MatCon" "Forest" "Poverty" "Palma"
## [37] "TertGrad" "FreePress" "TolMin" "NGOs" "CPI" "FemLab"
## [43] "WomParl" "PubDebt" "PrivDebt" "GDPGrow" "RDExp" "NEET"
```
…we see that “LPI” and “Ship” are once again present.
In fact, COINr has an even quicker way to add and remove indicators, which is a short cut function called `indChange()`. In one command you can add or remove indicators and regenerate the results. Unlike the method above, `indChange()` also adds and removes relative to the existing index, which may be more convenient in some circumstances. To demonstrate this, we can use the same example as above:
```
# Make a copy
ASEM_NoBord2 <- indChange(ASEM_NoLPIShip, drop = "Bord", regen = TRUE)
## COIN has been regenerated using new specs.
ASEM_NoBord2$Parameters$IndCodes
## [1] "Goods" "Services" "FDI" "PRemit" "ForPort" "CostImpEx"
## [7] "Tariff" "TBTs" "TIRcon" "RTAs" "Visa" "StMob"
## [13] "Research" "Pat" "CultServ" "CultGood" "Tourist" "MigStock"
## [19] "Lang" "Flights" "Elec" "Gas" "ConSpeed" "Cov4G"
## [25] "Embs" "IGOs" "UNVote" "Renew" "PrimEner" "CO2"
## [31] "MatCon" "Forest" "Poverty" "Palma" "TertGrad" "FreePress"
## [37] "TolMin" "NGOs" "CPI" "FemLab" "WomParl" "PubDebt"
## [43] "PrivDebt" "GDPGrow" "RDExp" "NEET"
```
And here we see that now, “Bord” has been excluded *as well as* “LPI” and “Ship”.
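Indicators can also be added back in; a hedged sketch, assuming `indChange()` has an `add` argument analogous to `drop` (see `?indChange`):
```
# re-add "Bord" and regenerate in one call (the `add` argument is assumed here)
ASEM_BordBack <- indChange(ASEM_NoBord2, add = "Bord", regen = TRUE)
```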
### 13\.2\.2 Other adjustments
We can make any methodological adjustments we want by editing any parameters in `.$Method` and then running `regen()`. For example, we can change the imputation method:
```
# Make a copy
ASEMAltImpute <- ASEM
# Edit .$Method
ASEMAltImpute$Method$impute$imtype <- "indgroup_median"
# Regenerate
ASEMAltImpute <- regen(ASEMAltImpute, quietly = TRUE)
```
We could also change the normalisation method, e.g. to use Borda scores:
```
# Make a copy
ASEMAltNorm <- ASEM
# Edit .$Method
ASEMAltNorm$Method$normalise$ntype <- "borda"
# Regenerate
ASEMAltNorm <- regen(ASEMAltNorm, quietly = TRUE)
```
and of course this extends to any parameters of any of the “construction” functions. We can even alter the underlying data directly if we want, e.g. by altering values in `.$Input$Original$Data`. In short, anything inside the COIN can be edited and then the results regenerated. This allows a fast way to make different alternative indexes and explore the effects of different methodology very quickly.
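As one more hedged sketch, switching the aggregation method follows exactly the same pattern (the `agtype` value shown is an assumption; check `?aggregate` in COINr6 for the valid options):
```
# Make a copy
ASEMAltAgg <- ASEM
# Edit .$Method (agtype value assumed; see ?aggregate)
ASEMAltAgg$Method$aggregate$agtype <- "geom_mean"
# Regenerate
ASEMAltAgg <- regen(ASEMAltAgg, quietly = TRUE)
```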
13\.3 Comparisons
-----------------
A logical follow up to making alternative indexes is to try to understand the differences between these indexes. This can of course be done manually. But to make this quicker, COINr includes a few tools to quickly inspect the differences between different COINs.
In using these tools it should be fairly evident that comparisons are made between different versions of the same index. So the two indexes must have at least some units in common. The tools are intended for methodological variations of the type shown previously in this chapter \- different aggregation, weighting, adding and removing indicators and so on. That said, since comparisons are made on units, it would also be possible to compare two totally different COINs, as long as they have units in common.
To begin with, we can make a simple bilateral comparison between two COINs. Taking two of the index versions created previously:
```
compTable(ASEM, ASEMAltNorm, dset = "Aggregated", isel = "Index") |>
head(10) |>
knitr::kable()
```
| | UnitCode | UnitName | RankCOIN1 | RankCOIN2 | RankChange | AbsRankChange |
| --- | --- | --- | --- | --- | --- | --- |
| 43 | PRT | Portugal | 27 | 16 | 11 | 11 |
| 29 | LAO | Lao PDR | 48 | 39 | 9 | 9 |
| 33 | MLT | Malta | 10 | 19 | \-9 | 9 |
| 14 | EST | Estonia | 22 | 15 | 7 | 7 |
| 21 | IDN | Indonesia | 43 | 49 | \-6 | 6 |
| 13 | ESP | Spain | 19 | 24 | \-5 | 5 |
| 19 | HRV | Croatia | 18 | 23 | \-5 | 5 |
| 17 | GBR | United Kingdom | 15 | 11 | 4 | 4 |
| 30 | LTU | Lithuania | 16 | 12 | 4 | 4 |
| 35 | MNG | Mongolia | 44 | 48 | \-4 | 4 |
The `compTable()` function allows a rank comparison of a single indicator or aggregate, between two COINs. The COINs must both share the indicator code which is assigned to `isel`. By default the output table (data frame) is sorted by the highest absolute rank change downwards, but it can be sorted by the other columns using the `sort_by` argument:
```
compTable(ASEM, ASEMAltNorm, dset = "Aggregated", isel = "Index", sort_by = "RankCOIN1") |>
head(10) |>
knitr::kable()
```
| | UnitCode | UnitName | RankCOIN1 | RankCOIN2 | RankChange | AbsRankChange |
| --- | --- | --- | --- | --- | --- | --- |
| 7 | CHE | Switzerland | 1 | 1 | 0 | 0 |
| 37 | NLD | Netherlands | 2 | 4 | \-2 | 2 |
| 12 | DNK | Denmark | 3 | 2 | 1 | 1 |
| 38 | NOR | Norway | 4 | 3 | 1 | 1 |
| 3 | BEL | Belgium | 5 | 6 | \-1 | 1 |
| 49 | SWE | Sweden | 6 | 5 | 1 | 1 |
| 2 | AUT | Austria | 7 | 7 | 0 | 0 |
| 31 | LUX | Luxembourg | 8 | 10 | \-2 | 2 |
| 11 | DEU | Germany | 9 | 8 | 1 | 1 |
| 33 | MLT | Malta | 10 | 19 | \-9 | 9 |
Why use ranks as a comparison and not scores? As explained elsewhere in this documentation, a different normalisation method or aggregation method can result in very different scores for the same unit. Since scores have no units, the scale is in many ways arbitrary, but ranks are a consistent way of comparing different versions.
For comparisons between more than two COINs, the `compTableMulti()` function can be used.
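A hedged sketch of how such a multi-COIN comparison might be called is below; the interface shown (a list of COINs plus the usual `dset`/`isel` arguments) is an assumption, so check `?compTableMulti` for the exact signature:
```
# compare index ranks across the nominal index and two alternatives (assumed interface)
compTableMulti(list(ASEM, ASEMAltImpute, ASEMAltNorm),
               dset = "Aggregated", isel = "Index")
```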
Finally, you may wish to cross\-check results of a COIN against parallel calculations. For example, you might be reproducing a composite indicator which was calculated in Excel and want to see if the results are the same. COINr has a handy function called `compareDF()`, which gives a fairly detailed comparison between two data frames. This is explained in more detail in the [Helper functions](helper-functions.html#helper-functions) chapter.
Chapter 14 Sensitivity analysis
===============================
Composite indicators, like any model, have many associated uncertainties.
*Sensitivity analysis* can help to quantify the uncertainty in the scores and rankings of the composite indicator, and to identify which assumptions are driving this uncertainty, and which are less important.
14\.1 About
-----------
Sensitivity analysis is often confused with *uncertainty analysis*. Uncertainty analysis involves estimating the uncertainty in the outputs of a system (here, the scores and ranks of the composite indicator), given the uncertainties in the inputs (here, methodological decisions, weights, etc.). The results of an uncertainty analysis include, for example, confidence intervals over the ranks, median ranks, and so on.
Sensitivity analysis is an extra step after uncertainty analysis, and estimates which of the input uncertainties are driving the output uncertainty, and by how much. A rule of thumb, known as the [Pareto Principle](https://en.wikipedia.org/wiki/Pareto_principle) (or the 80/20 Rule) suggests that often, only a small proportion of the input uncertainties are causing the majority of the output uncertainty. Sensitivity analysis allows us to find which input uncertainties are significant (and therefore perhaps worthy of extra attention), and which are not important.
In reality, sensitivity analysis and uncertainty analysis can be performed simultaneously. However in both cases, the main technique is to use Monte Carlo methods. This essentially involves re\-calculating the composite indicator many times, each time randomly varying the uncertain variables (assumptions, parameters), in order to estimate the output distributions.
At first glance, one might think that sensitivity analysis can be performed by switching one assumption at a time, using the tools outlined in [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons). However, uncertainties interact with one another, and to properly understand the impact of uncertainties, one must vary uncertain parameters and assumptions *simultaneously*.
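To make the idea concrete, here is a conceptual sketch of a hand-rolled Monte Carlo loop; this is *not* COINr’s built-in sensitivity analysis function, and the `"minmax"` value for `ntype` is an assumption, but it shows the principle of varying several assumptions simultaneously and regenerating each time:
```
# conceptual Monte Carlo sketch: vary two uncertain assumptions at once
set.seed(1)
imtype_options <- c("indgroup_mean", "indgroup_median")
ntype_options <- c("minmax", "borda") # "minmax" assumed to be a valid ntype
results <- vector("list", 50)
for (i in seq_along(results)) {
  ASEM_i <- ASEM
  ASEM_i$Method$impute$imtype <- sample(imtype_options, 1)
  ASEM_i$Method$normalise$ntype <- sample(ntype_options, 1)
  ASEM_i <- regen(ASEM_i, quietly = TRUE)
  results[[i]] <- getResults(ASEM_i, tab_type = "Aggregates")
}
# results now holds 50 alternative results tables, from which
# distributions over ranks can be estimated
```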
Sensitivity analysis and uncertainty analysis are large topics and are in fact research fields in their own right. To better understand them, a good starting point is [Global Sensitivity Analysis: The Primer](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470725184), and a recent summary of sensitivity analysis research can be found [here](https://www.sciencedirect.com/science/article/pii/S1364815220310112?via%3Dihub).
14\.2 Five steps
----------------
To perform an uncertainty or sensitivity analysis, one must define several things:
1. The system or model (in this case it is a composite indicator, represented as a COIN)
2. Which assumptions to treat as uncertain
3. The alternative values or distributions assigned to each uncertain assumption
4. Which output or outputs to target (i.e. to calculate confidence intervals for)
5. Methodological specifications for the sensitivity analysis itself, for example the method and the number of replications to run.
This should dispel the common idea that one can simply “run a sensitivity analysis”. In fact, all of these steps require some thought and attention, and the results of the sensitivity analysis will be themselves dependent on these choices. Let’s go through them one by one.
### 14\.2\.1 Specifying the model
First, the **system or model**. This should be clear: you need to have already built your (nominal) composite indicator in order to check the uncertainties. Usually this would involve calculating the results up to and including the aggregated index. The choices in this model should represent your “best” choices for each methodological step, for example your preferred aggregation method, preferred set of weights, and so on.
### 14\.2\.2 Which assumptions to vary
Specifying **which assumptions to vary** is more complicated. It is impossible to fully quantify the uncertainty in a composite indicator (or any model, for that matter) because there are simply so many sources of uncertainty, ranging from the input data, the choice of indicators, the structure of the index, and all the methodological steps along the way (imputation, treatment, normalisation, etc.). A reasonable approach is to identify specific assumptions and parameters that could have plausible alternatives, and can be practically varied.
The construction of composite indicators in COINr (deliberately) lends itself well to uncertainty analysis, because as we have seen in the [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons) chapter, all the methodological choices used to build the composite indicator are recorded inside the COIN, and changes can be made by simply altering these parameters and calling the `regen()` function. Sensitivity and uncertainty analysis is simply an extension of this concept \- where we create a large number of alternative COINs, each with methodological variations following the distributions assigned to each uncertain assumption.
The upshot here is that (at least in theory) any parameter from any of the construction functions (see again the table in [Adjustments and comparisons](adjustments-and-comparisons.html#adjustments-and-comparisons)) can be varied as part of a sensitivity analysis. This includes, for example:
* Inclusion and exclusion of indicators
* Data availability thresholds for screening units
* Alternative choices of denominators
* Alternative imputation methods
* Alternative data treatment, Winsorisation thresholds, skew and kurtosis values, transformations
* Alternative normalisation methods and parameters
* Alternative aggregation methods and weight sets
On top of this, it is possible to randomly perturb weights at each level by a specified noise factor. This is explained in more detail later in this chapter.
The reason that these can be varied *in theory* is that conflicts may arise between methodological choices. For example, if a normalisation method results in negative values, we cannot use a default geometric mean aggregation method. For this reason, it is recommended to focus on key uncertainties and start modestly with a sensitivity analysis, working up to a more complex version if required.
### 14\.2\.3 Alternative values
Having selected some key uncertain assumptions, we must assign plausible alternative values. For example, let’s say that an uncertain assumption is the normalisation method. By default, we have used min\-max, but perhaps other methods could be reasonable alternatives. The question is then, which alternatives to test?
The answer here should not be to blindly apply all possible alternatives available, but rather to select some alternatives that represent **plausible** alternatives, ruling out any that do not fit the requirements of the index. For example, using rank normalisation is a robust method that neatly deals with outliers but by doing so, also ignores whether a unit is an exceptional performer. This may be good or bad depending on what you want to capture. If it is “good” then it could be considered as an alternative. If it does not fit the objectives, it is not a plausible alternative, so should not be included in the sensitivity analysis.
Finding plausible alternatives is not necessarily an easy task, and in any case we will end up with a lower bound on the uncertainty, since we cannot fully test all uncertainties, as discussed previously (this is true of any model, not just composite indicators). Yet, we can still do a lot better than no uncertainty analysis at all.
In the end, for each uncertain parameter, we should end up with a list of alternative plausible values. Note that at the moment, COINr assumes equal probability for all alternatives, i.e. uniform distributions. This may be extended to other distributions in future releases.
### 14\.2\.4 Selecting the output
Composite indicators have multidimensional outputs \- one value for each unit and for each aggregate or normalised indicator. Typically, the most interesting outputs to look at in the context of a sensitivity analysis are the final index values, and possibly some of the underlying aggregate scores (sub\-indexes, pillars, etc.).
COINr allows us to select which outputs to target, and this is explained more below.
### 14\.2\.5 SA methodology
Finally, we have to specify what kind of analysis to perform, and how many model runs. Currently, COINr offers either an uncertainty analysis (resulting in distributions over ranks), or a sensitivity analysis (additionally showing which input uncertainties cause the most output uncertainty). As mentioned, sensitivity analysis methodology is a rabbit hole in itself, and interested readers could refer to the references at the beginning of this chapter to find out more.
Sensitivity analysis usually requires more model runs (replications of the composite indicator). Still, composite indicators are fairly cheap to evaluate, depending on the number of indicators and the complexity of the construction. Regardless of whether you run an uncertainty or sensitivity analysis, more model runs is always better because it increases the accuracy of the estimations of the distributions over ranks. If you are prepared to wait some minutes or possibly hour(s), normally this is enough to perform a fairly solid sensitivity analysis.
14\.3 Variance\-based sensitivity analysis
------------------------------------------
COINr is almost certainly the only package in any language which allows a full variance\-based (global) sensitivity analysis on a composite indicator. This also means that variance\-based sensitivity analysis needs a brief explanation.
Variance\-based sensitivity analysis is largely considered as the “gold standard” in testing the effects of uncertainties in modelling and more generally in systems. Briefly, the central idea is that the uncertainty in a single output \\(y\\) of a model can be encapsulated as its variance \\(V(y)\\) \- the greater the variance is, the more uncertain the output.
In a [seminal paper](https://doi.org/10.1016/S0378-4754(00)00270-6), Russian mathematician Ilya Sobol’ showed that this output variance can be decomposed into chunks which are attributable to each uncertain input and interactions between inputs. Letting \\(x\_i\\) denote the \\(i\\)th assumption to be varied, and \\(k\\) the number of uncertain assumptions:
\\\[ V(y)\=\\sum\_i V\_i\+\\sum\_i\\sum\_{j\>i}V\_{i,j}\+...\+V\_{1,2,...,k}, \\]
where
\\\[ V\_i\= V\[E(y\|x\_i)], \\\\ V\_{i,j}\=V\[E(y\|x\_i,x\_j)] \- V\[E(y\|x\_i)]\-V\[E(y\|x\_j)] \\]
and so on for higher terms. Here, \\(V(\\cdot)\\) denotes the variance operator, \\(E(\\cdot)\\) the expected value, and these terms are used directly as *sensitivity indices*, e.g. \\(S\_{i}\=V\_i/V(y)\\) measures the contribution of input \\(x\_i\\) to \\(V(y)\\), without including interactions with other inputs. Since they are standardised by \\(V(y)\\), each sensitivity index measures the *fraction of the variance* caused by each input (or interactions between inputs), and therefore the *fraction of the uncertainty*.
In the same paper, Sobol’ also showed how these sensitivity indices can be estimated using a Monte Carlo design. This Monte Carlo design (running the composite indicator many times with a particular combination of input values) is implemented in COINr. This follows the methodology described in [this paper](https://doi.org/10.1111/j.1467-985X.2005.00350.x), although due to the difficulty of implementing it, it has not been used very extensively in practice.
In any case, the exact details of variance\-based sensitivity analysis are better described elsewhere. Here, the important point is how to interpret the sensitivity indices. COINr produces two indices for each uncertain assumption. The first is the *first order sensitivity index*, which is the fraction of the output variance caused by each uncertain input assumption alone and is defined as:
\\\[ S\_i \= \\frac{V\[E(y\|x\_i)]}{V(y)} \\]
Importantly, this is *averaged over variations in other input assumptions*. In other words, it is not the same as simply varying that assumption alone and seeing what happens, because that result would depend on the fixed values of the other assumptions.
The second measure is the *total order sensitivity index*, sometimes called the *total effect index*, and is:
\\\[ S\_{Ti} \= 1 \- \\frac {V\[E\\left(y \\mid \\textbf{x}\_{\-i} \\right)]}{V(y)} \= \\frac {E\[V\\left(y \\mid \\textbf{x}\_{\-i} \\right)]}{V(y)} \\]
where \\(\\textbf{x}\_{\-i}\\) is the set of all uncertain inputs except the \\(i\\)th. The quantity \\(S\_{Ti}\\) measures the fraction of the output variance caused by \\(x\_i\\) *and any interactions with other assumptions*.
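To make these definitions a little more concrete, here is a minimal, self\-contained sketch (plain R, and *not* COINr’s internal estimator) which approximates \\(S\_1\\) for the toy model \\(y \= x\_1 \+ 2x\_2\\) by binning \\(x\_1\\) and taking the variance of the conditional means. For this model the analytical value is \\(S\_1 \= (1/12\)/(5/12\) \= 0\.2\\).
```
# Toy illustration of a first order sensitivity index (not COINr's estimator)
set.seed(1)
n <- 100000
x1 <- runif(n)
x2 <- runif(n)
y <- x1 + 2*x2                       # toy "model"
# estimate V[E(y|x1)] by averaging y within bins of x1
bins <- cut(x1, breaks = 100)
cond_means <- tapply(y, bins, mean)
S1_hat <- var(cond_means)/var(y)
S1_hat                               # close to the analytical value of 0.2
```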
Depending on your background, this may or may not seem a little confusing. Either way, see the examples in the rest of this chapter, which show that once calculated, the sensitivity indices are fairly intuitive.
14\.4 Sensitivity in COINr
--------------------------
To run a sensitivity analysis in COINr, the function of interest is `sensitivity()`. We must follow the steps outlined above, so first, you should have a nominal composite indicator constructed up to and including the aggregation step. Here, we use our old friend the ASEM index.
```
library(COINr6)
# build ASEM index
ASEM <- build_ASEM()
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
```
### 14\.4\.1 General specifications
Next, we have to decide which parameters to treat as uncertain. Here, we will consider three things:
1. The imputation method
2. The normalisation method
3. The weights
These are chosen as a limited set to keep things relatively simple, and because imputation is always a significant uncertainty (we are basically guessing data points). We will also consider a couple of alternative normalisation methods. Finally, we will randomly perturb the weights.
For the distributions (i.e. the plausible alternatives), we will consider the following:
* Imputation using either indicator group mean, indicator mean, or no imputation
* Normalisation using either min\-max, rank, or distance to maximum
* Perturb pillar and sub index weights by \+/\-25%
Finally, we will consider the index as the target.
Let’s enter these specifications. The most complex part is specifying the `SA_specs` argument of the `sensitivity()` function, which specifies which parameters to vary and which alternative values to use. To explain how this works, the easiest way is to construct the specifications for our example.
```
# define noise to be applied to weights
nspecs <- data.frame(AgLevel = c(2,3), NoiseFactor = c(0.25,0.25))
# create list specifying assumptions to vary and alternatives
SAspecs <- list(
impute = list(imtype = c("indgroup_mean", "ind_mean", "none")),
normalise = list(ntype = c("minmax", "rank", "dist2max")),
weights = list(NoiseSpecs = nspecs, Nominal = "Original")
)
```
Leaving aside the `nspecs` line for now (but see below for details), we focus on the list `SAspecs`, which will be used as the `SA_specs` argument of `sensitivity()`. The *names* of the first level of this list should be any of the seven construction functions. Each named element of the list is itself a list, which specifies the parameters of that function to vary, and the alternative values. In summary, the general format is:
```
function_name = list(parameter name = vector_or_list_of_alternatives)
```
There is no restriction on the number of parameters that can be varied, or the number of alternatives. You can have multiple parameters from the same function, for example.
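For example, a single entry could in principle vary both the normalisation method and its parameters. The sketch below assumes, for illustration only, that `normalise()` accepts an `npara` argument for the normalisation parameters \- check the function documentation for the exact argument name and format in your version.
```
# Hypothetical sketch: two parameters of the same function varied together.
# "npara" is assumed to be the normalisation-parameters argument of normalise().
SAspecs_alt <- list(
  normalise = list(
    ntype = c("minmax", "rank"),
    npara = list(list(minmax = c(0, 100)), list(minmax = c(0, 10)))
  )
)
```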
### 14\.4\.2 Varying weights
The exception to the above format concerns the weights. Weights are input as an argument of the `aggregate()` function, but since this is a single argument, it behaves as a single parameter. One option is therefore to test alternative weight sets as part of the sensitivity analysis:
```
# example entry in SA_specs list of sensitivity() [not used here]
aggregate = list(agweights = c("Weights1", "Weights2", "Weights3"))
```
where “Weights1” and friends are alternative sets of weights that we have stored in `.$Parameters$Weights`. This is a useful test but only gives a limited picture of uncertainty because we test between a small set of alternatives. It may be the best approach if we have two or three fairly clear alternatives for weighting (note: this can also be a way to exclude indicators or even entire aggregation groups, by setting certain weights to zero).
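As a sketch of how an alternative weight set might be created and stored (this assumes, for illustration, that each weight set in `.$Parameters$Weights` is a data frame with `Code` and `Weight` columns):
```
# Sketch: store an alternative weight set that excludes one pillar by zeroing it
Weights2 <- ASEM$Parameters$Weights$Original
Weights2$Weight[Weights2$Code == "Physical"] <- 0
ASEM$Parameters$Weights$Weights2 <- Weights2
```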
If the uncertainty is more general, i.e. we have elicited weights but we feel that there is a reasonable degree of uncertainty surrounding all weight values (which usually there is), COINr also includes the option to apply random “noise” to the weights. With this approach, for each replication of the composite indicator, a random value is added to each weight, of the form:
\\\[ w'\_i \= w\_i \+ \\epsilon\_i, \\; \\; \\epsilon\_i \\sim U\[\-\\phi w\_i, \\phi w\_i] \\]
where \\(w\_i\\) is a weight, \\(\\epsilon\_i\\) is the added noise, and \\(\\phi\\) is a “noise factor”. This means that if we set \\(\\phi \= 0\.25\\), for example, it would let \\(w\_i\\) vary between \+/\-25% of its nominal value, following a uniform distribution.
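As a quick numerical illustration of this formula (plain R, not COINr internals):
```
# Illustration of w'_i = w_i + eps_i, with eps_i ~ U[-phi*w_i, phi*w_i]
set.seed(1)
w <- c(0.5, 0.3, 0.2)   # nominal weights
phi <- 0.25             # noise factor
eps <- runif(length(w), min = -phi*w, max = phi*w)
w + eps                 # each perturbed weight lies within +/-25% of its nominal value
```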
In COINr the noise factor can differ from one aggregation level to another, meaning we can choose which aggregation levels to apply noise to, and how much for each level. To specify this in the `sensitivity()` function, we add it as a “special” `weights` entry in the `SAspecs` list described above. It is special in the sense that it is the only allowed entry that is not the name of a construction function.
The weights entry was already defined above, but let’s look at it again here:
```
# define noise to be applied to weights
nspecs <- data.frame(AgLevel = c(2,3), NoiseFactor = c(0.25,0.25))
nspecs
# create list specifying assumptions to vary and alternatives
weights = list(NoiseSpecs = nspecs,
Nominal = "Original")
```
The weights list has two entries. The first is a data frame where each row gives the specifications for noise to apply to the weights of an aggregation level. It has two columns: `AgLevel`, which is the aggregation level, and `NoiseFactor`, which is the noise factor described above. In the example here, we have specified that at aggregation levels 2 and 3 (pillars and sub\-indexes), there should be noise factors of 0\.25 (\+/\-25% of nominal weight values). Indicator weights always remain at their nominal values since they are not specified in this table.
The second entry, `Nominal`, is the name of the weight set to use as the nominal values (i.e. the baseline to apply the noise to). This should correspond to a weight set present in `.$Parameters$Weights`. Here, we have set it to `Original`, which is the original weights that were input to `assemble()`.
### 14\.4\.3 Running an uncertainty analysis
To actually run the analysis, we call the `sensitivity()` function, using the specifications in the `SAspecs` list, and setting `v_targ = "Index"`, meaning that the index will be the target of the sensitivity analysis. Further, we specify that there should be 100 replications (this number is kept fairly low to limit the time taken to compile this book), and that the type of analysis should be an uncertainty analysis.
```
# run uncertainty analysis
SAresults <- sensitivity(ASEM, v_targ = "Index",
SA_specs = SAspecs,
N = 100,
SA_type = "UA")
## Iteration 1 of 100 ... 1% complete
## Iteration 2 of 100 ... 2% complete
## ... (iterations 3 to 99 omitted)
## Iteration 100 of 100 ... 100% complete
## Time elapsed = 37.13s, average 0.37s/rep.
```
Running a sensitivity/uncertainty analysis can be time consuming and depends on the complexity of the model and the number of runs specified. This particular analysis took less than a minute for me, but could take more or less time depending on the speed of your computer.
The output of the analysis is a list with several entries:
```
# see summary of analysis
summary(SAresults)
## Length Class Mode
## Scores 101 data.frame list
## Parameters 2 data.frame list
## Ranks 101 data.frame list
## RankStats 6 data.frame list
## Nominal 3 data.frame list
## t_elapse 1 -none- numeric
## t_average 1 -none- numeric
## ParaNames 3 -none- character
```
We will go into more detail in a minute, but first we can plot the results of the uncertainty analysis using a dedicated function `plotSARanks()`:
```
plotSARanks(SAresults)
```
This plot orders the units by their nominal ranks, and plots the median rank across all the replications of the uncertainty analysis, as well as the 5th and 95th percentile rank values. The ranks are the focus of an uncertainty analysis because scores can change drastically depending on the method. Ranks are the only comparable metric across different composite indicator methodologies[5](#fn5). The `plotSARanks()` function also gives options for line and dot colours.
Let us now look at the elements in `SAresults`. The data frames `SAresults$Scores` and `SAresults$Ranks` give the scores and ranks respectively for each replication of the composite indicator. The `SAresults$Parameters` data frame gives the uncertain parameter values used for each iteration:
```
head(SAresults$Parameters)
## imtype ntype
## 1 indgroup_mean minmax
## 2 ind_mean dist2max
## 3 ind_mean rank
## 4 indgroup_mean minmax
## 5 ind_mean rank
## 6 none minmax
```
Here, each column is a parameter that was varied in the analysis. Note that this does not include the weights used for each iteration since these are themselves data frames.
The `SAresults$RankStats` table gives a summary of the main statistics of the ranks of each unit, including mean, median, and percentiles:
```
SAresults$RankStats
## UnitCode Nominal Mean Median Q5 Q95
## 1 AUT 7 6.93 7.0 5.00 8.00
## 2 BEL 5 5.36 5.0 3.00 7.05
## 3 BGR 30 29.65 30.0 28.00 31.00
## 4 HRV 18 20.41 20.0 17.00 25.00
## 5 CYP 29 29.91 30.0 28.00 32.00
## 6 CZE 17 17.21 17.0 16.00 19.00
## 7 DNK 3 2.60 2.0 2.00 5.00
## 8 EST 22 19.85 20.0 12.00 25.05
## 9 FIN 13 13.47 14.0 11.00 16.00
## 10 FRA 21 20.40 20.0 18.00 23.05
## 11 DEU 9 8.83 9.0 8.00 11.00
## 12 GRC 32 32.01 32.0 30.00 34.00
## 13 HUN 20 20.84 21.0 19.00 24.00
## 14 IRL 12 12.04 11.0 10.00 16.00
## 15 ITA 28 27.95 28.0 27.00 29.00
## 16 LVA 23 22.40 23.0 19.95 25.00
## 17 LTU 16 15.08 16.0 11.00 18.00
## 18 LUX 8 8.80 9.0 6.00 12.00
## 19 MLT 10 13.24 12.0 9.00 19.00
## 20 NLD 2 3.07 3.0 2.00 4.00
## 21 NOR 4 3.59 4.0 2.00 5.00
## 22 POL 26 26.14 26.0 25.00 28.00
## 23 PRT 27 23.72 26.0 15.00 27.00
## 24 ROU 25 24.15 25.0 19.00 28.00
## 25 SVK 24 23.47 23.0 22.00 26.00
## 26 SVN 11 10.04 10.0 8.00 12.00
## 27 ESP 19 22.13 22.0 18.00 26.05
## 28 SWE 6 5.93 6.0 5.00 7.00
## 29 CHE 1 1.00 1.0 1.00 1.00
## 30 GBR 15 14.04 15.0 11.00 16.00
## 31 AUS 35 35.78 35.0 34.00 39.00
## 32 BGD 46 45.65 46.0 41.00 50.00
## 33 BRN 40 38.80 38.0 35.00 45.00
## 34 KHM 37 36.57 37.0 33.00 39.00
## 35 CHN 49 49.47 49.5 47.00 51.00
## 36 IND 45 45.10 45.0 42.00 48.00
## 37 IDN 43 45.08 45.0 41.00 49.00
## 38 JPN 34 33.67 34.0 32.00 35.00
## 39 KAZ 47 45.76 46.0 42.00 49.00
## 40 KOR 31 30.92 31.0 28.00 33.00
## 41 LAO 48 42.42 41.0 35.95 49.00
## 42 MYS 39 40.04 40.0 36.00 45.05
## 43 MNG 44 45.34 46.0 40.00 49.00
## 44 MMR 41 41.94 42.0 38.00 47.05
## 45 NZL 33 33.00 33.0 31.00 36.00
## 46 PAK 50 48.40 48.5 46.00 50.00
## 47 PHL 38 40.76 41.0 37.95 44.05
## 48 RUS 51 50.60 51.0 50.00 51.00
## 49 SGP 14 13.58 13.0 10.00 17.00
## 50 THA 42 41.45 41.0 39.00 44.00
## 51 VNM 36 37.41 37.5 36.00 39.00
```
It also includes the nominal ranks for comparison. Finally, `SAresults$Nominal` gives a summary of nominal ranks and scores.
If this information is not enough, the `store_results` argument to `sensitivity()` gives options of what information to store for each iteration. For example, `store_results = "results+method"` returns the information described so far, plus the full `.$Method` list of each replication (i.e. the full specifications used to build the composite indicator). Setting `store_results = "results+COIN"` stores all results and the complete COIN of each replication. Of course, this latter option in particular will take up more memory.
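For example, a call storing the full methodology of each replication might look like this (a sketch only, not run here):
```
# Sketch: also store the full .$Method specifications of each replication
SAresults_m <- sensitivity(ASEM, v_targ = "Index",
                           SA_specs = SAspecs,
                           N = 100,
                           SA_type = "UA",
                           store_results = "results+method")
```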
### 14\.4\.4 Running a sensitivity analysis
Running a sensitivity analysis, i.e. understanding the individual contributions of input uncertainties, is similar to an uncertainty analysis, but there are some extra considerations to keep in mind.
The first is that the target of the sensitivity analysis is different. In an uncertainty analysis we can calculate confidence intervals for each unit, but taking the same approach in a sensitivity analysis would give a separate set of sensitivity indices for every single unit, which would be hard to interpret. Instead, COINr calculates sensitivity with respect to the *average absolute rank change* between the nominal and perturbed values.
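As a tiny illustration of this target quantity, the average absolute rank change between a nominal and a perturbed ranking is simply:
```
# mean absolute rank change between nominal and perturbed ranks (toy numbers)
nominal_ranks <- c(1, 2, 3, 4, 5)
perturbed_ranks <- c(2, 1, 3, 5, 4)
mean(abs(nominal_ranks - perturbed_ranks))  # = 0.8
```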
The second consideration is that the number of replications required for a sensitivity analysis is quite a bit higher than the number required for an uncertainty analysis. In fact, when you specify `N` in the `sensitivity()` function and set `SA_type = "SA"`, the actual number of replications is \\(N\_T \= N(d \+2\)\\), where \\(d\\) is the number of uncertain input parameters/assumptions.
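For example, with the three uncertain assumptions considered here (\\(d \= 3\\)) and \\(N \= 500\\) as in the call below, the total number of replications is \\(N\_T \= 500 \\times (3\+2\) \= 2500\\).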
To run the sensitivity analysis, the format is very similar to the uncertainty analysis. We simply run `sensitivity()` with `SA_specs` in exactly the same format as previously (we can use the same list), and set `SA_type = "SA"`.
```
# Not actually run here. If you run this, it will take a few minutes
SAresults <- sensitivity(ASEM, v_targ = "Index",
SA_specs = SAspecs,
N = 500,
SA_type = "SA", Nboot = 1000)
```
The output of the sensitivity analysis is a list. It is in fact an extended version of the list produced when `SA_type = "UA"`: it contains the recorded scores and ranks for each iteration, the confidence intervals for ranks, and all the other outputs associated with `SA_type = "UA"`.
Additionally it outputs a data frame `.$Sensitivity`, which gives first and total order sensitivity indices for each input variable. Moreover, if `Nboot` is specified in `sensitivity()`, it provides estimated confidence intervals for each sensitivity index using bootstrapping.
```
roundDF(SAresults$Sensitivity)
## Variable Si STi Si_q5 Si_q95 STi_q5 STi_q95
## 1 imtype 0.00 0.04 -0.04 0.05 0.03 0.04
## 2 ntype 0.54 0.80 0.30 0.77 0.72 0.88
## 3 weights 0.23 0.17 0.12 0.34 0.15 0.19
```
Recall that the target output in the sensitivity analysis is the mean absolute rank change (let’s call it the MARC from here). How do we interpret these results? The first order sensitivity index \\(S\_i\\) can be interpreted as the uncertainty caused by the effect of the \\(i\\)th uncertain parameter/assumption *on its own*. The total order sensitivity index is the uncertainty caused by the effect of the \\(i\\)th uncertain parameter/assumption, *including its interactions with other inputs*.
To help understand this, we will use COINr’s plotting function for sensitivity analysis, `plotSA()`.
```
# plot bar chart
plotSA(SAresults, ptype = "bar")
```
This plot shows the three uncertainties that we introduced on the x\-axis: the imputation type, the normalisation type, and the weights. The y\-axis is the sensitivity index, and the total height of each bar is the total effect index \\(S\_{Ti}\\), i.e. the uncertainty caused by the variable on its own (the *main effect*) as well as its interactions. Then each bar is divided into the interaction effects and the main effects.
What this shows is that the normalisation type is the most important uncertainty, followed by the weights, and lastly the imputation type. In fact, the choice of imputation method (between the ones specified) is effectively insignificant. We can also see that the normalisation has a significant interaction effect, probably with the imputation method, whereas the weights do not seem to interact with the other inputs.
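In terms of the table shown above, the interaction portion of each bar is simply \\(S\_{Ti} \- S\_i\\): for the normalisation method this is roughly \\(0\.80 \- 0\.54 \= 0\.26\\).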
Another way of looking at this is in a pie chart:
```
# plot pie chart
plotSA(SAresults, ptype = "pie")
```
Here we can see that more than half of the uncertainty is caused by the normalisation method choice alone, while a bit less than a quarter is caused by the weights, and the remainder by interactions.
It is likely that the normalisation method is important because one of the choices was rank normalisation, which radically alters the distribution of each indicator.
Finally, to see the uncertainty on the estimates of each sensitivity index we can use a box plot (or error bars):
```
# plot box plot
plotSA(SAresults, ptype = "box")
```
This will not work unless `Nboot` was specified. Bootstrapping allows confidence intervals to be estimated, and here it shows that the estimates of the total effect indices (\\(S\_{Ti}\\)) are quite reliable, whereas the confidence intervals are much wider on the first order indices (\\(S\_{i}\\)). Still, with the number of runs applied, it is possible to draw robust conclusions. If the confidence intervals were still too wide here, you could consider increasing `N` \- this should lead to narrower intervals, although you might have to wait a bit.
14\.5 Removing indicators and aggregates
----------------------------------------
COINr also has a tool which lies somewhere in between the ad\-hoc adjustments and comparisons described in the previous chapter, and the formal sensitivity analysis described here. The `removeElements()` function answers the question: if we remove an indicator, what would happen to the rankings?
This question is relevant because sometimes you may be in doubt about whether an indicator is necessary in the framework. If you were to remove it, would things change that much? It can also be an alternative measure of “indicator importance/influence” \- an indicator which changes very little when removed could be said to be uninfluential.
To make this test, we simply call the `removeElements()` function as follows, which sequentially removes all elements of a given level, with replacement, and returns the average (absolute) rank change each time. Here we do the analysis at pillar level (level 2\).
```
CheckPillars <- removeElements(ASEM, 2, "Index")
## Iteration 1 of 8
## Iteration 2 of 8
## Iteration 3 of 8
## Iteration 4 of 8
## Iteration 5 of 8
## Iteration 6 of 8
## Iteration 7 of 8
## Iteration 8 of 8
```
The output here is a list with the scores and ranks each time for the selected target (by default, the highest level of aggregation is targeted), as well as average rank changes. A good way to view the output is with a bar chart.
```
library(ggplot2)
ggplot(data.frame(Pillar = names(CheckPillars$MeanAbsDiff[-1]),
Impact = CheckPillars$MeanAbsDiff[-1]),
aes(x=Pillar, y=Impact)) +
geom_bar(stat = "identity") +
theme(axis.text.x = element_text(angle = 45, hjust=1))
```
This plot shows that the effect of removing pillars from the framework is definitely not equal. In fact, the “Social” pillar has nearly four times the impact of the “ConEcFin” pillar on average rank change.
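The same check can be run at other levels. For example, at the indicator level (level 1), again targeting the index:
```
# sequentially remove each indicator (level 1) and record the impact on the Index
CheckInds <- removeElements(ASEM, 1, "Index")
```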
```
It also includes the nominal ranks for comparison. Finally, `SAresults$Nominal` gives a summary of nominal ranks and scores.
If this information is not enough, the `store_results` argument to `sensitivity()` gives options for what information to store for each iteration. For example, `store_results = "results+method"` returns the information described so far, plus the full `.$Method` list of each replication (i.e. the full specifications used to build the composite indicator). Setting `store_results = "results+COIN"` stores all results plus the complete COIN of each replication. Of course, this latter option in particular will take up more memory.
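For illustration, repeating the uncertainty analysis call from above with this extra argument might look like the following (a sketch using the options just described; not run here):
```
# sketch: as before, but also store the full .$Method list of each replication
SAresults_m <- sensitivity(ASEM, v_targ = "Index",
                           SA_specs = SAspecs,
                           N = 100,
                           SA_type = "UA",
                           store_results = "results+method")
```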
### 14\.4\.4 Running a sensitivity analysis
Running a sensitivity analysis, i.e. understanding the individual contributions of input uncertainties, is similar to an uncertainty analysis, but there are some extra considerations to keep in mind.
The first is that the target of the sensitivity analysis is different. In an uncertainty analysis you can calculate confidence intervals for each unit, but a sensitivity analysis takes a different approach; otherwise you would end up with a set of sensitivity indices for every single unit, which would be hard to interpret. Instead, COINr calculates sensitivity with respect to the *average absolute rank change* between nominal and perturbed values.
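Written out explicitly (in our own notation, not necessarily COINr's internal implementation), the target computed for each perturbed replication is
\\[ \\bar{d} \= \\frac{1}{n} \\sum\_{u\=1}^{n} \|r'\_u \- r\_u\| \\]
where \\(r\_u\\) is the nominal rank of unit \\(u\\), \\(r'\_u\\) is its rank in the perturbed replication, and \\(n\\) is the number of units.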
The second consideration is that the number of replications required for a sensitivity analysis is quite a bit higher than the number required for an uncertainty analysis. In fact, when you specify `N` in the `sensitivity()` function and set `SA_type = "SA"`, the actual number of replications is \\(N\_T \= N(d \+ 2\)\\), where \\(d\\) is the number of uncertain input parameters/assumptions.
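For example, with the three uncertain assumptions used here (imputation method, normalisation method and weights) and \\(N \= 500\\) as in the call below, this gives \\(N\_T \= 500 \\times (3 \+ 2\) \= 2500\\) replications.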
To run the sensitivity analysis, the format is very similar to the uncertainty analysis. We simply run `sensitivity()` with `SA_specs` in exactly the same format as previously (we can use the same list), and set `SA_type = "SA"`.
```
# Not actually run here. If you run this, it will take a few minutes
SAresults <- sensitivity(ASEM, v_targ = "Index",
SA_specs = SAspecs,
N = 500,
SA_type = "SA", Nboot = 1000)
```
The output of the sensitivity analysis is a list. This is an extended version of the list produced when `SA_type = "UA"`: it still has the recorded scores and ranks for each iteration, as well as confidence intervals for ranks, plus further outputs specific to `SA_type = "SA"`.
Additionally it outputs a data frame `.$Sensitivity`, which gives first and total order sensitivity indices for each input variable. Moreover, if `Nboot` is specified in `sensitivity()`, it provides estimated confidence intervals for each sensitivity index using bootstrapping.
```
roundDF(SAresults$Sensitivity)
## Variable Si STi Si_q5 Si_q95 STi_q5 STi_q95
## 1 imtype 0.00 0.04 -0.04 0.05 0.03 0.04
## 2 ntype 0.54 0.80 0.30 0.77 0.72 0.88
## 3 weights 0.23 0.17 0.12 0.34 0.15 0.19
```
Recall that the target output in the sensitivity analysis is the mean absolute rank change (let’s call it the MARC from here on). How do we interpret these results? The first order sensitivity index \\(S\_i\\) can be interpreted as the uncertainty caused by the effect of the \\(i\\)th uncertain parameter/assumption *on its own*. The total order sensitivity index \\(S\_{Ti}\\) is the uncertainty caused by the effect of the \\(i\\)th uncertain parameter/assumption, *including its interactions with other inputs*.
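For reference, these are the standard variance\-based (Sobol') definitions, written here in our own notation rather than anything COINr\-specific:
\\[ S\_i \= \\frac{V\[E(y \\mid x\_i)]}{V(y)}, \\; \\; \\; S\_{Ti} \= \\frac{E\[V(y \\mid \\mathbf{x}\_{\\sim i})]}{V(y)} \\]
where \\(y\\) is the MARC, \\(x\_i\\) is the \\(i\\)th uncertain input, and \\(\\mathbf{x}\_{\\sim i}\\) denotes all uncertain inputs except \\(x\_i\\).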
To help understand this, we will use COINr’s plotting function for sensitivity analysis, `plotSA()`.
```
# plot bar chart
plotSA(SAresults, ptype = "bar")
```
This plot shows the three uncertainties that we introduced on the x\-axis: the imputation type, the normalisation type, and the weights. The y\-axis is the sensitivity index, and the total height of each bar is the total effect index \\(S\_{Ti}\\), i.e. the uncertainty caused by the variable on its own (the *main effect*) as well as its interactions. Then each bar is divided into the interaction effects and the main effects.
What this shows is that the normalisation type is the most important uncertainty, followed by the weights, and last by the imputation type. In fact, the choice of imputation method (between the ones specified) is effectively insignificant. We can also see that the normalisation has a significant interaction effect, probably with the imputation method, whereas the weights don’t seem to interact with the other inputs.
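Since the interaction contribution of each input is roughly the gap between its total and first order indices, it can also be read straight off the sensitivity table (a quick sketch, not a built\-in COINr function):
```
# rough interaction contribution per input: total effect minus main effect
sens <- SAresults$Sensitivity
sens$Interaction <- sens$STi - sens$Si
sens[c("Variable", "Si", "STi", "Interaction")]
```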
Another way of looking at this is in a pie chart:
```
# plot pie chart
plotSA(SAresults, ptype = "pie")
```
Here we can see that more than half of the uncertainty is caused by the normalisation method choice alone, while a bit less than a quarter is caused by the weights, and the remainder by interactions.
It is likely that the normalisation method is important because one of the choices was rank normalisation, which radically alters the distribution of each indicator.
Finally, to see the uncertainty on the estimates of each sensitivity index we can use a box plot (or error bars):
```
# plot box plot
plotSA(SAresults, ptype = "box")
```
This will not work unless `Nboot` was specified. Bootstrapping allows confidence intervals to be estimated, and this shows that the estimates of the total effect indices (\\(S\_{Ti}\\)) are quite reliable, whereas the confidence intervals are much wider on the first order indices (\\(S\_i\\)). Still, with the number of runs applied, it is possible to draw robust conclusions. If the confidence intervals were still too wide, you could consider increasing `N`; this should lead to narrower intervals, although you might have to wait a bit longer.
14\.5 Removing indicators and aggregates
----------------------------------------
COINr also has a tool which lies somewhere in between the ad\-hoc adjustments and comparisons described in the previous chapter, and the formal sensitivity analysis described here. The `removeElements()` function answers the question: if we remove an indicator, what would happen to the rankings?
This question is relevant because sometimes you may be in doubt about whether an indicator is necessary in the framework. If you were to remove it, would things change that much? It can also be an alternative measure of “indicator importance/influence” \- an indicator which changes very little when removed could be said to be uninfluential.
To run this test, we simply call the `removeElements()` function as follows. It sequentially removes all elements of a given level, with replacement, and returns the average (absolute) rank change each time. Here we do the analysis at pillar level (level 2\).
```
CheckPillars <- removeElements(ASEM, 2, "Index")
## Iteration 1 of 8
## Iteration 2 of 8
## Iteration 3 of 8
## Iteration 4 of 8
## Iteration 5 of 8
## Iteration 6 of 8
## Iteration 7 of 8
## Iteration 8 of 8
```
The output is a list with the scores and ranks of the selected target for each removal (by default, the highest level of aggregation is targeted), as well as the average rank changes. A good way to view the output is with a bar chart.
```
library(ggplot2)
ggplot(data.frame(Pillar = names(CheckPillars$MeanAbsDiff[-1]),
Impact = CheckPillars$MeanAbsDiff[-1]),
aes(x=Pillar, y=Impact)) +
geom_bar(stat = "identity") +
theme(axis.text.x = element_text(angle = 45, hjust=1))
```
This plot shows that the effect of removing pillars from the framework is definitely not equal. In fact, the “Social” pillar has nearly four times the impact of the “ConEcFin” pillar on average rank change.
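As a quick numerical check of that statement, the ratio can be read straight from the list returned above (a small sketch using the `MeanAbsDiff` entry already plotted):
```
# ratio of average absolute rank changes for two pillars
mad <- CheckPillars$MeanAbsDiff[-1]
mad["Social"] / mad["ConEcFin"]
```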
Chapter 15 Helper functions
===========================
15\.1 R Interfaces
------------------
### 15\.1\.1 Data import
COINr has a couple of useful functions to help import and export data and metadata. You might have heard of the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en), which is an Excel\-based tool for building and analysing composite indicators, similar in fact to COINr[6](#fn6). With the `COINToolIn()` function you can import data directly from the COIN Tool to cross\-check or extend your analysis in COINr.
To demonstrate, we can take the example version of the COIN Tool, which you can download [here](https://composite-indicators.jrc.ec.europa.eu/sites/default/files/COIN_Tool_v1_LITE_exampledata.xlsm). Then it’s as simple as running:
```
library(COINr6)
# This is the file path and name where the COIN Tool is downloaded to
# You could also just put it in your project directory.
fname <- "C:/Users/becke/Downloads/COIN_Tool_v1_LITE_exampledata.xlsm"
dflist <- COINToolIn(fname)
## Imported 15 indicators and 28 units.
```
The output of this function is a list with the three data frame inputs to `assemble()`.
```
dflist
## $IndData
## # A tibble: 28 x 17
## UnitName UnitCode ind.01 ind.02 ind.03 ind.04 ind.05 ind.06 ind.07 ind.08
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Austria AT 13.3 80.4 492. 14.9 68.8 34 3.7 87.6
## 2 Belgium BE 15.1 71.8 503. 7 59.6 24 5.2 81.2
## 3 Bulgaria BG 12.3 78.1 440. 2.2 51.3 15 10.6 72
## 4 Cyprus CY 14 76 438. 6.9 16.7 23 3.6 73.4
## 5 Czech Repub~ CZ 13.5 87.6 491. 8.8 73.2 27 3.9 86.7
## 6 Germany DE 9.7 80.2 508. 8.5 46.3 30 5.6 90.1
## 7 Denmark DK 9.7 73 504. 27.7 40.6 39 3.8 83.9
## 8 Estonia EE 8.6 83.3 524. 15.7 35.7 37 4.1 77.1
## 9 Greece EL 11.8 70 458. 4 31.5 30 3.9 49.2
## 10 Spain ES 14.9 57.4 491. 9.4 34.8 33 11.4 68
## # ... with 18 more rows, and 7 more variables: ind.09 <dbl>, ind.10 <dbl>,
## # ind.11 <dbl>, ind.12 <dbl>, ind.13 <dbl>, ind.14 <dbl>, ind.15 <dbl>
##
## $IndMeta
## # A tibble: 15 x 9
## IndCode IndName GPupper GPlower Direction IndWeight Agg1 Agg2 Agg3
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr> <chr>
## 1 ind.01 Pre-primary pu~ NA NA -1 0.4 sp.01 p.01 Index
## 2 ind.02 Share of popul~ NA NA 1 0.3 sp.01 p.01 Index
## 3 ind.03 Reading, maths~ NA NA 1 0.3 sp.01 p.01 Index
## 4 ind.04 Recent training NA NA 1 0.3 sp.02 p.01 Index
## 5 ind.05 VET students NA NA 1 0.35 sp.02 p.01 Index
## 6 ind.06 High computer ~ NA NA 1 0.35 sp.02 p.01 Index
## 7 ind.07 Early leavers ~ NA NA -1 0.7 sp.03 p.02 Index
## 8 ind.08 Recent graduat~ NA NA 1 0.3 sp.03 p.02 Index
## 9 ind.09 Activity rate ~ NA NA 1 0.5 sp.04 p.02 Index
## 10 ind.10 Activity rate ~ NA NA 1 0.5 sp.04 p.02 Index
## 11 ind.11 Long-term unem~ NA NA -1 0.4 sp.05 p.03 Index
## 12 ind.12 Underemployed ~ NA NA -1 0.6 sp.05 p.03 Index
## 13 ind.13 Higher educati~ NA NA -1 0.4 sp.06 p.03 Index
## 14 ind.14 ISCED 5-8 prop~ NA NA -1 0.1 sp.06 p.03 Index
## 15 ind.15 Qualification ~ NA NA -1 0.5 sp.06 p.03 Index
##
## $AggMeta
## # A tibble: 10 x 4
## AgLevel Code Name Weight
## <dbl> <chr> <chr> <dbl>
## 1 4 Index European Skills Index 1
## 2 3 p.01 Skills Development 0.3
## 3 3 p.02 Skills Activation 0.3
## 4 3 p.03 Skills Matching 0.4
## 5 2 sp.01 Compulsory education 0.5
## 6 2 sp.02 Training and tertiary education 0.5
## 7 2 sp.03 Transition to work 0.5
## 8 2 sp.04 Activity rates 0.5
## 9 2 sp.05 Unemployment 0.4
## 10 2 sp.06 Skills mismatch 0.6
```
Because the COIN Tool uses numeric codes for indicators such as `ind.01`, you might want slightly more informative codes. The best way to do this is to name the codes yourself, but a quick solution is to set `makecodes = TRUE` in `COINToolIn()`. This generates short codes based on the indicator names. It will not yield perfect results, but for a quick analysis it might be sufficient; at the least, you could use it as a starting point and then modify the codes by hand.
```
dflist <- COINToolIn(fname, makecodes = TRUE)
## Imported 15 indicators and 28 units.
dflist$IndMeta$IndCode
## [1] "Pre-Pupi" "SharPopu" "ReadMath" "ReceTrai" "Stud"
## [6] "HighComp" "EarlLeav" "ReceGrad" "ActiRate" "ActiRate_1"
## [11] "LongUnem" "UndePart" "HighEduc" "Isce5-Pr" "QualMism"
```
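If you do want to tidy individual codes by hand before proceeding, a minimal sketch is below. The code values `"Stud"` and `"VETStud"` are purely illustrative, and the key point is that a code must be changed consistently in both the metadata and the data column names:
```
# illustrative only: rename one auto-generated code in both metadata and data
old_code <- "Stud"
new_code <- "VETStud"
dflist$IndMeta$IndCode[dflist$IndMeta$IndCode == old_code] <- new_code
names(dflist$IndData)[names(dflist$IndData) == old_code] <- new_code
```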
Hand\-tweaked or not, these codes are a lot better than uninformative numbers. Finally, we can assemble the output into a COIN and begin the construction.
```
ESI <- assemble(dflist[[1]], dflist[[2]], dflist[[3]])
## -----------------
## No denominators detected.
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 15
## Number of units = 28
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 6 aggregate groups: sp.01, sp.02, sp.03, sp.04, sp.05, sp.06
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 3 aggregate groups: p.01, p.02, p.03
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
```
### 15\.1\.2 Export to Excel
Trigger warning for R purists! Sometimes it’s easier to look at your data in Excel. There, I said it. R is great for doing all kinds of complicated tasks, but if you just want to look at big tables of numbers and play around with them, maybe make a few quick graphs, then Excel is a great tool.
Actually Excel is kind of underrated by many people who are used to programming in R or Python, Matlab or even Stata. It has a lot of clever tools that not many people know about. But more importantly, Excel is a *lingua franca* between all kinds of professions \- you can pass an Excel spreadsheet to almost anyone and they will be able to take a look at it and use the data. Try doing that with an R or Python script.
It just boils down to using the right tool for the right job. Anyway, with that aside, let’s look at COINr’s `coin2Excel()` function. You put in your COIN, and it will write a spreadsheet.
```
# Build ASEM index
ASEM <- build_ASEM()
# Get some statistics
ASEM <- getStats(ASEM, dset = "Raw")
# write to Excel
coin2Excel(ASEM, fname = "ASEMresults.xlsx")
```
The spreadsheet will contain a number of tabs, including:
* The indicator data, metadata and aggregation metadata that was input to COINr
* All data sets in the `.$Data` folder, e.g. raw, treated, normalised, aggregated, etc.
* (almost) All data frames found in the `.$Analysis` folder, i.e. statistics tables, outlier flags, correlation tables.
15\.2 Selecting data sets and indicators
----------------------------------------
The `getIn()` function is widely used by many `COINr` functions. It is used for selecting specific data sets, and returning subsets of indicators. While some of this can be achieved fairly easily with base R, or `dplyr::select()`, subsetting in a hierarchical context can be more awkward. That’s where `getIn()` steps in to help.
Although it was made to be used internally, it might also help in other contexts. Note that this can work on COINs or data frames, but is most useful with COINs.
Let’s take some examples. First, we can get a whole data set. `getIn()` will retrieve any of the data sets in the `.$Data` folder, as well as the denominators.
```
# Build data set first, if not already done
ASEM <- build_ASEM()
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
# Get raw data set
datalist <- getIn(ASEM, dset = "Raw")
datalist$ind_data_only
## # A tibble: 51 x 49
## Goods Services FDI PRemit ForPort CostImpEx Tariff TBTs TIRcon RTAs
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 278. 108. 5 5.07 809. 0 1.6 1144 1 30
## 2 598. 216. 5.71 13.4 1574. 0 1.6 1348 1 30
## 3 42.8 13.0 1.35 1.04 15.5 52 1.6 1140 1 30
## 4 28.4 17.4 0.387 1.56 16.9 0 1.6 1179 1 30
## 5 8.77 15.2 1.23 0.477 40.8 100 1.6 1141 1 30
## 6 274. 43.5 3.88 4.69 108. 0 1.6 1456 1 30
## 7 147. 114. 9.1 2.19 1021. 0 1.6 1393 1 30
## 8 28.2 10.2 0.58 0.589 17.4 0 1.6 1153 1 30
## 9 102. 53.8 6.03 1.51 748. 70 1.6 1215 1 30
## 10 849. 471. 30.9 30.2 6745. 0 1.6 1385 1 30
## # ... with 41 more rows, and 39 more variables: Visa <dbl>, StMob <dbl>,
## # Research <dbl>, Pat <dbl>, CultServ <dbl>, CultGood <dbl>, Tourist <dbl>,
## # MigStock <dbl>, Lang <dbl>, LPI <dbl>, Flights <dbl>, Ship <dbl>,
## # Bord <dbl>, Elec <dbl>, Gas <dbl>, ConSpeed <dbl>, Cov4G <dbl>, Embs <dbl>,
## # IGOs <dbl>, UNVote <dbl>, Renew <dbl>, PrimEner <dbl>, CO2 <dbl>,
## # MatCon <dbl>, Forest <dbl>, Poverty <dbl>, Palma <dbl>, TertGrad <dbl>,
## # FreePress <dbl>, TolMin <dbl>, NGOs <dbl>, CPI <dbl>, FemLab <dbl>, ...
```
The output, here `datalist`, is a list containing the full data set `.$ind_data`, the data set `.$ind_data_only` (which includes only the numerical indicator columns), as well as unit codes, indicator codes and names, and the object type.
More usefully, we can get specific indicators:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Raw", icodes = c("Flights", "LPI"))
datalist$ind_data_only
## # A tibble: 51 x 2
## Flights LPI
## <dbl> <dbl>
## 1 29.0 4.10
## 2 31.9 4.11
## 3 9.24 2.81
## 4 9.25 3.16
## 5 8.75 3.00
## 6 15.3 3.67
## 7 32.8 3.82
## 8 3.13 3.36
## 9 18.9 3.92
## 10 97.6 3.90
## # ... with 41 more rows
```
More usefully still, we can get groups of indicators based on their groupings. For example, we can ask for indicators that belong to the “Political” group:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Raw", icodes = "Political", aglev = 1)
datalist$ind_data_only
## # A tibble: 51 x 3
## Embs IGOs UNVote
## <dbl> <dbl> <dbl>
## 1 88 227 42.6
## 2 84 248 43.0
## 3 67 209 43.0
## 4 62 197 42.7
## 5 43 172 42.3
## 6 84 201 42.2
## 7 77 259 42.8
## 8 46 194 42.9
## 9 74 269 42.7
## 10 100 329 40.4
## # ... with 41 more rows
```
To do this, you have to specify the `aglev` argument, which specifies the level to retrieve the indicators from. Before the data set is aggregated you can only select indicators anyway, but for the aggregated data set the situation is more complex. To illustrate, we can call the Connectivity sub\-index, first asking for all of its underlying indicators:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 1)
datalist$ind_data_only
## # A tibble: 51 x 30
## Goods Services FDI PRemit ForPort CostImpEx Tariff TBTs TIRcon RTAs Visa
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 40.7 33.0 6.24 16.8 34.2 100 79.2 34.8 100 64.4 86.8
## 2 80.1 58.7 5.79 40.9 55.7 100 79.2 23.2 100 64.4 86.8
## 3 47.1 28.4 15.8 26.9 4.82 91.3 79.2 35.0 100 64.4 85.7
## 4 29.7 41.6 2.26 43.9 5.45 100 79.2 32.8 100 64.4 84.6
## 5 21.6 100 43.1 33.6 33.6 83.2 79.2 35.0 100 64.4 85.7
## 6 88.9 25.5 11.6 33.9 9.12 100 79.2 17.0 100 64.4 85.7
## 7 24.4 46.0 19.0 7.73 55.1 100 79.2 20.6 100 64.4 87.9
## 8 75.4 55.4 15.4 35.8 12.3 100 79.2 34.3 100 64.4 85.7
## 9 20.8 25.9 15.7 6.50 51.9 88.2 79.2 30.7 100 64.4 87.9
## 10 15.1 21.1 6.04 15.7 45.3 100 79.2 21.0 100 64.4 87.9
## # ... with 41 more rows, and 19 more variables: StMob <dbl>, Research <dbl>,
## # Pat <dbl>, CultServ <dbl>, CultGood <dbl>, Tourist <dbl>, MigStock <dbl>,
## # Lang <dbl>, LPI <dbl>, Flights <dbl>, Ship <dbl>, Bord <dbl>, Elec <dbl>,
## # Gas <dbl>, ConSpeed <dbl>, Cov4G <dbl>, Embs <dbl>, IGOs <dbl>,
## # UNVote <dbl>
```
Or we can call all the pillars belonging to “Connectivity”, i.e. the level below:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 2)
datalist$ind_data_only
## # A tibble: 51 x 5
## ConEcFin Instit P2P Physical Political
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 26.2 77.5 54.1 41.1 78.2
## 2 48.2 75.6 43.3 72.0 80.8
## 3 24.6 75.9 27.1 28.4 67.5
## 4 24.6 76.8 39.1 47.6 62.6
## 5 46.4 74.6 59.8 29.5 48.7
## 6 33.8 74.4 39.6 41.5 71.0
## 7 30.4 75.4 51.0 50.5 78.1
## 8 38.9 77.3 53.5 42.5 55.4
## 9 24.1 75.1 39.1 44.5 77.9
## 10 20.6 75.4 28.4 42.4 87.6
## # ... with 41 more rows
```
Finally, if we want “Conn” itself, we can just call it directly with no `aglev` specified.
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Aggregated", icodes = "Conn")
datalist$ind_data_only
## # A tibble: 51 x 1
## Conn
## <dbl>
## 1 55.4
## 2 64.0
## 3 44.7
## 4 50.1
## 5 51.8
## 6 52.1
## 7 57.1
## 8 53.5
## 9 52.1
## 10 50.9
## # ... with 41 more rows
```
We can also use `getIn()` with data frames, and it will behave in more or less the same way, except a data frame has no information about the structure of the index. Here, `getIn()` returns what it can, and arguments like `dset` and `aglev` are ignored.
```
# use the ASEM indicator data frame directly
datalist <- getIn(ASEMIndData, icodes = c("LPI", "Goods"))
datalist$ind_data_only
## # A tibble: 51 x 2
## LPI Goods
## <dbl> <dbl>
## 1 4.10 278.
## 2 4.11 598.
## 3 2.81 42.8
## 4 3.16 28.4
## 5 3.00 8.77
## 6 3.67 274.
## 7 3.82 147.
## 8 3.36 28.2
## 9 3.92 102.
## 10 3.90 849.
## # ... with 41 more rows
```
15\.3 Data frame tools
----------------------
The `roundDF()` function is a small helper function for rounding data frames that contain a mix of numeric and non\-numeric columns. This is very handy for presenting tables generated by COINr in documents.
```
# use the ASEM indicator data frame directly
ASEMIndData |>
roundDF(decimals = 3) |>
reactable::reactable(defaultPageSize = 5, highlight = TRUE, wrap = F)
```
By default, numbers are rounded to two decimal places.
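As a quick check of the default on a toy data frame (not ASEM data):
```
# with the default decimals = 2, x should come back as 1.23 and y as 9.88,
# while the character column is left untouched
roundDF(data.frame(UnitCode = "AAA", x = 1.23456, y = 9.876543))
```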
A similar function, `rankDF()`, converts a data frame of numbers/scores to ranks, ignoring any non\-numeric columns.
```
ASEMIndData[c("UnitCode", "LPI", "Goods")] |>
rankDF() |>
head(10)
## UnitCode LPI Goods
## 1 AUT 7 18
## 2 BEL 6 8
## 3 BGR 44 39
## 4 HRV 35 43
## 5 CYP 37 49
## 6 CZE 20 20
## 7 DNK 13 25
## 8 EST 29 45
## 9 FIN 11 32
## 10 FRA 12 3
```
Note that there are different ways to rank values, depending on how you deal with ties. The `rankDF()` function uses so\-called “sport” ranking, where tied values are all assigned the best rank of the tied group. See `?rank` for some information on this.
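For intuition, the tie behaviour can be reproduced with base R (this is just an illustration, not the internals of `rankDF()`):
```
# "sport" ranking: tied values share the best rank of the group
x <- c(10, 8, 8, 5) # scores, where higher is better
rank(-x, ties.method = "min")
## [1] 1 2 2 4
```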
In\-group ranking is also possible using `rankDF()` by specifying a column of the data frame to use as a grouping variable.
```
ASEMIndData[c("UnitCode", "Group_GDP", "Goods", "LPI")] |>
rankDF(use_group = "Group_GDP") |>
head(10)
## # A tibble: 10 x 4
## UnitCode Group_GDP Goods LPI
## <chr> <chr> <dbl> <dbl>
## 1 AUT L 7 5
## 2 BEL L 2 4
## 3 BGR S 2 10
## 4 HRV S 5 6
## 5 CYP S 11 8
## 6 CZE M 1 2
## 7 DNK L 10 7
## 8 EST S 7 3
## 9 FIN M 7 1
## 10 FRA XL 3 4
```
A final helpful function is `compareDF()`, which gives a detailed comparison of two similar data frames. This is particularly useful when comparing results from one calculation to another. For example, let’s say you have built a composite indicator in Excel, but want to replicate it in COINr, to use some of the COINr tools. But having built it, you want to be sure that the results are exactly the same. From experience, it is actually quite common for differences to appear, which could be due to calculation errors, but also sometimes due to differences in the way that Excel calculates certain things. Anyway, how can you check that the two sets of results are the same, and if not, what are the differences?
The `compareDF()` function is intended to compare two data frames that have the same variables and rows, but possibly in a different order. Moreover, it allows comparison at a specified number of significant figures and will tell you specifically what the differences are. This is probably best clarified with an example.
```
# take a sample of indicator data (including the UnitCode column)
data1 <- ASEMIndData[c(2,12:15)]
# copy the data
data2 <- data1
# make a change: replace one value in data2 by NA
data2[1,2] <- NA
# compare data frames
compareDF(data1, data2, matchcol = "UnitCode")
## $Same
## [1] FALSE
##
## $Details
## Column TheSame Comment NDifferent
## 1 UnitCode TRUE Non-numerical and identical 0
## 2 LPI FALSE Numerical and different at 5 sf. 1
## 3 Flights TRUE Numerical and identical to 5 sf. 0
## 4 Ship TRUE Numerical and identical to 5 sf. 0
## 5 Bord TRUE Numerical and identical to 5 sf. 0
##
## $Differences
## $Differences$LPI
## UnitCode df1 df2
## 1 AUT 4.097985 NA
```
The `compareDF()` function requires a column to be specified which is used to match rows. It will then check:
* Whether the two data frames are the same size
* Whether the column names are the same, and that the matching column has the same entries
* Column by column that the elements are the same, after sorting according to the matching column
The output summarises whether the two data frames are the same (`TRUE`/`FALSE`), which columns are different and how, and finally identifies specific data points which are different and returns both values.
15\.1 R Interfaces
------------------
### 15\.1\.1 Data import
COINr has a couple of useful functions to help import and export data and metadata. You might have heard of the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en) which is an Excel\-based tool for building and analysing composite indicators, similar in fact to COINr[6](#fn6). With the `coinToolIn()` function you can import data directly from the COIN Tool to cross check or extend your analysis in COINr.
To demonstrate, we can take the example version of the COIN Tool, which you can download [here](https://composite-indicators.jrc.ec.europa.eu/sites/default/files/COIN_Tool_v1_LITE_exampledata.xlsm). Then it’s as simple as running:
```
library(COINr6)
# This is the file path and name where the COIN Tool is downloaded to
# You could also just put it in your project directory.
fname <- "C:/Users/becke/Downloads/COIN_Tool_v1_LITE_exampledata.xlsm"
dflist <- COINToolIn(fname)
## Imported 15 indicators and 28 units.
```
The output of this function is a list with the three data frame inputs to `assemble()`.
```
dflist
## $IndData
## # A tibble: 28 x 17
## UnitName UnitCode ind.01 ind.02 ind.03 ind.04 ind.05 ind.06 ind.07 ind.08
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Austria AT 13.3 80.4 492. 14.9 68.8 34 3.7 87.6
## 2 Belgium BE 15.1 71.8 503. 7 59.6 24 5.2 81.2
## 3 Bulgaria BG 12.3 78.1 440. 2.2 51.3 15 10.6 72
## 4 Cyprus CY 14 76 438. 6.9 16.7 23 3.6 73.4
## 5 Czech Repub~ CZ 13.5 87.6 491. 8.8 73.2 27 3.9 86.7
## 6 Germany DE 9.7 80.2 508. 8.5 46.3 30 5.6 90.1
## 7 Denmark DK 9.7 73 504. 27.7 40.6 39 3.8 83.9
## 8 Estonia EE 8.6 83.3 524. 15.7 35.7 37 4.1 77.1
## 9 Greece EL 11.8 70 458. 4 31.5 30 3.9 49.2
## 10 Spain ES 14.9 57.4 491. 9.4 34.8 33 11.4 68
## # ... with 18 more rows, and 7 more variables: ind.09 <dbl>, ind.10 <dbl>,
## # ind.11 <dbl>, ind.12 <dbl>, ind.13 <dbl>, ind.14 <dbl>, ind.15 <dbl>
##
## $IndMeta
## # A tibble: 15 x 9
## IndCode IndName GPupper GPlower Direction IndWeight Agg1 Agg2 Agg3
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr> <chr>
## 1 ind.01 Pre-primary pu~ NA NA -1 0.4 sp.01 p.01 Index
## 2 ind.02 Share of popul~ NA NA 1 0.3 sp.01 p.01 Index
## 3 ind.03 Reading, maths~ NA NA 1 0.3 sp.01 p.01 Index
## 4 ind.04 Recent training NA NA 1 0.3 sp.02 p.01 Index
## 5 ind.05 VET students NA NA 1 0.35 sp.02 p.01 Index
## 6 ind.06 High computer ~ NA NA 1 0.35 sp.02 p.01 Index
## 7 ind.07 Early leavers ~ NA NA -1 0.7 sp.03 p.02 Index
## 8 ind.08 Recent graduat~ NA NA 1 0.3 sp.03 p.02 Index
## 9 ind.09 Activity rate ~ NA NA 1 0.5 sp.04 p.02 Index
## 10 ind.10 Activity rate ~ NA NA 1 0.5 sp.04 p.02 Index
## 11 ind.11 Long-term unem~ NA NA -1 0.4 sp.05 p.03 Index
## 12 ind.12 Underemployed ~ NA NA -1 0.6 sp.05 p.03 Index
## 13 ind.13 Higher educati~ NA NA -1 0.4 sp.06 p.03 Index
## 14 ind.14 ISCED 5-8 prop~ NA NA -1 0.1 sp.06 p.03 Index
## 15 ind.15 Qualification ~ NA NA -1 0.5 sp.06 p.03 Index
##
## $AggMeta
## # A tibble: 10 x 4
## AgLevel Code Name Weight
## <dbl> <chr> <chr> <dbl>
## 1 4 Index European Skills Index 1
## 2 3 p.01 Skills Development 0.3
## 3 3 p.02 Skills Activation 0.3
## 4 3 p.03 Skills Matching 0.4
## 5 2 sp.01 Compulsory education 0.5
## 6 2 sp.02 Training and tertiary education 0.5
## 7 2 sp.03 Transition to work 0.5
## 8 2 sp.04 Activity rates 0.5
## 9 2 sp.05 Unemployment 0.4
## 10 2 sp.06 Skills mismatch 0.6
```
Because the COIN Tool uses numeric codes for indicators such as `ind.01`, you might want slightly more informative codes. The best way to do this is to name the codes yourself, but a quick solution is to set `makecodes = TRUE` in `COINToolIn()`. This generates short codes based on the indicator names. It will not yield perfect results, but for a quick analysis it might be sufficient. At least, you could use this and then modify the results by hand.
```
dflist <- COINToolIn(fname, makecodes = TRUE)
## Imported 15 indicators and 28 units.
dflist$IndMeta$IndCode
## [1] "Pre-Pupi" "SharPopu" "ReadMath" "ReceTrai" "Stud"
## [6] "HighComp" "EarlLeav" "ReceGrad" "ActiRate" "ActiRate_1"
## [11] "LongUnem" "UndePart" "HighEduc" "Isce5-Pr" "QualMism"
```
While the codes could certainly be improved, it’s a lot better than uninformative numbers. Finally, we can assemble the output into a COIN and begin the construction.
```
ESI <- assemble(dflist[[1]],dflist[[2]],dflist[[3]],)
## -----------------
## No denominators detected.
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 15
## Number of units = 28
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 6 aggregate groups: sp.01, sp.02, sp.03, sp.04, sp.05, sp.06
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 3 aggregate groups: p.01, p.02, p.03
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
```
### 15\.1\.2 Export to Excel
Trigger warning for R purists! Sometimes it’s easier to look at your data in Excel. There, I said it. R is great for doing all kinds of complicated tasks, but if you just want to look at big tables of numbers and play around with them, maybe make a few quick graphs, then Excel is a great tool.
Actually Excel is kind of underrated by many people who are used to programming in R or Python, Matlab or even Stata. It has a lot of clever tools that not many people know about. But more importantly, Excel is a *lingua franca* between all kinds of professions \- you can pass an Excel spreadsheet to almost anyone and they will be able to take a look at it and use the data. Try doing that with an R or Python script.
It just boils down to using the right tool for the right job. Anyway, with that aside, let’s look at COINr’s `coin2Excel()` function. You put in your COIN, and it will write a spreadsheet.
```
# Build ASEM index
ASEM <- build_ASEM()
# Get some statistics
ASEM <- getStats(ASEM, dset = "Raw")
# write to Excel
coin2Excel(ASEM, fname = "ASEMresults.xlsx")
```
The spreadsheet will contain a number of tabs, including:
* The indicator data, metadata and aggregation metadata that was input to COINr
* All data sets in the `.$Data` folder, e.g. raw, treated, normalised, aggregated, etc.
* (almost) All data frames found in the `.$Analysis` folder, i.e. statistics tables, outlier flags, correlation tables.
15\.2 Selecting data sets and indicators
----------------------------------------
The `getIn()` function is widely used by many `COINr` functions. It is used for selecting specific data sets, and returning subsets of indicators. While some of this can be achieved fairly easily with base R, or `dplyr::select()`, subsetting in a hierarchical context can be more awkward. That’s where `getIn()` steps in to help.
Although it was made to be used internally, it might also help in other contexts. Note that this can work on COINs or data frames, but is most useful with COINs.
Let’s take some examples. First, we can get a whole data set. `getIn()` will retrieve any of the data sets in the `.$Data` folder, as well as the denominators.
```
# Build data set first, if not already done
ASEM <- build_ASEM()
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
# Get raw data set
datalist <- getIn(ASEM, dset = "Raw")
datalist$ind_data_only
## # A tibble: 51 x 49
## Goods Services FDI PRemit ForPort CostImpEx Tariff TBTs TIRcon RTAs
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 278. 108. 5 5.07 809. 0 1.6 1144 1 30
## 2 598. 216. 5.71 13.4 1574. 0 1.6 1348 1 30
## 3 42.8 13.0 1.35 1.04 15.5 52 1.6 1140 1 30
## 4 28.4 17.4 0.387 1.56 16.9 0 1.6 1179 1 30
## 5 8.77 15.2 1.23 0.477 40.8 100 1.6 1141 1 30
## 6 274. 43.5 3.88 4.69 108. 0 1.6 1456 1 30
## 7 147. 114. 9.1 2.19 1021. 0 1.6 1393 1 30
## 8 28.2 10.2 0.58 0.589 17.4 0 1.6 1153 1 30
## 9 102. 53.8 6.03 1.51 748. 70 1.6 1215 1 30
## 10 849. 471. 30.9 30.2 6745. 0 1.6 1385 1 30
## # ... with 41 more rows, and 39 more variables: Visa <dbl>, StMob <dbl>,
## # Research <dbl>, Pat <dbl>, CultServ <dbl>, CultGood <dbl>, Tourist <dbl>,
## # MigStock <dbl>, Lang <dbl>, LPI <dbl>, Flights <dbl>, Ship <dbl>,
## # Bord <dbl>, Elec <dbl>, Gas <dbl>, ConSpeed <dbl>, Cov4G <dbl>, Embs <dbl>,
## # IGOs <dbl>, UNVote <dbl>, Renew <dbl>, PrimEner <dbl>, CO2 <dbl>,
## # MatCon <dbl>, Forest <dbl>, Poverty <dbl>, Palma <dbl>, TertGrad <dbl>,
## # FreePress <dbl>, TolMin <dbl>, NGOs <dbl>, CPI <dbl>, FemLab <dbl>, ...
```
The output, here `datalist`, is a list containing the full data set `.$ind_data`, the data set `.$ind_data_only` (which only includes the numerical indicator columns), as well as unit codes, indicator codes and names, and the object type.
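If you want a quick overview of everything that `getIn()` returns, you can inspect the structure of the list directly (a small sketch; element names other than those mentioned above may differ slightly):
```
# inspect the top level of the getIn() output
str(datalist, max.level = 1)
```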
More usefully, we can get specific indicators:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Raw", icodes = c("Flights", "LPI"))
datalist$ind_data_only
## # A tibble: 51 x 2
## Flights LPI
## <dbl> <dbl>
## 1 29.0 4.10
## 2 31.9 4.11
## 3 9.24 2.81
## 4 9.25 3.16
## 5 8.75 3.00
## 6 15.3 3.67
## 7 32.8 3.82
## 8 3.13 3.36
## 9 18.9 3.92
## 10 97.6 3.90
## # ... with 41 more rows
```
More usefully still, we can get groups of indicators based on their groupings. For example, we can ask for indicators that belong to the “Political” group:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Raw", icodes = "Political", aglev = 1)
datalist$ind_data_only
## # A tibble: 51 x 3
## Embs IGOs UNVote
## <dbl> <dbl> <dbl>
## 1 88 227 42.6
## 2 84 248 43.0
## 3 67 209 43.0
## 4 62 197 42.7
## 5 43 172 42.3
## 6 84 201 42.2
## 7 77 259 42.8
## 8 46 194 42.9
## 9 74 269 42.7
## 10 100 329 40.4
## # ... with 41 more rows
```
To do this, you have to specify the `aglev` argument, which sets the aggregation level to retrieve the indicators from. Before the data is aggregated you can in any case only select indicators, but for the aggregated data set the situation is more complex. To illustrate, we can call the Connectivity sub\-index, first asking for all its underlying indicators:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 1)
datalist$ind_data_only
## # A tibble: 51 x 30
## Goods Services FDI PRemit ForPort CostImpEx Tariff TBTs TIRcon RTAs Visa
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 40.7 33.0 6.24 16.8 34.2 100 79.2 34.8 100 64.4 86.8
## 2 80.1 58.7 5.79 40.9 55.7 100 79.2 23.2 100 64.4 86.8
## 3 47.1 28.4 15.8 26.9 4.82 91.3 79.2 35.0 100 64.4 85.7
## 4 29.7 41.6 2.26 43.9 5.45 100 79.2 32.8 100 64.4 84.6
## 5 21.6 100 43.1 33.6 33.6 83.2 79.2 35.0 100 64.4 85.7
## 6 88.9 25.5 11.6 33.9 9.12 100 79.2 17.0 100 64.4 85.7
## 7 24.4 46.0 19.0 7.73 55.1 100 79.2 20.6 100 64.4 87.9
## 8 75.4 55.4 15.4 35.8 12.3 100 79.2 34.3 100 64.4 85.7
## 9 20.8 25.9 15.7 6.50 51.9 88.2 79.2 30.7 100 64.4 87.9
## 10 15.1 21.1 6.04 15.7 45.3 100 79.2 21.0 100 64.4 87.9
## # ... with 41 more rows, and 19 more variables: StMob <dbl>, Research <dbl>,
## # Pat <dbl>, CultServ <dbl>, CultGood <dbl>, Tourist <dbl>, MigStock <dbl>,
## # Lang <dbl>, LPI <dbl>, Flights <dbl>, Ship <dbl>, Bord <dbl>, Elec <dbl>,
## # Gas <dbl>, ConSpeed <dbl>, Cov4G <dbl>, Embs <dbl>, IGOs <dbl>,
## # UNVote <dbl>
```
Or we can call all the pillars belonging to “Connectivity”, i.e. the level below:
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 2)
datalist$ind_data_only
## # A tibble: 51 x 5
## ConEcFin Instit P2P Physical Political
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 26.2 77.5 54.1 41.1 78.2
## 2 48.2 75.6 43.3 72.0 80.8
## 3 24.6 75.9 27.1 28.4 67.5
## 4 24.6 76.8 39.1 47.6 62.6
## 5 46.4 74.6 59.8 29.5 48.7
## 6 33.8 74.4 39.6 41.5 71.0
## 7 30.4 75.4 51.0 50.5 78.1
## 8 38.9 77.3 53.5 42.5 55.4
## 9 24.1 75.1 39.1 44.5 77.9
## 10 20.6 75.4 28.4 42.4 87.6
## # ... with 41 more rows
```
Finally, if we want “Conn” itself, we can just call it directly with no `aglev` specified.
```
# Get raw data set
datalist <- getIn(ASEM, dset = "Aggregated", icodes = "Conn")
datalist$ind_data_only
## # A tibble: 51 x 1
## Conn
## <dbl>
## 1 55.4
## 2 64.0
## 3 44.7
## 4 50.1
## 5 51.8
## 6 52.1
## 7 57.1
## 8 53.5
## 9 52.1
## 10 50.9
## # ... with 41 more rows
```
We can also use `getIn()` with data frames, and it will behave in more or less the same way, except a data frame has no information about the structure of the index. Here, `getIn()` returns what it can, and arguments like `dset` and `aglev` are ignored.
```
# use the ASEM indicator data frame directly
datalist <- getIn(ASEMIndData, icodes = c("LPI", "Goods"))
datalist$ind_data_only
## # A tibble: 51 x 2
## LPI Goods
## <dbl> <dbl>
## 1 4.10 278.
## 2 4.11 598.
## 3 2.81 42.8
## 4 3.16 28.4
## 5 3.00 8.77
## 6 3.67 274.
## 7 3.82 147.
## 8 3.36 28.2
## 9 3.92 102.
## 10 3.90 849.
## # ... with 41 more rows
```
15\.3 Data frame tools
----------------------
The `roundDF()` function is a small helper function for rounding data frames that contain a mix of numeric and non\-numeric columns. This is very handy for presenting tables generated by COINr in documents.
```
# use the ASEM indicator data frame directly
ASEMIndData |>
roundDF(decimals = 3) |>
reactable::reactable(defaultPageSize = 5, highlight = TRUE, wrap = F)
```
By default, numbers are rounded to two decimal places.
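For example, calling `roundDF()` with no `decimals` argument should give the default of two decimal places. A small sketch using a few columns:
```
# default rounding to two decimal places
ASEMIndData[1:3, c("UnitCode", "LPI", "Goods")] |>
  roundDF()
```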
A similar function, `rankDF()`, converts a data frame of numbers/scores to ranks, ignoring any non\-numeric columns.
```
ASEMIndData[c("UnitCode", "LPI", "Goods")] |>
rankDF() |>
head(10)
## UnitCode LPI Goods
## 1 AUT 7 18
## 2 BEL 6 8
## 3 BGR 44 39
## 4 HRV 35 43
## 5 CYP 37 49
## 6 CZE 20 20
## 7 DNK 13 25
## 8 EST 29 45
## 9 FIN 11 32
## 10 FRA 12 3
```
Note that there are different ways to rank values, depending on how you deal with ties. The `rankDF()` function uses so\-called “sport” ranking, where tied values all share the same rank (the best position in the tied group) and the following ranks are skipped. See `?rank` for some information on this.
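To see what “sport” ranking means in practice, here is a small base R illustration (`rankDF()` gives the highest scores the best ranks, as in the output above):
```
# "sport" (a.k.a. "1224") ranking: tied values share the best rank,
# and the next rank is skipped
rank(-c(10, 10, 8), ties.method = "min")
## [1] 1 1 3
```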
In\-group ranking is also possible using `rankDF()` by specifying a column of the data frame to use as a grouping variable.
```
ASEMIndData[c("UnitCode", "Group_GDP", "Goods", "LPI")] |>
rankDF(use_group = "Group_GDP") |>
head(10)
## # A tibble: 10 x 4
## UnitCode Group_GDP Goods LPI
## <chr> <chr> <dbl> <dbl>
## 1 AUT L 7 5
## 2 BEL L 2 4
## 3 BGR S 2 10
## 4 HRV S 5 6
## 5 CYP S 11 8
## 6 CZE M 1 2
## 7 DNK L 10 7
## 8 EST S 7 3
## 9 FIN M 7 1
## 10 FRA XL 3 4
```
A final helpful function is `compareDF()`, which gives a detailed comparison of two similar data frames. This is particularly useful when comparing results from one calculation to another. For example, let’s say you have built a composite indicator in Excel, but want to replicate it in COINr, to use some of the COINr tools. But having built it, you want to be sure that the results are exactly the same. From experience, it is actually quite common for differences to appear, which could be due to calculation errors, but also sometimes due to differences in the way that Excel calculates certain things. Anyway, how can you check that the two sets of results are the same, and if not, what are the differences?
The `compareDF()` function is intended to compare two data frames that have the same variables and rows, but possibly in a different order. Moreover, it allows you to compare values up to a specified number of significant figures, and it will tell you specifically what the differences are. This is probably best clarified with an example.
```
# take a sample of indicator data (including the UnitCode column)
data1 <- ASEMIndData[c(2,12:15)]
# copy the data
data2 <- data1
# make a change: replace one value in data2 by NA
data2[1,2] <- NA
# compare data frames
compareDF(data1, data2, matchcol = "UnitCode")
## $Same
## [1] FALSE
##
## $Details
## Column TheSame Comment NDifferent
## 1 UnitCode TRUE Non-numerical and identical 0
## 2 LPI FALSE Numerical and different at 5 sf. 1
## 3 Flights TRUE Numerical and identical to 5 sf. 0
## 4 Ship TRUE Numerical and identical to 5 sf. 0
## 5 Bord TRUE Numerical and identical to 5 sf. 0
##
## $Differences
## $Differences$LPI
## UnitCode df1 df2
## 1 AUT 4.097985 NA
```
The `compareDF()` function requires a column to be specified which is used to match rows. It will then check:
* Whether the two data frames are the same size
* Whether the column names are the same, and that the matching column has the same entries
* Column by column that the elements are the same, after sorting according to the matching column
The output summarises whether the two data frames are the same (`TRUE`/`FALSE`), which columns are different and how, and finally identifies specific data points which are different and returns both values.
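Since the output is just a list, it can also be used programmatically. A minimal sketch based on the fields shown above:
```
# re-run the comparison and inspect the result programmatically
cmp <- compareDF(data1, data2, matchcol = "UnitCode")
if (!cmp$Same) {
  # names of the columns flagged as different
  print(cmp$Details$Column[!cmp$Details$TheSame])
  # detailed differences for each differing column
  print(cmp$Differences)
}
```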
| Social Science |
bluefoxr.github.io | https://bluefoxr.github.io/COINrDoc/appendix-building-a-composite-indicator-example.html |
Chapter 18 Appendix: Building a Composite Indicator Example
===========================================================
Here we go through the main steps to building a composite indicator in COINr. This is effectively a condensed version of some earlier chapters in this book.
18\.1 Before COINr (BC)
-----------------------
Before even getting to COINr you will have to take a number of steps. The steps can be summarised by e.g. the European Commission’s [Ten Step Guide](https://knowledge4policy.ec.europa.eu/publication/your-10-step-pocket-guide-composite-indicators-scoreboards_en). In short, before getting to COINr you should have:
1. Tried to summarise all elements of the concept you are trying to capture, including by e.g.
* A literature review
* Talking to experts and stakeholders if possible
* Reviewing any other indicator frameworks on the topic
2. Constructed a “conceptual framework” using the information in 1\. For a composite indicator, this should be a hierarchical structure.
3. Gathered indicator data (these need not be the final set of indicators)
Depending on how thorough you are, and how accessible your data is, this process can take a surprisingly long time. On these topics, there is a lot of good training material available through the European Commission’s Competence Centre for Composite Indicators and Scoreboards [here](https://knowledge4policy.ec.europa.eu/composite-indicators/2019-jrc-week-composite-indicators-scoreboards_en).
18\.2 Load data and assemble
----------------------------
Having got a set of preliminary indicator data, the next step is to load data into R in some way. Remember that to build a composite indicator with COINr you need to first build a COIN. To build a COIN you need three data frames, as explained in detail in [COINs: the currency of COINr](coins-the-currency-of-coinr.html#coins-the-currency-of-coinr):
* Indicator data
* Indicator metadata
* Aggregation metadata
Where these data frames come from is up to you. You might want to load data into R and then assemble them using your own script. Alternatively, you may wish to assemble these in Excel, for example, then read the sheets of the spreadsheet into R.
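As an example of the Excel route, a minimal sketch might look like the following, assuming a workbook `my_index.xlsx` with one sheet per data frame (the file name and sheet names here are purely illustrative):
```
# read the three input data frames from an Excel workbook
library(readxl)
IndData <- read_excel("my_index.xlsx", sheet = "IndData")
IndMeta <- read_excel("my_index.xlsx", sheet = "IndMeta")
AggMeta <- read_excel("my_index.xlsx", sheet = "AggMeta")
```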
Let’s assume now that you have got these three data frames in the correct formats. Now you simply put them into `assemble()` to build your COIN. Here I will use the built\-in data frames in COINr:
```
library(COINr6)
ASEM <- assemble(IndData = ASEMIndData,
IndMeta = ASEMIndMeta,
AggMeta = ASEMAggMeta)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
```
18\.3 Check the data
--------------------
At this point, it’s worth checking the COIN to make sure that everything is as you would expect. One thing is to check the framework:
```
plotframework(ASEM)
```
You can also explore the indicator data using COINr’s `indDash()` app:
```
# not run here because requires an active R session
indDash(ASEM)
```
I would also recommend checking the statistics of each indicator using `getStats()`. Look for any indicators that are highly skewed, have few unique values, have low data availability, or have strong correlations with other indicators or with denominators.
```
# load reactable for viewing table (in R you can do this instead with R Studio)
library(reactable)
library(magrittr)
# get stats
ASEM <- getStats(ASEM, dset = "Raw")
## Number of collinear indicators = 3
## Number of signficant negative indicator correlations = 322
## Number of indicators with high denominator correlations = 7
# view stats table
ASEM$Analysis$Raw$StatTable %>%
roundDF() %>%
reactable()
```
At this point you may decide to check individual indicators, and some may be added or excluded.
18\.4 Denomination
------------------
If you want to divide indicators by e.g. GDP or population to be able to compare small countries with larger ones, you can use `denominate()`. The specifications are either made initially in `IndMeta`, or as arguments to `denominate()`. In the case of the ASEM data set, these are included in `IndMeta` so the command is very simple (run `View(ASEMIndMeta)` to see). We will afterwards check the new stats to see what has changed.
```
# create denominated data set
ASEM <- denominate(ASEM, dset = "Raw")
# get stats of denominated data
ASEM <- getStats(ASEM, dset = "Denominated")
## Number of collinear indicators = 0
## Number of signficant negative indicator correlations = 440
## Number of indicators with high denominator correlations = 0
# view stats table
ASEM$Analysis$Denominated$StatTable %>%
roundDF() %>%
reactable()
```
According to the new table, there are now no high correlations with denominators, which indicates some kind of success.
18\.5 Imputation
----------------
If you have missing data (and if you don’t, you are fortunate), you can choose to impute the data. If you *don’t* impute missing data, you can still build a composite indicator though.
We can use one of COINr’s options for imputing data. Like other commands, we operate on the COIN and the output is an updated COIN. Actually at this point I will make two alternatives \- one which imputes missing data using a group mean, and another which performs no imputation.
```
# Make a copy
ASEM_noImpute <- ASEM
# impute one
ASEM <- impute(ASEM, dset = "Denominated", imtype = "indgroup_mean", groupvar = "Group_GDP")
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
```
18\.6 Data treatment
--------------------
Next we may wish to treat the data for outliers. Here we apply a standard approach which Winsorises each indicator up to a specified limit of points, in order to bring skew and kurtosis below specified thresholds. If Winsorisation fails, it applies a log transformation or similar.
```
ASEM <- treat(ASEM, dset = "Imputed", winmax = 5)
ASEM_noImpute <- treat(ASEM_noImpute, dset = "Denominated", winmax = 5)
```
Following treatment, it is a good idea to check which indicators were treated and how:
```
library(dplyr)
ASEM$Analysis$Treated$TreatSummary %>%
filter(Treatment != "None")
## IndCode Low High TreatSpec Treatment
## V2 Services 0 4 Default, winmax = 5 Winsorised 4 points
## V3 FDI 0 2 Default, winmax = 5 Winsorised 2 points
## V5 ForPort 0 5 Default, winmax = 5 Winsorised 3 points
## V6 CostImpEx 0 1 Default, winmax = 5 Winsorised 1 points
## V7 Tariff 0 4 Default, winmax = 5 Winsorised 4 points
## V12 StMob 0 2 Default, winmax = 5 Winsorised 2 points
## V15 CultServ 0 4 Default, winmax = 5 Winsorised 3 points
## V21 Flights 0 1 Default, winmax = 5 Winsorised 1 points
## V23 Bord 0 3 Default, winmax = 5 Winsorised 3 points
## V25 Gas 0 3 Default, winmax = 5 Winsorised 3 points
## V35 Forest 0 2 Default, winmax = 5 Winsorised 2 points
## V36 Poverty 0 3 Default, winmax = 5 Winsorised 3 points
## V41 NGOs 0 3 Default, winmax = 5 Winsorised 3 points
## V43 FemLab 1 0 Default, winmax = 5 Winsorised 1 points
## V45 PubDebt 0 1 Default, winmax = 5 Winsorised 1 points
```
It is also a good idea to visualise and compare the treated data against the untreated data. The best way to do this interactively is to call `indDash()` again, which allows comparison of treated and untreated indicators side by side. We can also do this manually (or for presentation) for specific indicators:
```
iplotIndDist2(ASEM, dsets = c("Imputed", "Treated"), icodes = "Services", ptype = "Scatter")
```
This shows the Winsorisation of four points for the “Services” indicator. This could also be plotted in different ways using box plots or violin plots.
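A sketch of what that might look like is below; note that the exact `ptype` option names (e.g. “Box” or “Violin”) are assumptions here, so check the `iplotIndDist2()` documentation for the accepted values.
```
# same comparison as above, but as box plots (ptype value is assumed)
iplotIndDist2(ASEM, dsets = c("Imputed", "Treated"), icodes = "Services", ptype = "Box")
```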
18\.7 Normalisation
-------------------
The next step would be to normalise the data. In the ASEM index we will use a simple min\-max normalisation in the \\(\[0, 100]\\) interval.
```
ASEM <- normalise(ASEM, dset = "Treated", ntype = "minmax", npara = list(minmax = c(0,100)))
ASEM_noImpute <- normalise(ASEM_noImpute, dset = "Treated", ntype = "minmax", npara = list(minmax = c(0,100)))
```
Again, we could visualise and check stats here but to keep things shorter we’ll skip that for now.
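If you did want a quick check at this stage, the same pattern from earlier could be reused. A sketch, following the `.$Analysis` convention seen above:
```
# get stats of the normalised data and view them
ASEM <- getStats(ASEM, dset = "Normalised")
ASEM$Analysis$Normalised$StatTable %>%
  roundDF() %>%
  reactable()
```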
18\.8 Aggregation
-----------------
The last construction step (apart from iterative changes) is to aggregate. Again we use a simple arithmetic mean. The structure of the index is stored in the `IndMeta` data frame inside the COIN, so we only need to specify which data set to aggregate, and which method to use.
```
ASEM <- aggregate(ASEM, agtype = "arith_mean", dset = "Normalised")
ASEM_noImpute <- aggregate(ASEM_noImpute, agtype = "arith_mean", dset = "Normalised")
```
18\.9 Visualisation
-------------------
We can now visualise our results. A good way to view them at the index level is a stacked bar chart.
```
iplotBar(ASEM, dset = "Aggregated", isel = "Index", aglev = 4, stack_children = T)
```
This can look a bit strange because in fact the ASEM index was only aggregated up to the sustainability and connectivity sub\-indexes. We can also plot that:
```
iplotIndDist2(ASEM, dsets = "Aggregated", icodes = "Index")
```
We may also plot the results on a map. Here we’ll only plot connectivity:
```
iplotMap(ASEM, dset = "Aggregated", isel = "Conn")
```
For a bit more detail we can also generate a table.
```
getResults(ASEM, tab_type = "Summary") %>%
knitr::kable()
```
| UnitCode | UnitName | Index | Rank |
| --- | --- | --- | --- |
| CHE | Switzerland | 67\.70 | 1 |
| DNK | Denmark | 64\.15 | 2 |
| NLD | Netherlands | 63\.70 | 3 |
| NOR | Norway | 63\.66 | 4 |
| BEL | Belgium | 62\.88 | 5 |
| SWE | Sweden | 62\.43 | 6 |
| LUX | Luxembourg | 61\.51 | 7 |
| AUT | Austria | 61\.47 | 8 |
| DEU | Germany | 60\.36 | 9 |
| MLT | Malta | 60\.14 | 10 |
| SVN | Slovenia | 60\.04 | 11 |
| IRL | Ireland | 60\.02 | 12 |
| SGP | Singapore | 59\.00 | 13 |
| FIN | Finland | 58\.17 | 14 |
| GBR | United Kingdom | 57\.15 | 15 |
| LTU | Lithuania | 56\.87 | 16 |
| CZE | Czech Republic | 56\.47 | 17 |
| HRV | Croatia | 55\.51 | 18 |
| HUN | Hungary | 55\.22 | 19 |
| EST | Estonia | 55\.11 | 20 |
| FRA | France | 54\.82 | 21 |
| LVA | Latvia | 54\.72 | 22 |
| SVK | Slovakia | 54\.67 | 23 |
| ROU | Romania | 54\.46 | 24 |
| ESP | Spain | 54\.04 | 25 |
| POL | Poland | 53\.87 | 26 |
| PRT | Portugal | 53\.03 | 27 |
| ITA | Italy | 51\.38 | 28 |
| BGR | Bulgaria | 50\.68 | 29 |
| CYP | Cyprus | 50\.63 | 30 |
| KOR | Korea | 50\.20 | 31 |
| NZL | New Zealand | 48\.94 | 32 |
| GRC | Greece | 48\.64 | 33 |
| JPN | Japan | 47\.24 | 34 |
| KHM | Cambodia | 45\.63 | 35 |
| BRN | Brunei Darussalam | 45\.56 | 36 |
| AUS | Australia | 45\.42 | 37 |
| VNM | Vietnam | 43\.55 | 38 |
| LAO | Lao PDR | 42\.52 | 39 |
| MYS | Malaysia | 42\.25 | 40 |
| MMR | Myanmar | 41\.95 | 41 |
| PHL | Philippines | 41\.33 | 42 |
| THA | Thailand | 41\.27 | 43 |
| MNG | Mongolia | 40\.46 | 44 |
| IDN | Indonesia | 39\.93 | 45 |
| IND | India | 39\.48 | 46 |
| KAZ | Kazakhstan | 38\.95 | 47 |
| BGD | Bangladesh | 38\.92 | 48 |
| CHN | China | 37\.83 | 49 |
| PAK | Pakistan | 37\.69 | 50 |
| RUS | Russian Federation | 35\.45 | 51 |
18\.10 Comparison
-----------------
Since we built two slightly different indexes, it makes sense also to check the difference. How do the ranks change if we do or do not impute?
```
compTable(ASEM, ASEM_noImpute, dset = "Aggregated", isel = "Index")
## UnitCode UnitName RankCOIN1 RankCOIN2 RankChange AbsRankChange
## 29 LAO Lao PDR 39 45 -6 6
## 22 IND India 46 42 4 4
## 41 PHL Philippines 42 39 3 3
## 1 AUS Australia 37 35 2 2
## 6 BRN Brunei Darussalam 36 38 -2 2
## 16 FRA France 21 19 2 2
## 27 KHM Cambodia 35 37 -2 2
## 33 MLT Malta 10 12 -2 2
## 34 MMR Myanmar 41 43 -2 2
## 35 MNG Mongolia 44 46 -2 2
## 50 THA Thailand 43 41 2 2
## 51 VNM Vietnam 38 36 2 2
## 2 AUT Austria 8 7 1 1
## 5 BGR Bulgaria 29 30 -1 1
## 9 CYP Cyprus 30 29 1 1
## 12 DNK Denmark 2 3 -1 1
## 13 ESP Spain 25 24 1 1
## 14 EST Estonia 20 21 -1 1
## 18 GRC Greece 33 32 1 1
## 20 HUN Hungary 19 20 -1 1
## 21 IDN Indonesia 45 44 1 1
## 23 IRL Ireland 12 11 1 1
## 31 LUX Luxembourg 7 8 -1 1
## 32 LVA Latvia 22 23 -1 1
## 37 NLD Netherlands 3 2 1 1
## 39 NZL New Zealand 32 33 -1 1
## 44 ROU Romania 24 25 -1 1
## 47 SVK Slovakia 23 22 1 1
## 48 SVN Slovenia 11 10 1 1
## 3 BEL Belgium 5 5 0 0
## 4 BGD Bangladesh 48 48 0 0
## 7 CHE Switzerland 1 1 0 0
## 8 CHN China 49 49 0 0
## 10 CZE Czech Republic 17 17 0 0
## 11 DEU Germany 9 9 0 0
## 15 FIN Finland 14 14 0 0
## 17 GBR United Kingdom 15 15 0 0
## 19 HRV Croatia 18 18 0 0
## 24 ITA Italy 28 28 0 0
## 25 JPN Japan 34 34 0 0
## 26 KAZ Kazakhstan 47 47 0 0
## 28 KOR Korea 31 31 0 0
## 30 LTU Lithuania 16 16 0 0
## 36 MYS Malaysia 40 40 0 0
## 38 NOR Norway 4 4 0 0
## 40 PAK Pakistan 50 50 0 0
## 42 POL Poland 26 26 0 0
## 43 PRT Portugal 27 27 0 0
## 45 RUS Russian Federation 51 51 0 0
## 46 SGP Singapore 13 13 0 0
## 49 SWE Sweden 6 6 0 0
```
This shows that the maximum rank change is 6 places, at the index level.
18\.11 Export
-------------
Most likely, not everyone you present the results to will want to see them in R. COINr has a simple and quick way to export the entire contents of the COIN to Excel (effectively, it exports all data frames that are present). We will first generate a results table of the main results, attach it to the COIN, then export to Excel. Note that the results are anyway present in `ASEM$Data$Aggregated`, but the `getResults()` function provides tables that are better for presentation (the highest levels of aggregation are the first columns, rather than the last, and the table is sorted by index score).
```
# Write full results table to COIN
getResults(ASEM, tab_type = "FullWithDenoms", out2 = "COIN")
# Export entire COIN to Excel
coin2Excel(ASEM, "ASEM_results.xlsx")
```
18\.12 Summary
--------------
This is a fast example of COINr functions for building a composite indicator. Normally this would be an iterative process, but it showcases how simple commands can be used to do fairly complex operations in many cases. There is a lot more to all of the functions used here, and you should check the respective chapters of this book to better tune them to your needs.
| Social Science |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/setup-packages.html |
6 Setup: Packages
=================
*Purpose*: Every time you start an analysis, you will need to *load packages*.
This is a quick warmup to get you in this habit.
*Reading*: (None)
**Note**: If you open this `.Rmd` file in RStudio you can *execute* the chunks of code (stuff between triple\-ticks) by clicking on the green arrow to the top\-right of the chunk, or by placing your cursor within a chunk (between the ticks) and pressing `Ctrl + Shift + Enter`. See [here](https://rmarkdown.rstudio.com/authoring_quick_tour.html) for a brief introduction to RMarkdown files. Note that in RStudio you can Shift \+ Click (CMD \+ Click) to follow a link.
### 6\.0\.1 **q1** Create a new code chunk and prepare to load the `tidyverse`.
In [RStudio](https://bookdown.org/yihui/rmarkdown/r-code.html) use the shortcut
`Ctrl + Alt + I` (`Cmd + Option + I` on Mac). Type the command
`library(tidyverse)`.
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
Make sure to load packages at the *beginning* of your notebooks! This is a
best\-practice.
### 6\.0\.2 **q2** Run the chunk with `library(tidyverse)`. What packages are loaded?
In
[RStudio](https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts)
press `Ctrl + Alt + T` (`Cmd + Option + T` on Mac) to run the code chunk at your
cursor.
ggplot2, tibble, tidyr, readr, purrr, dplyr, stringr, forcats
### 6\.0\.3 **q3** What are the main differences between `install.packages()` and `library()`? How often do you need to call each one?
`install.packages()` downloads and installs packages into R’s library of
packages on your computer. After a package is installed it is available on
your machine, but not yet loaded. `library()` loads a package from the
library for use. The package remains loaded until your session ends, such as
when you quit R. You only need to call `install.packages()` once per package.
You need to call `library()` in each new session.
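A minimal sketch of the difference in practice:
```
# install once per computer (commented out so it doesn't re-run every time)
# install.packages("tidyverse")
# load in every new R session
library(tidyverse)
```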
Packages are frequently updated. You can check for updates and update your
packages by using the Update button in the Packages tab of RStudio (lower
right pane).
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/setup-function-basics.html |
7 Setup: Function Basics
========================
*Purpose*: Functions are our primary tool in carrying out data analysis with the
`tidyverse`. It is unreasonable to expect yourself to memorize every function
and all its details. To that end, we’ll learn some basic *function literacy* in
R; how to inspect a function, look up its documentation, and find examples of a
function’s use.
*Reading*: [Programming Basics](https://rstudio.cloud/learn/primers/1.2).
*Topics*: `functions`, `arguments`
*Reading Time*: \~ 10 minutes
### 7\.0\.1 **q1** How do you find help on a function? Get help on the built\-in `rnorm` function.
```
?rnorm
```
### 7\.0\.2 **q2** How do you show the source code for a function?
```
rnorm
```
```
## function (n, mean = 0, sd = 1)
## .Call(C_rnorm, n, mean, sd)
## <bytecode: 0x7fd6458d8b00>
## <environment: namespace:stats>
```
### 7\.0\.3 **q3** Using either the documentation or the source, determine the arguments for `rnorm`.
The arguments for `rnorm` are (a short usage example follows the list):
* `n`: The number of samples to draw
* `mean`: The mean of the normal
* `sd`: The standard deviation of the normal
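For instance, a quick call using all three arguments:
```
# draw 5 values from a normal distribution with mean 10 and sd 2
rnorm(n = 5, mean = 10, sd = 2)
```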
### 7\.0\.4 **q4** Scroll to the bottom of the help for the `library()` function. How do you
list all available packages?
Calling `library()` without arguments lists all the available packages.
The **examples** in the help documentation are often *extremely* helpful for
learning how to use a function (or reminding yourself how it's used)! Get used to
checking the examples, as they’re a great resource.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-basics.html |
8 Data: Basics
==============
*Purpose*: When first studying a new dataset, there are very simple checks we
should perform first. These are those checks.
Additionally, we’ll have our first look at the *pipe operator*, which will be
super useful for writing code that’s readable.
*Reading*: (None)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
8\.1 First Checks
-----------------
### 8\.1\.1 **q0** Run the following chunk:
*Hint*: You can do this either by clicking the green arrow at the top\-right of
the chunk, or by using the keyboard shortcut `Shift` \+ `Cmd/Ctrl` \+ `Enter`.
```
head(iris)
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3.0 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5.0 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
```
This is a *dataset*; the fundamental object we’ll study throughout this course.
Some nomenclature:
* The `1, 2, 3, ...` on the left enumerate the **rows** of the dataset
* The names `Sepal.Length`, `Sepal.Width`, `...` name the **columns** of the dataset
* The column `Sepal.Length` takes **numeric** values
* The column `Species` takes **string** values
### 8\.1\.2 **q1** Load the `tidyverse` and inspect the `diamonds` dataset. What do the
`cut`, `color`, and `clarity` variables mean?
*Hint*: You can run `?diamonds` to get information on a built\-in dataset.
```
?diamonds
```
### 8\.1\.3 **q2** Run `glimpse(diamonds)`; what variables does `diamonds` have?
```
glimpse(diamonds)
```
```
## Rows: 53,940
## Columns: 10
## $ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.…
## $ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Ver…
## $ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I,…
## $ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, …
## $ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64…
## $ table <dbl> 55, 61, 65, 58, 58, 57, 57, 55, 61, 61, 55, 56, 61, 54, 62, 58…
## $ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 34…
## $ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.…
## $ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.…
## $ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.…
```
The `diamonds` dataset has variables `carat, cut, color, clarity, depth, table, price, x, y, z`.
### 8\.1\.4 **q3** Run `summary(diamonds)`; what are the common values for each of the
variables? How widely do each of the variables vary?
*Hint*: The `Median` and `Mean` are common values, while `Min` and `Max` give us
a sense of variation.
**Observations**:
* `carat` seems to be bounded between `0` and `5`
* The highest\-priced diamond in this set is $18,823!
* Some of the variables do not have `min, max` etc. values. These are *factors*; variables that take one of a finite set of possible values.
You should always analyze your dataset in the simplest way possible, build
hypotheses, and devise more specific analyses to probe those hypotheses. The
`glimpse()` and `summary()` functions are two of the simplest tools we have.
8\.2 The Pipe Operator
----------------------
Throughout this class we’re going to make heavy use of the *pipe operator*
`%>%`. This handy little function will help us make our code more readable.
Whenever you see `%>%`, you can translate that into the word “then”. For
instance
```
diamonds %>%
group_by(cut) %>%
summarize(carat_mean = mean(carat))
```
```
## # A tibble: 5 × 2
## cut carat_mean
## <ord> <dbl>
## 1 Fair 1.05
## 2 Good 0.849
## 3 Very Good 0.806
## 4 Premium 0.892
## 5 Ideal 0.703
```
Would translate into the tiny “story”
* Take the `diamonds` dataset, *then*
* Group it by the variable `cut`, *then*
* summarize it by computing the `mean` of `carat`
*What the pipe actually does*. The pipe operator `LHS %>% RHS` takes its
left\-hand side (LHS) and inserts it as the first argument to the function on
its right\-hand side (RHS). So the pipe will let us take `glimpse(diamonds)` and
turn it into `diamonds %>% glimpse()`.
### 8\.2\.1 **q4** Use the pipe operator to re\-write `summary(diamonds)`.
```
diamonds %>% summary()
```
```
## carat cut color clarity depth
## Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00
## 1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00
## Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80
## Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75
## 3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50
## Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00
## J: 2808 (Other): 2531
## table price x y
## Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000
## 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720
## Median :57.00 Median : 2401 Median : 5.700 Median : 5.710
## Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735
## 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540
## Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900
##
## z
## Min. : 0.000
## 1st Qu.: 2.910
## Median : 3.530
## Mean : 3.539
## 3rd Qu.: 4.040
## Max. :31.800
##
```
8\.3 Reading Data
-----------------
So far we’ve only been looking at built\-in datasets. Ultimately, we’ll want to read in our own data. We’ll get to the art of loading and *wrangling* data later, but for now, know that the `readr` package provides us tools to read data. Let’s quickly practice loading data below.
### 8\.3\.1 **q5** Use the function `read_csv()` to load the file `"./data/tiny.csv"`.
```
df_q5 <-
read_csv("./data/tiny.csv")
```
```
## Rows: 2 Columns: 2
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## dbl (2): x, y
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
```
```
df_q5
```
```
## # A tibble: 2 × 2
## x y
## <dbl> <dbl>
## 1 1 2
## 2 3 4
```
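As the message above suggests, you can quiet the column\-specification printout by setting `show_col_types = FALSE` (a quick sketch):
```
## Optional: suppress the column specification message
df_q5 <- read_csv("./data/tiny.csv", show_col_types = FALSE)
```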
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/setup-documentation.html |
9 Setup: Documentation
======================
*Purpose*: No programmer memorizes every fact about every function. Expert
programmers get used to quickly reading *documentation*, which allows them to
look up the facts they need, when they need them. Just as you had to learn how
to read English, you will have to learn how to consult documentation. This
exercise will get you started.
*Reading*: [Getting help with R](https://www.r-project.org/help.html) (Vignettes and Code Demonstrations)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
The `vignette()` function allows us to look up
[vignettes](https://stat.ethz.ch/R-manual/R-devel/library/utils/html/vignette.html);
short narrative\-form tutorials associated with a package, written by its
developers.
### 9\.0\.1 **q1** Use `vignette(package = ???)` (fill in the ???) to look up vignettes
associated with `"dplyr"`. What vignettes are available?
```
vignette(package = "dplyr")
```
The `dplyr` package has vignettes `compatibility`, `dplyr`, `programming`,
`two-table`, and `window-functions`. We’ll cover many of these topics
in this course!
Once we know *what* vignettes are available, we can use the same function to
read a particular vignette.
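For example, the introductory vignette is itself named `dplyr` (one of the vignettes listed in q1), so the call looks like this (a quick sketch):
```
vignette("dplyr", package = "dplyr")
```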
### 9\.0\.2 **q2** Use `vignette(???, package = "dplyr")` to read the vignette on `dplyr`.
Read this vignette up to the first note on `filter()`. Use `filter()` to select
only those rows of the `iris` dataset where `Species == "setosa"`.
*Note*: This should open up your browser.
```
iris %>%
as_tibble() %>%
filter(
# TODO: Filter on Species "setosa"
Species == "setosa"
)
```
```
## # A tibble: 50 × 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 40 more rows
```
Vignettes are useful when we only know *generally* what we’re looking for. Once
we know the verbs (functions) we want to use, we need more specific help.
### 9\.0\.3 **q3** Remember back to `e-setup02-functions`; how do we look up help for a
specific function?
We have a few options:
* We can use `?function` to look up the help for a function
* We can execute `function` (without parentheses) to show the source code (see the sketch below)
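Here is the sketch, using `dplyr::filter()` as an example:
```
?filter # open the help page for filter()
dplyr::filter # print the function's source code (no parentheses)
```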
Sometimes we’ll be working with a function, but we won’t *quite* know how to get
it to do what we need. In this case, consulting the function’s documentation can
be *extremely* helpful.
### 9\.0\.4 **q4** Use your knowledge of documentation lookup to answer the following
question: How could we `filter` the `iris` dataset to return only those rows
with `Sepal.Length` between `5.1` and `6.4`?
```
iris %>%
as_tibble() %>%
filter(
5.1 <= Sepal.Length,
Sepal.Length <= 6.4
)
```
```
## # A tibble: 83 × 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 5.4 3.9 1.7 0.4 setosa
## 3 5.4 3.7 1.5 0.2 setosa
## 4 5.8 4 1.2 0.2 setosa
## 5 5.7 4.4 1.5 0.4 setosa
## 6 5.4 3.9 1.3 0.4 setosa
## 7 5.1 3.5 1.4 0.3 setosa
## 8 5.7 3.8 1.7 0.3 setosa
## 9 5.1 3.8 1.5 0.3 setosa
## 10 5.4 3.4 1.7 0.2 setosa
## # … with 73 more rows
```
We have at least two options:
* We can use two lines of filter
* We can use the helper function `between()` (see the sketch below)
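Here is the `between()` sketch; it is equivalent to the two\-line filter above:
```
iris %>%
  as_tibble() %>%
  filter(between(Sepal.Length, 5.1, 6.4))
```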
On other occasions we’ll know a function, but would like to know about other,
related functions. In this case, it’s useful to be able to trace the `function`
back to its parent `package`. Then we can read the vignettes on the package to
learn more.
### 9\.0\.5 **q5** Look up the documentation on `cut_number`; what package does it come
from? What about `parse_number()`? What about `row_number()`?
* `ggplot2::cut_number()`
* `readr::parse_number()`
* `dplyr::row_number()` (short usage examples below)
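Short usage examples (a quick sketch, not part of the exercise):
```
ggplot2::cut_number(1:10, n = 2) # bin a vector into 2 groups with equal counts
readr::parse_number("price: $1,200") # extract the number 1200 from a string
dplyr::row_number(c(10, 3, 7)) # ranks of the values: 3 1 2
```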
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/communication-code-style.html |
10 Communication: Code Style
============================
*Purpose*: From the tidyverse style guide “Good coding style is like correct
punctuation: you can manage without it, butitsuremakesthingseasiertoread.” We
will follow the [tidyverse style guide](https://style.tidyverse.org/); Google’s
internal [R style guide](https://google.github.io/styleguide/Rguide.html) is
actually based on these guidelines!
*Reading*: [tidyverse style guide](https://style.tidyverse.org/).
*Topics*: [Spacing](https://style.tidyverse.org/syntax.html#spacing) (subsection only), [Pipes](https://style.tidyverse.org/pipes.html) (whole section)
*Reading Time*: \~ 10 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 10\.0\.1 **q1** Re\-write according to the style guide
*Hint*: The pipe operator `%>%` will help make this code more readable.
```
## Original code; hard to read
summarize(group_by(diamonds, cut), mean_price = mean(price))
```
```
## # A tibble: 5 × 2
## cut mean_price
## <ord> <dbl>
## 1 Fair 4359.
## 2 Good 3929.
## 3 Very Good 3982.
## 4 Premium 4584.
## 5 Ideal 3458.
```
```
diamonds %>%
group_by(cut) %>%
summarize(
mean_price = mean(price)
)
```
```
## # A tibble: 5 × 2
## cut mean_price
## <ord> <dbl>
## 1 Fair 4359.
## 2 Good 3929.
## 3 Very Good 3982.
## 4 Premium 4584.
## 5 Ideal 3458.
```
### 10\.0\.2 **q2** Re\-write according to the style guide
*Hint*: There are *particular rules* about spacing!
```
## NOTE: You can copy this code to the chunk below
iris %>%
mutate(Sepal.Area=Sepal.Length*Sepal.Width) %>%
group_by( Species ) %>%
summarize_if(is.numeric,mean)%>%
ungroup() %>%
pivot_longer( names_to="measure",values_to="value",cols=-Species ) %>%
arrange(value )
```
```
## # A tibble: 15 × 3
## Species measure value
## <fct> <chr> <dbl>
## 1 setosa Petal.Width 0.246
## 2 versicolor Petal.Width 1.33
## 3 setosa Petal.Length 1.46
## 4 virginica Petal.Width 2.03
## 5 versicolor Sepal.Width 2.77
## 6 virginica Sepal.Width 2.97
## 7 setosa Sepal.Width 3.43
## 8 versicolor Petal.Length 4.26
## 9 setosa Sepal.Length 5.01
## 10 virginica Petal.Length 5.55
## 11 versicolor Sepal.Length 5.94
## 12 virginica Sepal.Length 6.59
## 13 versicolor Sepal.Area 16.5
## 14 setosa Sepal.Area 17.3
## 15 virginica Sepal.Area 19.7
```
```
iris %>%
mutate(Sepal.Area = Sepal.Length * Sepal.Width) %>%
group_by(Species) %>%
summarize_if(is.numeric, mean) %>%
ungroup() %>%
pivot_longer(names_to = "measure", values_to = "value", cols = -Species) %>%
arrange(value)
```
```
## # A tibble: 15 × 3
## Species measure value
## <fct> <chr> <dbl>
## 1 setosa Petal.Width 0.246
## 2 versicolor Petal.Width 1.33
## 3 setosa Petal.Length 1.46
## 4 virginica Petal.Width 2.03
## 5 versicolor Sepal.Width 2.77
## 6 virginica Sepal.Width 2.97
## 7 setosa Sepal.Width 3.43
## 8 versicolor Petal.Length 4.26
## 9 setosa Sepal.Length 5.01
## 10 virginica Petal.Length 5.55
## 11 versicolor Sepal.Length 5.94
## 12 virginica Sepal.Length 6.59
## 13 versicolor Sepal.Area 16.5
## 14 setosa Sepal.Area 17.3
## 15 virginica Sepal.Area 19.7
```
### 10\.0\.3 **q3** Re\-write according to the style guide
*Hint*: What do we do about long lines?
```
iris %>%
group_by(Species) %>%
summarize(Sepal.Length = mean(Sepal.Length), Sepal.Width = mean(Sepal.Width), Petal.Length = mean(Petal.Length), Petal.Width = mean(Petal.Width))
```
```
## # A tibble: 3 × 5
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.01 3.43 1.46 0.246
## 2 versicolor 5.94 2.77 4.26 1.33
## 3 virginica 6.59 2.97 5.55 2.03
```
```
iris %>%
group_by(Species) %>%
summarize(
Sepal.Length = mean(Sepal.Length),
Sepal.Width = mean(Sepal.Width),
Petal.Length = mean(Petal.Length),
Petal.Width = mean(Petal.Width)
)
```
```
## # A tibble: 3 × 5
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 setosa 5.01 3.43 1.46 0.246
## 2 versicolor 5.94 2.77 4.26 1.33
## 3 virginica 6.59 2.97 5.55 2.03
```
The following is an *addition* I’m making to the “effective styleguide” for the
class: Rather than doing this:
```
## NOTE: No need to edit, just an example
ggplot(diamonds, aes(carat, price)) +
geom_point()
```
Instead, do this:
```
## NOTE: No need to edit, just an example
diamonds %>%
ggplot(aes(carat, price)) +
geom_point()
```
This may seem like a small difference (it is), but getting in this habit will
pay off when we start combining data operations with plotting; for instance:
```
## NOTE: No need to edit, just an example
diamonds %>%
filter(1.5 <= carat, carat <= 2.0) %>%
ggplot(aes(carat, price)) +
geom_point()
```
Getting in the habit of “putting the data first” will make it easier for you to
add preprocessing steps. Also, you can easily “disable” the plot to inspect your
preprocessing while debugging; that is:
```
## NOTE: No need to edit, just an example
diamonds %>%
filter(1.5 <= carat, carat <= 2.0) %>%
glimpse()
```
```
## Rows: 4,346
## Columns: 10
## $ carat <dbl> 1.50, 1.52, 1.52, 1.50, 1.50, 1.50, 1.51, 1.52, 1.52, 1.50, 1.…
## $ cut <ord> Fair, Good, Good, Fair, Good, Premium, Good, Fair, Premium, Pr…
## $ color <ord> H, E, E, H, G, H, G, H, I, H, I, F, F, I, H, D, G, F, H, H, E,…
## $ clarity <ord> I1, I1, I1, I1, I1, I1, I1, I1, I1, I1, I1, I1, I1, I1, I1, I1…
## $ depth <dbl> 65.6, 57.3, 57.3, 69.3, 57.4, 60.1, 64.0, 64.9, 61.2, 61.1, 63…
## $ table <dbl> 54, 58, 58, 61, 62, 57, 59, 58, 58, 59, 61, 59, 56, 58, 61, 60…
## $ price <int> 2964, 3105, 3105, 3175, 3179, 3457, 3497, 3504, 3541, 3599, 36…
## $ x <dbl> 7.26, 7.53, 7.53, 6.99, 7.56, 7.40, 7.29, 7.18, 7.43, 7.37, 7.…
## $ y <dbl> 7.09, 7.42, 7.42, 6.81, 7.39, 7.28, 7.17, 7.13, 7.35, 7.26, 7.…
## $ z <dbl> 4.70, 4.28, 4.28, 4.78, 4.29, 4.42, 4.63, 4.65, 4.52, 4.47, 4.…
```
```
## ggplot(aes(carat, price)) +
## geom_point()
```
I’ll enforce this “data first” style, but after this class you are (of course)
free to write code however you like!
### 10\.0\.4 **q4** Re\-write according to the style guide
*Hint*: Put the data first!
```
ggplot(
iris %>%
pivot_longer(
names_to = c("Part", ".value"),
names_sep = "\\.",
cols = -Species
),
aes(Width, Length, color = Part)
) +
geom_point() +
facet_wrap(~Species)
```
```
iris %>%
pivot_longer(
names_to = c("Part", ".value"),
names_sep = "\\.",
cols = -Species
) %>%
ggplot(aes(Width, Length, color = Part)) +
geom_point() +
facet_wrap(~Species)
```
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/setup-rstudio-shortcuts.html |
11 Setup: RStudio Shortcuts
===========================
*Purpose*: Your ability to get stuff done is highly dependent on your fluency
with your tools. One aspect of fluency is knowing and *using* keyboard
shortcuts. In this exercise, we’ll go over some of the most important ones.
*Reading*: [Keyboard
shortcuts](https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts); [Code Chunk Options](https://rmarkdown.rstudio.com/lesson-3.html)
*Note*: Use this reading to look up answers to the questions below. Rather than
*memorizing* this information, I recommend you download a
[cheatsheet](https://rstudio.com/wp-content/uploads/2016/01/rstudio-IDE-cheatsheet.pdf),
and either print it out or save it in a convenient place on your computer. Get
used to referencing your cheatsheets while doing data science—practice makes
perfect!
### 11\.0\.1 **q1** What do the following keyboard shortcuts do?
* Within the script editor or a chunk
+ `Alt` \+ `-`: Insert the assignment operator `<-`
+ `Shift` \+ `Cmd/Ctrl` \+ `M`: Insert the pipe operator `%>%`
+ `Cmd/Ctrl` \+ `Enter`: If no text is highlighted, run the complete expression the cursor is on and advance to the next line; if text is highlighted, run the selected text
+ `F1` (Note: on a Mac you need to press `fn` \+ `F1`): If the cursor is on a function name, pull up the help page for that function
+ `Cmd/Ctrl` \+ `Shift` \+ `C`: Comment/uncomment the code at the cursor, or the highlighted code
* Within R Markdown
+ `Cmd/Ctrl` \+ `Alt` \+ `I`: Insert a chunk
* Within a chunk
+ `Shift` \+ `Cmd/Ctrl` \+ `Enter`: Run the current chunk
+ `Ctrl` \+ `I` (`Cmd` \+ `I`): Re\-indent selected lines. Try this below!
```
## This is what it should look like
c(
"foo",
"bar",
"goo",
"gah"
)
```
```
## [1] "foo" "bar" "goo" "gah"
```
### 11\.0\.2 **q2** For a chunk, what header option do you use to
* Run the code, don’t display it, but show its results?
`echo=FALSE`
* Run the code, but don’t display it or its results?
`include=FALSE`
### 11\.0\.3 **q3** How do you stop the code in a chunk from running once it has started?
Click the red square in the top right corner of the chunk.
### 11\.0\.4 **q4** How do you show the “Document Outline” in RStudio?
*Hint*: Try googling “rstudio document outline”
Use the keyboard shortcut `Ctrl + Shift + O` (Windows), `Super + Shift + O` (Mac), or click the icon in the top\-right of RStudio (it’s between the Publish and Compass icons).
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/setup-vector-basics.html |
12 Setup: Vector Basics
=======================
*Purpose*: *Vectors* are the most important objects we’ll work with when doing
data science. To that end, let’s learn some basics about vectors.
*Reading*: [Programming Basics](https://rstudio.cloud/learn/primers/1.2).
*Topics*: `vectors`
*Reading Time*: \~10 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 12\.0\.1 **q1** What single\-letter `R` function do you use to create vectors with specific entries? Use that function to create a vector with the numbers `1, 2, 3` below.
```
vec_q1 <- c(1, 2, 3)
vec_q1
```
```
## [1] 1 2 3
```
Use the following tests to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(length(vec_q1) == 3)
```
```
## [1] TRUE
```
```
assertthat::assert_that(mean(vec_q1) == 2)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
### 12\.0\.2 **q2** Did you know that you can use `c()` to *extend* a vector as well? Use this to add the extra entry `4` to `vec_q1`.
```
vec_q2 <- c(vec_q1, 4)
vec_q2
```
```
## [1] 1 2 3 4
```
Use the following tests to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(length(vec_q2) == 4)
```
```
## [1] TRUE
```
```
assertthat::assert_that(mean(vec_q2) == 2.5)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/setup-types.html |
13 Setup: Types
===============
*Purpose*: Vectors can hold data of only one *type*. While this isn’t a course on computer science, there are some type “gotchas” to look out for when doing data science. This exercise will help us get ahead of those issues.
*Reading*: [Types](https://rstudio.cloud/learn/primers/1.2)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
13\.1 Objects vs Strings
------------------------
### 13\.1\.1 **q1** Describe what is wrong with the code below.
```
## TASK: Describe what went wrong here
## Set our airport
airport <- "BOS"
## Check our airport value
airport == ATL
```
**Observations**:
* `ATL` (without quotes) is trying to refer to an object (variable); we would need to write `"ATL"` (with quotes) to produce a string; see the corrected sketch below.
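Here is the corrected sketch; quoting `"ATL"` compares two strings:
```
airport <- "BOS"
airport == "ATL" # FALSE; both sides are now strings
```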
13\.2 Casting
-------------
Sometimes our data will not be in the form we want; in this case we may need to *cast* the data to another format. A short sketch of each cast in action follows this list.
* `as.integer(x)` converts to integer
* `as.numeric(x)` converts to real (floating point)
* `as.character(x)` converts to character (string)
* `as.logical(x)` converts to logical (boolean)
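Here is that quick sketch, with the result of each cast in a comment:
```
as.integer("42") # 42L
as.numeric("3.14") # 3.14
as.character(90) # "90"
as.logical("TRUE") # TRUE
```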
### 13\.2\.1 **q2** Cast the following vector `v_string` to integers.
```
v_string <- c("00", "45", "90")
v_integer <- as.integer(v_string)
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(
assertthat::are_equal(
v_integer,
c(0L, 45L, 90L)
)
)
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/vis-data-visualization-basics.html |
14 Vis: Data Visualization Basics
=================================
*Purpose*: The most powerful way for us to learn about a dataset is to
*visualize the data*. Throughout this class we will make extensive use of the
*grammar of graphics*, a powerful graphical programming *grammar* that will
allow us to create just about any graph you can imagine!
*Reading*: [Data Visualization Basics](https://rstudio.cloud/learn/primers/1.1). *Note*: In RStudio use `Ctrl + Click` (Mac `Command + Click`) to follow the link.
*Topics*: `Welcome`, `A code template`, `Aesthetic mappings`.
*Reading Time*: \~ 30 minutes
### 14\.0\.1 **q1** Inspect the `diamonds` dataset. What do the `cut`, `color`, and `clarity` variables mean?
*Hint*: We learned how to inspect a dataset in `e-data-00-basics`!
```
?diamonds
```
### 14\.0\.2 **q2** Use your “standard checks” to determine what variables the dataset has.
```
glimpse(diamonds)
```
```
## Rows: 53,940
## Columns: 10
## $ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.…
## $ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Ver…
## $ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I,…
## $ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, …
## $ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64…
## $ table <dbl> 55, 61, 65, 58, 58, 57, 57, 55, 61, 61, 55, 56, 61, 54, 62, 58…
## $ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 34…
## $ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.…
## $ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.…
## $ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.…
```
The `diamonds` dataset has variables `carat, cut, color, clarity, depth, table, price, x, y, z`.
Now that we have the list of variables in the dataset, we know what we can visualize!
### 14\.0\.3 **q3** Using `ggplot`, visualize `price` vs `carat` with points. What trend do
you observe?
*Hint*: Usually the language `y` vs `x` refers to the `vertical axis` vs
`horizontal axis`. This is the opposite order from the way we often specify `x, y` pairs. Language is hard!
```
## TODO: Complete this code
ggplot(diamonds) +
geom_point(aes(x = carat, y = price))
```
**Observations**:
* `price` generally increases with `carat`
* The trend is not ‘clean’; there is no single curve in the relationship
14\.1 A note on *aesthetics*
----------------------------
The function `aes()` is short for *aesthetics*. Aesthetics in ggplot are the
mapping of variables in a dataframe to visual elements in the graph. For
instance, in the plot above you assigned `carat` to the `x` aesthetic, and
`price` to the `y` aesthetic. But there are *many more* aesthetics you can set,
some of which vary based on the `geom_` you are using to visualize. The next
question will explore this idea more.
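Before that, here is one more quick sketch (separate from the exercise) that maps a third variable, `depth`, to the `size` aesthetic:
```
## NOTE: Just an example, not one of the exercise answers
ggplot(diamonds) +
  geom_point(aes(x = carat, y = price, size = depth))
```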
### 14\.1\.1 **q4** Create a new graph to visualize `price`, `carat`, and `cut`
simultaneously.
*Hint*: Remember that you can add additional aesthetic mappings in `aes()`. Some options include `size`, `color`, and `shape`.
```
## TODO: Complete this code
ggplot(diamonds) +
geom_point(aes(x = carat, y = price, color = cut))
```
**Observations**:
* `price` generally increases with `carat`
* The `cut` helps explain the variation in price;
+ `Ideal` cut diamonds tend to be more expensive
+ `Fair` cut diamonds tend to be less expensive
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/communication-active-and-constructive-responding.html |
15 Communication: Active and Constructive Responding
====================================================
*Purpose*: We’re going to spend **a lot** of time in this class discussing datasets. To help prepare ourselves for productive discussions, we’re going to use some ideas from [positive psychology](https://en.wikipedia.org/wiki/Positive_psychology) to help make our discussions more pleasant and productive.
*Reading*: [GoStrengths: What is Active and Constructive Responding?](https://gostrengths.com/what-is-active-and-constructive-responding/)
*Reading Time*: \~ 10 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
15\.1 Review: Active and Constructive Responding (ACR)
------------------------------------------------------
The reading introduces the idea of *active and constructive responding* (ACR) to *good news*. The following table summarizes the four primary ways of responding to someone when presented with good news:
| | Active | Passive |
| --- | --- | --- |
| Constructive | Good! | Bad |
| Destructive | Bad | Danger zone! |
We’ll call this the *active\-passive\-constructive\-destructive* (APCD) framework.
For example, if your friend tells you “Hey, I just got an A on my exam!”, you could respond in a number of different ways:
* “That’s terrific! I remember you were studying really hard. What do you want to do to celebrate?” (*Active and Constructive*)
+ This is *active* in the sense that it responds to and builds upon the substance of your friend’s statement.
+ This is *constructive* in that it conveys positive emotion.
+ (This is the best way you can respond to good news \[1]!)
* “Oh that’s cool, I guess.” (*Passive and Constructive*)
+ This is *constructive* in that it conveys positive emotion.
+ However this is *passive* in that it does not actively engage with your friend’s statement.
* “But the professor was grading on a curve! Gah, that means I probably didn’t get an A.” (*Active and Destructive*)
+ This is *active* in the sense that it responds to and builds upon the substance of your friend’s statement.
+ However this is *destructive* in that it conveys a negative reaction to your friend’s statement.
* “Dude whatever. I got an internship!” (*Passive and Destructive*)
+ This is *passive* in that it does not actively engage with your friend’s statement.
+ This is also *destructive* in that it implicitly conveys a negative reaction by ignoring the substance of your friend’s statement.
+ (This is the worst way you can respond to good news!)
Active and constructive responding (ACR) has been shown to be associated with more positive interpersonal relationships \[1]. If you want to have better platonic or romantic relationships, you should practice ACR!
15\.2 Having Productive Discussions
-----------------------------------
What does ACR have to do with data science? Imagine you’re in a meeting with some data science colleagues, and the presenter shows the following graph.
```
## NOTE: No need to edit; just run and inspect
diamonds %>%
slice_sample(n = 1000) %>%
ggplot(aes(carat, price)) +
geom_point() +
scale_x_log10() +
scale_y_log10() +
theme_minimal() +
labs(x = "", y = "")
```
Let’s use the active\-passive\-constructive\-destructive (APCD) response framework above to analyze some *discussion scenarios*. Imagine the following scenario:
> *Presenter*: “Clearly, we can see a positive correlation between price and carat.”
>
>
> *Colleague*: “Um, what are the axes here?”
>
>
> *Presenter*: **“Obviously the graph shows price vs carat. On the next slide…”**
Here the bolded response is *actively* engaging with your colleague’s question, but is doing so in a very *destructive* way. It’s likely your colleague feels hurt, and is less likely now to speak up in meetings. That’s a bad thing!
### 15\.2\.1 **q1** Identify the nature of the bolded line below, rating it as either **active vs passive** and **constructive vs destructive**.
> *Presenter*: “Clearly, we can see a positive correlation between price and carat.”
>
>
> *Colleague*: “Um, what are the axes here?”
>
>
> *Presenter*: **“Woah good catch there, they’re unlabeled. The vertical shows price, and the horizontal shows carat.”**
**Observations**:
* This response is **active** as it directly engages with the colleague’s question.
* This response is **constructive** as it validates the colleague’s question before providing an answer.
### 15\.2\.2 **q2** Identify the nature of the bolded line below, rating it as either **active vs passive** and **constructive vs destructive**.
> *Presenter*: “Clearly, we can see a positive correlation between price and carat.”
>
>
> *Colleague*: “Um, what are the axes here?”
>
>
> *Presenter*: **“Woah good catch there, they’re unlabeled. Moving on….”**
**Observations**:
* This response is **passive** as it does not actually engage with the colleague’s question.
* This response is **constructive** as it validates the colleague’s question, even though it does not provide an answer.
**Intermediate Conclusion**: We can quite naturally think of discussion responses in the APCD framework. Therefore, we should try to give active and constructive responses in data discussions!
15\.3 Active and Constructive Questions
---------------------------------------
With a bit of imagination, we can apply the APCD framework to *questions* as well. Let’s keep looking at the same graph.
```
## NOTE: No need to edit; just run and inspect
diamonds %>%
slice_sample(n = 1000) %>%
ggplot(aes(carat, price)) +
geom_point() +
scale_x_log10() +
scale_y_log10() +
theme_minimal() +
labs(x = "", y = "")
```
Imagine the following scenario:
> *Presenter*: “Clearly, we can see a positive correlation between price and carat.”
>
>
> *Colleague*: **“Percival, why are you so terrible at making graphs?”**
>
>
> *Presenter*: “…”
Here the bolded question is *passively* engaging with the presenter’s content (it’s not referring to any issues in the graph, but rather making an *ad hominem* attack on the presenter) and *destructively* making a value judgment on the presenter. This is a truly horrendous way to ask a question.
Let’s look at a **far more effective** way to ask a question:
> *Presenter*: “Clearly, we can see a positive correlation between price and carat.”
>
>
> *Colleague*: **“Percival, this is an interesting finding but I’m confused by this graph—what do the axes represent?”**
>
>
> *Presenter*: “Oh, the labels are missing! The vertical shows price, and the horizontal shows carat.”
Here the bolded question is *actively* engaging with the presenter’s content (the colleague makes it clear what the problem with the graph is) and *constructively* validates the presenter’s findings. This is far more effective than the question posed above.
As a bonus, this question also illustrates another tip: Making “I” statements. To see the difference, let’s look at a subtly\-different version of the same question:
> *Presenter*: “Clearly, we can see a positive correlation between price and carat.”
>
>
> *Colleague*: “Percival, this is an interesting finding **but your graph is very confusing**—what do the axes represent?”
>
>
> *Presenter*: “Oh, the labels are missing! The vertical shows price, and the horizontal shows carat.”
In this second version the colleague is making a value\-judgment about **the graph itself**; in the first version the colleague is making a statement **about their own subjective experience**. The second version is debatable (the presenter may think the graph is clear), but only a jerk would disagree with a person’s subjective experience.
Let’s practice once more!
### 15\.3\.1 **q3** Identify the nature of the bolded line below, rating it as either **active vs passive** and **constructive vs destructive**.
> *Presenter*: “Clearly, we can see a positive correlation between price and carat.”
>
>
> *Colleague*: **“What is your opinion on the morality of aesthetic diamonds?”**
>
>
> *Presenter*: “Umm….”
**Observations**:
* This question is **passive** as it is a clear departure from the topic the presenter is trying to discuss.
* Honestly I find it difficult to rate this as constructive or destructive. This comment breaks the framework!
15\.4 References
---------------
* \[1] Gable, S. L., Reis, H. T., Impett, E. A., \& Asher, E. R. (2004\). What do you do when things go right? The intrapersonal and interpersonal benefits of sharing positive events. Journal of Personality and Social Psychology, 87, 228\-245\.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/stats-eda-basics.html |
16 Stats: EDA Basics
====================
*Purpose*: *Exploratory Data Analysis* (EDA) is a **crucial** skill for a
practicing data scientist. Unfortunately, much like human\-centered design EDA is
hard to teach. This is because EDA is **not** a strict procedure, so much as it
is a **mindset**. Also, much like human\-centered design, EDA is an *iterative,
nonlinear process*. There are two key principles to keep in mind when doing EDA:
* 1. Curiosity: Generate lots of ideas and hypotheses about your data.
* 2. Skepticism: Remain unconvinced of those ideas, unless you can find credible
patterns to support them.
Since EDA is both *crucial* and *difficult*, we will practice doing EDA *a lot*
in this course!
*Reading*: [Exploratory Data Analysis](https://rstudio.cloud/learn/primers/3.1)
*Topics*: (All topics)
*Reading Time*: \~45 minutes
*Note*: This exercise will consist of interpreting pre\-made graphs. You can run
the whole notebook to generate all the figures at once. Just make sure to do
all the exercises and write your observations!
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 16\.0\.1 **q0** Remember from `e02-data-basics` there were *simple checks* we’re supposed
to do? Do those simple checks on the diamonds dataset below.
```
## TODO: Recall time! Do our first simple checks on the diamonds dataset
diamonds %>% glimpse
```
```
## Rows: 53,940
## Columns: 10
## $ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.…
## $ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Ver…
## $ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I,…
## $ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, …
## $ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64…
## $ table <dbl> 55, 61, 65, 58, 58, 57, 57, 55, 61, 61, 55, 56, 61, 54, 62, 58…
## $ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 34…
## $ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.…
## $ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.…
## $ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.…
```
I’m going to walk you through a train of thought I had when studying the
diamonds dataset.
There are four standard “C’s” of
[judging](https://en.wikipedia.org/wiki/Diamond_(gemstone)) a diamond.\[1] These
are `carat, cut, color` and `clarity`, all of which are in the `diamonds`
dataset.
16\.1 Hypothesis 1
------------------
**Here’s a hypothesis**: `Ideal` is the “best” value of `cut` for a diamond.
Since an `Ideal` cut seems more labor\-intensive, I hypothesize that `Ideal` cut
diamonds are less numerous than other cuts.
### 16\.1\.1 **q1** Run the chunk below, and study the plot. Was hypothesis 1 correct? Why
or why not?
```
diamonds %>%
ggplot(aes(cut)) +
geom_bar()
```
**Observations**:
\- The truth is actually the opposite! `Ideal` cut diamonds are *more* numerous than all other cuts! Perhaps because cutting a diamond is easier than mining a new one, gemcutters add value to a diamond by striving for an ideal cut.
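As a quick numeric check (not part of the original exercise, just a sanity check on the bar chart), we can tabulate the counts directly:

```
## NOTE: Optional check; exact counts behind the bar chart
diamonds %>%
  count(cut, sort = TRUE)
```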
16\.2 Hypothesis 2
------------------
**Another hypothesis**: The `Ideal` cut diamonds should be the most pricey.
### 16\.2\.1 **q2\.1** Study the following graph; does it support, contradict, or not relate to hypothesis 2?
*Hint*: Is this an effective graph? Why or why not?
```
diamonds %>%
ggplot(aes(cut, price)) +
geom_point()
```
**Observations**:
\- This graph is virtually useless! There is severe overplotting. We cannot address Hypothesis 2 with this graph.
The following is a set of *boxplots*; the middle bar denotes the median, the box spans the middle half of the data (from the lower to the upper *quartile*), and the lines and dots denote extreme values and outliers.
### 16\.2\.2 **q2\.2** Study the following graph; does it support or contradict hypothesis 2?
```
diamonds %>%
ggplot(aes(cut, price)) +
geom_boxplot()
```
**Observations**:
\- Surprisingly, `Ideal` diamonds tend to be the *least* pricey! This was very surprising to me.
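As a quick numeric check (my addition, not part of the original exercise), we can compute the median price for each cut; if the boxplot reading above is right, `Ideal` should show the lowest median:

```
## NOTE: Optional check of the boxplot reading
diamonds %>%
  group_by(cut) %>%
  summarize(price_median = median(price))
```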
16\.3 Unraveling Hypothesis 2
-----------------------------
Upon making the graph in **q2\.2**, I was very surprised. So I did some reading
on diamond cuts. It turns out that some gemcutters [sacrifice cut for
carat](https://en.wikipedia.org/wiki/Diamond_(gemstone)#Cut). Could this effect
explain the surprising pattern above?
### 16\.3\.1 **q3** Study the following graph; does it support a “carat over cut” hypothesis? How might this relate to price?
*Hint*: The article linked above will help you answer these questions!
```
diamonds %>%
ggplot(aes(cut, carat)) +
geom_boxplot()
```
**Observations**:
\- The median of `Ideal` diamonds is a fair bit lower in `carat` than other cuts. This provides some evidence that gemcutters trade `cut` for `carat`.
\- The very largest `carat` diamonds tend to be of `Fair` cut; this makes sense, as cutting the gemstone will only reduce weight.
\- It seems that many diamond purchasers are more interested in carat than fine cut. This provides some rationale for why `Ideal` diamonds are cheaper; they are necessarily lower\-carat.
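A small supporting check (again, not part of the original exercise) is to compare the median carat across cuts:

```
## NOTE: Optional check; median carat for each cut
diamonds %>%
  group_by(cut) %>%
  summarize(carat_median = median(carat))
```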
16\.4 Footnotes
---------------
\[1] Don’t mistake my focus on `diamonds` as an endorsement of the diamond
industry. In my opinion aesthetic diamonds are a morally dubious scam.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-isolating-data.html |
17 Data: Isolating Data
=======================
*Purpose*: One of the keys to a successful analysis is the ability to *focus* on
particular topics. When analyzing a dataset, our ability to focus is tied to our
facility at *isolating data*. In this exercise, you will practice isolating
columns with `select()`, picking specific rows with `filter()`, and sorting your
data with `arrange()` to see what rises to the top.
*Reading*: [Isolating Data with dplyr](https://rstudio.cloud/learn/primers/2.2) (All topics)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(nycflights13) # For `flights` data
```
We’ll use the `nycflights13` dataset for this exercise; upon loading the
package, the data are stored in the variable name `flights`. For instance:
```
flights %>% glimpse()
```
```
## Rows: 336,776
## Columns: 19
## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2…
## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 558, 558, …
## $ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 600, 600, …
## $ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2, -2, -1…
## $ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 753, 849,…
## $ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 745, 851,…
## $ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -3, 7, -1…
## $ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV", "B6", "…
## $ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79, 301, 4…
## $ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN", "N394…
## $ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR", "LGA",…
## $ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL", "IAD",…
## $ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138, 149, 1…
## $ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 944, 733, …
## $ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 6, 6, 6…
## $ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 59, 0…
## $ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013-01-01 0…
```
### 17\.0\.1 **q1** Select all the variables whose name ends with `_time`.
```
df_q1 <- flights %>% select(ends_with("_time"))
df_q1
```
```
## # A tibble: 336,776 × 5
## dep_time sched_dep_time arr_time sched_arr_time air_time
## <int> <int> <int> <int> <dbl>
## 1 517 515 830 819 227
## 2 533 529 850 830 227
## 3 542 540 923 850 160
## 4 544 545 1004 1022 183
## 5 554 600 812 837 116
## 6 554 558 740 728 150
## 7 555 600 913 854 158
## 8 557 600 709 723 53
## 9 557 600 838 846 140
## 10 558 600 753 745 138
## # … with 336,766 more rows
```
The following is a *unit test* of your code; if you managed to solve task **q1**
correctly, the following code will execute without error.
```
## NOTE: No need to change this
assertthat::assert_that(
all(names(df_q1) %>% str_detect(., "_time$"))
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
### 17\.0\.2 **q2** Re\-arrange the columns to place `dest, origin, carrier` at the front, but retain all other columns.
*Hint*: The function `everything()` will be useful!
```
df_q2 <- flights %>% select(dest, origin, carrier, everything())
df_q2
```
```
## # A tibble: 336,776 × 19
## dest origin carrier year month day dep_time sched_dep_t…¹ dep_d…² arr_t…³
## <chr> <chr> <chr> <int> <int> <int> <int> <int> <dbl> <int>
## 1 IAH EWR UA 2013 1 1 517 515 2 830
## 2 IAH LGA UA 2013 1 1 533 529 4 850
## 3 MIA JFK AA 2013 1 1 542 540 2 923
## 4 BQN JFK B6 2013 1 1 544 545 -1 1004
## 5 ATL LGA DL 2013 1 1 554 600 -6 812
## 6 ORD EWR UA 2013 1 1 554 558 -4 740
## 7 FLL EWR B6 2013 1 1 555 600 -5 913
## 8 IAD LGA EV 2013 1 1 557 600 -3 709
## 9 MCO JFK B6 2013 1 1 557 600 -3 838
## 10 ORD LGA AA 2013 1 1 558 600 -2 753
## # … with 336,766 more rows, 9 more variables: sched_arr_time <int>,
## # arr_delay <dbl>, flight <int>, tailnum <chr>, air_time <dbl>,
## # distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>, and abbreviated
## # variable names ¹sched_dep_time, ²dep_delay, ³arr_time
```
Use the following to check your code.
```
## NOTE: No need to change this
assertthat::assert_that(
assertthat::are_equal(names(df_q2)[1:5], c("dest", "origin", "carrier", "year", "month"))
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
Since R will only show the first few columns of a tibble, using `select()` in
this fashion will help us see the values of particular columns.
### 17\.0\.3 **q3** Fix the following code. What is the mistake here? What is the code trying to accomplish?
```
flights %>% filter(dest == "LAX")
```
```
## # A tibble: 16,174 × 19
## year month day dep_time sched_de…¹ dep_d…² arr_t…³ sched…⁴ arr_d…⁵ carrier
## <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr>
## 1 2013 1 1 558 600 -2 924 917 7 UA
## 2 2013 1 1 628 630 -2 1016 947 29 UA
## 3 2013 1 1 658 700 -2 1027 1025 2 VX
## 4 2013 1 1 702 700 2 1058 1014 44 B6
## 5 2013 1 1 743 730 13 1107 1100 7 AA
## 6 2013 1 1 828 823 5 1150 1143 7 UA
## 7 2013 1 1 829 830 -1 1152 1200 -8 UA
## 8 2013 1 1 856 900 -4 1226 1220 6 AA
## 9 2013 1 1 859 900 -1 1223 1225 -2 VX
## 10 2013 1 1 921 900 21 1237 1227 10 DL
## # … with 16,164 more rows, 9 more variables: flight <int>, tailnum <chr>,
## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
## # minute <dbl>, time_hour <dttm>, and abbreviated variable names
## # ¹sched_dep_time, ²dep_delay, ³arr_time, ⁴sched_arr_time, ⁵arr_delay
```
The next error is *far more insidious*….
### 17\.0\.4 **q4** This code doesn’t quite do what the user intended. What went wrong?
```
flights %>% filter(dest == "BOS")
```
```
## # A tibble: 15,508 × 19
## year month day dep_time sched_de…¹ dep_d…² arr_t…³ sched…⁴ arr_d…⁵ carrier
## <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr>
## 1 2013 1 1 559 559 0 702 706 -4 B6
## 2 2013 1 1 639 640 -1 739 749 -10 B6
## 3 2013 1 1 801 805 -4 900 919 -19 B6
## 4 2013 1 1 803 810 -7 903 925 -22 AA
## 5 2013 1 1 820 830 -10 940 954 -14 DL
## 6 2013 1 1 923 919 4 1026 1030 -4 B6
## 7 2013 1 1 957 733 144 1056 853 123 UA
## 8 2013 1 1 1033 1017 16 1130 1136 -6 UA
## 9 2013 1 1 1154 1200 -6 1253 1306 -13 B6
## 10 2013 1 1 1237 1245 -8 1340 1350 -10 AA
## # … with 15,498 more rows, 9 more variables: flight <int>, tailnum <chr>,
## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
## # minute <dbl>, time_hour <dttm>, and abbreviated variable names
## # ¹sched_dep_time, ²dep_delay, ³arr_time, ⁴sched_arr_time, ⁵arr_delay
```
It will take practice to get used to when and when not to use quotations. Don’t
worry—we’ll get lots of practice!
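As a minimal illustration of the quotation issue (a sketch I’m adding, not from the original exercise): string *values* need quotes, while *column names* do not.

```
## Quoted: compares the `dest` column against the string "BOS"
flights %>% filter(dest == "BOS")

## Unquoted: R looks for an object named BOS and (unless one happens to
## exist in the environment) throws "object 'BOS' not found"
# flights %>% filter(dest == BOS)
```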
This dataset is called `nycflights13`; in what sense is it focused on New York
City? Let’s do a quick check to get an idea:
### 17\.0\.5 **q5** Perform **two** filters: first keep only flights whose *destination* is one of the three NYC airports (`JFK`, `LGA`, `EWR`), then keep only flights whose *origin* is one of those airports.
```
df_q5a <-
flights %>%
filter(
dest == "JFK" | dest == "LGA" | dest == "EWR"
)
df_q5b <-
flights %>%
filter(
origin == "JFK" | origin == "LGA" | origin == "EWR"
)
```
Use the following code to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(
df_q5a %>%
mutate(flag = dest %in% c("JFK", "LGA", "EWR")) %>%
summarize(flag = all(flag)) %>%
pull(flag)
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
df_q5b %>%
mutate(flag = origin %in% c("JFK", "LGA", "EWR")) %>%
summarize(flag = all(flag)) %>%
pull(flag)
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
The fact that **all** observations have their origin in NYC highlights (but
does not itself prove!) that these data were collected to study flights
departing from the NYC area.
*Aside*: Data are not just numbers. Data are *numbers with context*. Every
dataset is put together for some reason. This reason will inform what
observations (rows) and variables (columns) are *in the data*, and which are
*not in the data*. Conversely, thinking carefully about what data a person or
organization bothered to collect—and what they ignored—can tell you
something about the *perspective* of those who collected the data. Thinking
about these issues is partly what separates **data science** from programming or
machine learning. (`end-rant`)
### 17\.0\.6 **q6** Sort the flights in *descending* order by their `air_time`. Bring `air_time, dest` to the front. What can you tell about the longest flights?
```
df_q6 <-
flights %>%
select(air_time, dest, everything()) %>%
arrange(desc(air_time))
df_q6
```
```
## # A tibble: 336,776 × 19
## air_time dest year month day dep_time sched_dep…¹ dep_d…² arr_t…³ sched…⁴
## <dbl> <chr> <int> <int> <int> <int> <int> <dbl> <int> <int>
## 1 695 HNL 2013 3 17 1337 1335 2 1937 1836
## 2 691 HNL 2013 2 6 853 900 -7 1542 1540
## 3 686 HNL 2013 3 15 1001 1000 1 1551 1530
## 4 686 HNL 2013 3 17 1006 1000 6 1607 1530
## 5 683 HNL 2013 3 16 1001 1000 1 1544 1530
## 6 679 HNL 2013 2 5 900 900 0 1555 1540
## 7 676 HNL 2013 11 12 936 930 6 1630 1530
## 8 676 HNL 2013 3 14 958 1000 -2 1542 1530
## 9 675 HNL 2013 11 20 1006 1000 6 1639 1555
## 10 671 HNL 2013 3 15 1342 1335 7 1924 1836
## # … with 336,766 more rows, 9 more variables: arr_delay <dbl>, carrier <chr>,
## # flight <int>, tailnum <chr>, origin <chr>, distance <dbl>, hour <dbl>,
## # minute <dbl>, time_hour <dttm>, and abbreviated variable names
## # ¹sched_dep_time, ²dep_delay, ³arr_time, ⁴sched_arr_time
```
```
## NOTE: No need to change this!
assertthat::assert_that(
assertthat::are_equal(
df_q6 %>% head(1) %>% pull(air_time),
flights %>% pull(air_time) %>% max(na.rm = TRUE)
)
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
assertthat::are_equal(
df_q6 %>% filter(!is.na(air_time)) %>% tail(1) %>% pull(air_time),
flights %>% pull(air_time) %>% min(na.rm = TRUE)
)
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
assertthat::are_equal(
names(df_q6)[1:2],
c("air_time", "dest")
)
)
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
The longest flights are to “HNL”; this makes sense—Honolulu is quite far
from NYC!
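As a sanity check (my addition, not part of the original exercise), the same flights should also top a ranking by `distance`:

```
## NOTE: Optional check; the farthest destinations by distance
flights %>%
  arrange(desc(distance)) %>%
  select(distance, dest, air_time) %>%
  head()
```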
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/vis-bar-charts.html |
18 Vis: Bar Charts
==================
*Purpose*: *Bar charts* are a key tool for EDA. In this exercise, we’ll learn
how to construct a variety of different bar charts, as well as when—and when
*not*—to use various charts.
*Reading*: [Bar Charts](https://rstudio.cloud/learn/primers/3.2)
*Topics*: (All topics)
*Reading Time*: \~30 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 18\.0\.1 **q1** In the reading, you learned the relation between `geom_bar()` and
`geom_col()`. Use that knowledge to convert the following `geom_bar()` plot into
the same visual using `geom_col()`.
```
mpg %>%
count(trans) %>%
ggplot(aes(x = trans, y = n)) +
geom_col()
```
The reading mentioned that when using `geom_col()` our x\-y data should be
`1-to-1`. This next exercise will probe what happens when our data are not
`1-to-1`, and yet we use a `geom_col()`. Note that a
[one\-to\-one](https://en.wikipedia.org/wiki/Injective_function) function is one
where each input leads to a single output. For the `mpg` dataset, we can see
that the pairs `cty, hwy` clearly don’t have this one\-to\-one property:
```
## NOTE: Run this chunk for an illustration
mpg %>% filter(cty == 20)
```
```
## # A tibble: 11 × 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 audi a4 2 2008 4 manu… f 20 31 p comp…
## 2 audi a4 quattro 2 2008 4 manu… 4 20 28 p comp…
## 3 hyundai tiburon 2 2008 4 manu… f 20 28 r subc…
## 4 hyundai tiburon 2 2008 4 auto… f 20 27 r subc…
## 5 subaru forester … 2.5 2008 4 manu… 4 20 27 r suv
## 6 subaru forester … 2.5 2008 4 auto… 4 20 26 r suv
## 7 subaru impreza a… 2.5 2008 4 auto… 4 20 25 p comp…
## 8 subaru impreza a… 2.5 2008 4 auto… 4 20 27 r comp…
## 9 subaru impreza a… 2.5 2008 4 manu… 4 20 27 r comp…
## 10 volkswagen new beetle 2.5 2008 5 manu… f 20 28 r subc…
## 11 volkswagen new beetle 2.5 2008 5 auto… f 20 29 r subc…
```
### 18\.0\.2 **q2** The following code attempts to visualize `cty, hwy` from `mpg` using
`geom_col()`. There’s something fishy about the `hwy` values; what’s wrong here?
*Hint*: Try changing the `position` parameter for `geom_col()`.
```
mpg %>%
ggplot(aes(x = cty, y = hwy)) +
geom_col(position = "dodge")
```
**Observations**:
\- Since `position = "stack"` is the default for `geom_col()`, we do not see the real `hwy` values, but effectively a sum of all the `hwy` values at each `cty` value!
A more standard way to visualize this kind of data is a *scatterplot*, which
we’ll study later. For now, here’s an example of a more effective way to
visualize `cty` vs `hwy`:
```
## NOTE: Run this chunk for an illustration
mpg %>%
ggplot(aes(cty, hwy)) +
geom_point()
```
### 18\.0\.3 **q3** The following are two different visualizations of the `mpg` dataset.
Document your observations for both the `v1` and `v2` visuals. Then, determine
which—`v1` or `v2`—enabled you to make more observations. What was the
difference between the two visuals?
```
## TODO: Run this code without changing, describe your observations on the data
mpg %>%
ggplot(aes(class, fill = class)) +
geom_bar()
```
**Observations**:
In this dataset:
\- `SUV`’s are most numerous, followed by `compact` and `midsize`
\- There are very few `2seater` vehicles
```
## TODO: Run this code without changing, describe your observations on the data
mpg %>%
ggplot(aes(class, fill = drv)) +
geom_bar()
```
**Observations**:
In this dataset:
\- `SUV`’s are most numerous, followed by `compact` and `midsize`
\- There are very few `2seater` vehicles
\- `pickup`’s and `SUV`’s tend to have `4` wheel drive
\- `compact`’s and `midsize` tend to have `f` drive
\- All the `2seater` vehicles are `r` drive
**Compare `v1` and `v2`**:
* Which visualization—`v1` or `v2`—enabled you to make more observations?
+ `v2` enabled me to make more observations
* What was the difference between `v1` and `v2`?
+ `v1` showed the same variable `class` using two aesthetics
+ `v2` showed two variables `class` and `drv` using two aesthetics
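One more variant worth knowing (a sketch I’m adding, not part of the original exercise): `position = "fill"` rescales every bar to the same height, which makes it easier to compare the *proportion* of drive types within each class.

```
## NOTE: Optional variant; proportions of `drv` within each `class`
mpg %>%
  ggplot(aes(class, fill = drv)) +
  geom_bar(position = "fill")
```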
### 18\.0\.4 **q4** The following code has a bug; it does not do what its author intended.
Identify and fix the bug. What does the resulting graph tell you about the
relation between `manufacturer` and `class` of cars in this dataset?
*Note*: I use a `theme()` call to rotate the x\-axis labels. We’ll learn how to
do this in a future exercise.
```
mpg %>%
ggplot(aes(x = manufacturer, fill = class)) +
geom_bar(position = "dodge") +
theme(axis.text.x = element_text(angle = 270, vjust = 0.5, hjust = 0))
```
**Observations**
\- Certain manufacturers seem to favor particular classes of car. For instance,
*in this dataset*:
\- Jeep, Land Rover, Lincoln, and Mercury only have `suv`’s
\- Audi, Toyota, and Volkswagen favor `compact`
\- Dodge favors `pickup`
### 18\.0\.5 **q5** The following graph is hard to read. What other form of faceting would
make the visual more convenient to read? Modify the code below with your
suggested improvement.
```
mpg %>%
ggplot(aes(x = cyl)) +
geom_bar() +
facet_wrap(~ manufacturer)
```
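One possible rearrangement (my suggestion, not necessarily the intended answer): since `cyl` takes only a few values while there are many manufacturers, we can put `manufacturer` on the x\-axis and facet by `cyl`, which yields far fewer panels to scan.

```
## NOTE: One possible rearrangement (an assumption, not the official solution)
mpg %>%
  ggplot(aes(x = manufacturer)) +
  geom_bar() +
  facet_wrap(~ cyl) +
  theme(axis.text.x = element_text(angle = 270, vjust = 0.5, hjust = 0))
```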
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-deriving-quantities.html |
19 Data: Deriving Quantities
============================
*Purpose*: Often our data will not tell us *directly* what we want to know; in
these cases we need to *derive* new quantities from our data. In this exercise,
we’ll work with `mutate()` to create new columns by operating on existing
variables, and use `group_by()` with `summarize()` to compute aggregate
statistics (summaries!) of our data.
*Reading*: [Derive Information with dplyr](https://rstudio.cloud/learn/primers/2.3)
*Topics*: (All topics, except *Challenges*)
*Reading Time*: \~60 minutes
*Note*: I’m considering splitting this exercise into two parts; I welcome
feedback on this idea.
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 19\.0\.1 **q1** What is the difference between these two versions? How are they the same? How are they different?
```
## Version 1
filter(diamonds, cut == "Ideal")
```
```
## # A tibble: 21,551 × 10
## carat cut color clarity depth table price x y z
## <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
## 1 0.23 Ideal E SI2 61.5 55 326 3.95 3.98 2.43
## 2 0.23 Ideal J VS1 62.8 56 340 3.93 3.9 2.46
## 3 0.31 Ideal J SI2 62.2 54 344 4.35 4.37 2.71
## 4 0.3 Ideal I SI2 62 54 348 4.31 4.34 2.68
## 5 0.33 Ideal I SI2 61.8 55 403 4.49 4.51 2.78
## 6 0.33 Ideal I SI2 61.2 56 403 4.49 4.5 2.75
## 7 0.33 Ideal J SI1 61.1 56 403 4.49 4.55 2.76
## 8 0.23 Ideal G VS1 61.9 54 404 3.93 3.95 2.44
## 9 0.32 Ideal I SI1 60.9 55 404 4.45 4.48 2.72
## 10 0.3 Ideal I SI2 61 59 405 4.3 4.33 2.63
## # … with 21,541 more rows
```
```
## Version 2
diamonds %>% filter(cut == "Ideal")
```
```
## # A tibble: 21,551 × 10
## carat cut color clarity depth table price x y z
## <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
## 1 0.23 Ideal E SI2 61.5 55 326 3.95 3.98 2.43
## 2 0.23 Ideal J VS1 62.8 56 340 3.93 3.9 2.46
## 3 0.31 Ideal J SI2 62.2 54 344 4.35 4.37 2.71
## 4 0.3 Ideal I SI2 62 54 348 4.31 4.34 2.68
## 5 0.33 Ideal I SI2 61.8 55 403 4.49 4.51 2.78
## 6 0.33 Ideal I SI2 61.2 56 403 4.49 4.5 2.75
## 7 0.33 Ideal J SI1 61.1 56 403 4.49 4.55 2.76
## 8 0.23 Ideal G VS1 61.9 54 404 3.93 3.95 2.44
## 9 0.32 Ideal I SI1 60.9 55 404 4.45 4.48 2.72
## 10 0.3 Ideal I SI2 61 59 405 4.3 4.33 2.63
## # … with 21,541 more rows
```
These two lines carry out the same computation. However, Version 2 uses the pipe `%>%`,
which takes its left\-hand\-side (LHS) and inserts the LHS as the first argument of
the right\-hand\-side (RHS).
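As a tiny illustration of that rule (my addition): `x %>% f(y)` is the same as `f(x, y)`.

```
## NOTE: Minimal pipe demo; both lines compute the same value
sqrt(16)
16 %>% sqrt()
```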
The reading mentioned various kinds of *summary functions*, which are summarized
in the table below:
19\.1 Summary Functions
-----------------------
| Type | Functions |
| --- | --- |
| Location | `mean(x), median(x), quantile(x, p), min(x), max(x)` |
| Spread | `sd(x), var(x), IQR(x), mad(x)` |
| Position | `first(x), nth(x, n), last(x)` |
| Counts | `n_distinct(x), n()` |
| Logical | `sum(!is.na(x)), mean(y == 0)` |
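For instance (a small sketch, not part of the original exercise), several of these can be combined in a single `summarize()` call:

```
## NOTE: Optional illustration of a few summary functions
diamonds %>%
  summarize(
    price_mean = mean(price),
    price_median = median(price),
    price_iqr = IQR(price),
    n_rows = n(),
    frac_ideal = mean(cut == "Ideal") # a "logical" summary
  )
```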
### 19\.1\.1 **q2** Using `summarize()` and a logical summary function, determine the number of rows with `Ideal` `cut`. Save this value to a column called `n_ideal`.
```
df_q2 <-
diamonds %>%
summarize(n_ideal = sum(cut == "Ideal"))
df_q2
```
```
## # A tibble: 1 × 1
## n_ideal
## <int>
## 1 21551
```
The following test will verify that your `df_q2` is correct:
```
## NOTE: No need to change this!
assertthat::assert_that(
assertthat::are_equal(
df_q2 %>% pull(n_ideal),
21551
)
)
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
### 19\.1\.2 **q3** The function `group_by()` modifies how other dplyr verbs function. Uncomment the `group_by()` below, and describe how the result changes.
```
diamonds %>%
## group_by(color, clarity) %>%
summarize(price = mean(price))
```
```
## # A tibble: 1 × 1
## price
## <dbl>
## 1 3933.
```
* The commented version computes a summary over the entire dataframe
* The uncommented version computes summaries over groups of `color` and `clarity`
### 19\.1\.3 Vectorized Functions
| Type | Functions |
| --- | --- |
| Arithmetic ops. | `+, -, *, /, ^` |
| Modular arith. | `%/%, %%` |
| Logical comp. | `<, <=, >, >=, !=, ==` |
| Logarithms | `log(x), log2(x), log10(x)` |
| Offsets | `lead(x), lag(x)` |
| Cumulants | `cumsum(x), cumprod(x), cummin(x), cummax(x), cummean(x)` |
| Ranking | `min_rank(x), row_number(x), dense_rank(x), percent_rank(x), cume_dist(x), ntile(x)` |
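As a small illustration (my addition, not from the exercise), vectorized functions are typically used inside `mutate()` to build new columns row by row:

```
## NOTE: Optional illustration of vectorized functions inside mutate()
diamonds %>%
  select(carat, price) %>%
  mutate(
    price_per_carat = price / carat, # arithmetic
    log10_price = log10(price), # logarithm
    price_rank = min_rank(desc(price)) # ranking
  )
```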
### 19\.1\.4 **q4** Comment on why the difference is so large.
The `depth` variable is supposedly computed via `depth_computed = 100 * 2 * z / (x + y)`. Compute `diff = depth - depth_computed`: This is a measure of
discrepancy between the given and computed depth. Additionally, compute the
*coefficient of variation* `cov = sd(x) / mean(x)` for both `depth` and `diff`:
This is a dimensionless measure of variation in a variable. Assign the resulting
tibble to `df_q4`, and assign the appropriate values to `cov_depth` and
`cov_diff`. Comment on the relative values of `cov_depth` and `cov_diff`; why is
`cov_diff` so large?
*Note*: Confusingly, the documentation for `diamonds` leaves out the factor of `100` in the computation of `depth`.
```
df_q4 <-
diamonds %>%
mutate(
depth_computed = 100 * 2 * z / (x + y),
diff = depth - depth_computed
) %>%
summarize(
depth_mean = mean(depth, na.rm = TRUE),
depth_sd = sd(depth, na.rm = TRUE),
cov_depth = depth_sd / depth_mean,
diff_mean = mean(diff, na.rm = TRUE),
diff_sd = sd(diff, na.rm = TRUE),
cov_diff = diff_sd / diff_mean,
c_diff = IQR(diff, na.rm = TRUE) / median(diff, na.rm = TRUE)
)
df_q4
```
```
## # A tibble: 1 × 7
## depth_mean depth_sd cov_depth diff_mean diff_sd cov_diff c_diff
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 61.7 1.43 0.0232 0.00528 2.63 498. -7.50e12
```
**Observations**
* `cov_depth` is tiny; there’s not much variation in the depth, compared to its scale.
* `cov_diff` is enormous! This is because the mean difference between `depth` and `depth_computed` is small, which causes the `cov` to blow up.
The following test will verify that your `df_q4` is correct:
```
## NOTE: No need to change this!
assertthat::assert_that(abs(df_q4 %>% pull(cov_depth) - 0.02320057) < 1e-3)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(df_q4 %>% pull(cov_diff) - 497.5585) < 1e-3)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
There is nonzero difference between `depth` and the computed `depth`; this
raises some questions about how `depth` was actually computed! It’s often a good
idea to try to check derived quantities in your data, if possible. These can
sometimes uncover errors in the data!
### 19\.1\.5 **q5** Compute and observe
Compute the `price_mean = mean(price)`, `price_sd = sd(price)`, and `price_cov = price_sd / price_mean` for each `cut` of diamond. What observations can you make
about the various cuts? Do those observations match your expectations?
```
df_q5 <-
diamonds %>%
group_by(cut) %>%
summarize(
price_mean = mean(price),
price_sd = sd(price),
price_cov = price_sd / price_mean
)
df_q5
```
```
## # A tibble: 5 × 4
## cut price_mean price_sd price_cov
## <ord> <dbl> <dbl> <dbl>
## 1 Fair 4359. 3560. 0.817
## 2 Good 3929. 3682. 0.937
## 3 Very Good 3982. 3936. 0.988
## 4 Premium 4584. 4349. 0.949
## 5 Ideal 3458. 3808. 1.10
```
The following test will verify that your `df_q5` is correct:
```
## NOTE: No need to change this!
assertthat::assert_that(
assertthat::are_equal(
df_q5 %>%
select(cut, price_cov) %>%
mutate(price_cov = round(price_cov, digits = 3)),
tibble(
cut = c("Fair", "Good", "Very Good", "Premium", "Ideal"),
price_cov = c(0.817, 0.937, 0.988, 0.949, 1.101)
) %>%
mutate(cut = fct_inorder(cut, ordered = TRUE))
)
)
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
**Observations**:
* I would expect the `Ideal` diamonds to have the highest price, but that’s not the case!
* The `COV` for each cut is very large, on the order of 80 to 110 percent! To me, this implies that the other variables `clarity, carat, color` account for a large portion of the diamond price.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/vis-histograms.html |
20 Vis: Histograms
==================
*Purpose*: *Histograms* are a key tool for EDA. In this exercise we’ll get a little more practice constructing and interpreting histograms and densities.
*Reading*: [Histograms](https://rstudio.cloud/learn/primers/3.3)
*Topics*: (All topics)
*Reading Time*: \~20 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 20\.0\.1 **q1** Using the graphs generated in the chunks `q1-vis1` and `q1-vis2` below, answer:
* Which `class` has the most vehicles?
* Which `class` has the broadest distribution of `cty` values?
* Which graph—`vis1` or `vis2`—best helps you answer each question?
```
## NOTE: No need to modify
mpg %>%
ggplot(aes(cty, color = class)) +
geom_freqpoly(bins = 10)
```
* From this graph, it’s easy to see that `suv` is the most numerous class
```
## NOTE: No need to modify
mpg %>%
ggplot(aes(cty, color = class)) +
geom_density()
```
* From this graph, it’s easy to see that `subcompact` has the broadest distribution
In my opinion, it’s easier to see the broadness of `subcompact` by the density plot `q1-vis2`.
In the previous exercise, we learned how to *facet* a graph. Let’s use that part of the grammar of graphics to clean up the graph above.
### 20\.0\.2 **q2** Modify `q1-vis2` to use a `facet_wrap()` on the `class`. “Free” the vertical axis with the `scales` keyword to allow for a different y scale in each facet.
```
mpg %>%
ggplot(aes(cty)) +
geom_density() +
facet_wrap(~class, scales = "free_y")
```
In the reading, we learned that the “most important thing” to keep in mind with `geom_histogram()` and `geom_freqpoly()` is to *explore different binwidths*. We’ll explore this idea in the next question.
### 20\.0\.3 **q3** Analyze the following graph; make sure to test different binwidths. What patterns do you see? Which patterns remain as you change the binwidth?
```
## TODO: Run this chunk; play with different bin widths
diamonds %>%
filter(carat < 1.1) %>%
ggplot(aes(carat)) +
geom_histogram(binwidth = 0.01, boundary = 0.005) +
scale_x_continuous(
breaks = seq(0, 1, by = 0.1)
)
```
**Observations**
\- The largest numbers of diamonds tend to fall on *or just above* even tenths of a carat.
\- The peak near `0.5` is very broad, compared to the others.
\- The peak at `0.3` contains the most diamonds
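One simple way to act on this advice is to re\-run the chunk with a different binwidth and compare; the sketch below (with an arbitrary binwidth of `0.05`) is one such variation, not part of the original exercise. Patterns that survive a change in binwidth are generally more trustworthy than ones that vanish.
```
## A sketch re-running the same plot with a coarser (arbitrary) binwidth
diamonds %>%
  filter(carat < 1.1) %>%
  ggplot(aes(carat)) +
  geom_histogram(binwidth = 0.05, boundary = 0.025) +
  scale_x_continuous(
    breaks = seq(0, 1, by = 0.1)
  )
```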
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/reproducibility-structuring-directories.html |
21 Reproducibility: Structuring Directories
===========================================
*Purpose*: A key part of being successful in data science is *getting
organized*. In this exercise, you’ll set yourself up for future success by
learning how to *structure* a project in terms of its folder hierarchy: its
*directories*. You’ll learn how to isolate your projects, structure your
subdirectories, and protect your raw data.
*Reading*: [Project Structure](https://reproducible-science-curriculum.github.io/organization-RR-Jupyter/02-directories/) (Skim: Provided so you can see an example of a well\-structured project directory)
21\.1 Reading: Structuring a Project
------------------------------------
In this exercise, we’ll assume that you are working on a project.
21\.2 Isolate your project
--------------------------
While there are [many
ways](https://medium.com/javascript-in-plain-english/how-to-organize-project-files-4b9d9fd02e73)
to *organize* a given project, you should **definitely** organize your work into
individual *projects*. Each project should have its own directory, and should
generally have all its related files under that directory.
For instance, `data-science-curriculum` is in my mind an entire project, containing both exercises and challenges. Thus for me, it makes sense to organize all of these *subdirectories* under `data-science-curriculum`.
21\.3 Structure your subdirectories
-----------------------------------
Subdirectories are folders (directories!) *under* the main (root) project
folder. For instance, the `data-science-curriculum` has subdirectories
`exercises`, `challenges`, and more.
Each subdirectory should have some *purpose*. For instance, there are both
`exercises` and `exercises_sequenced` directories. I build the exercises in
`exercises` out\-of\-sequence, because I don’t want to have to decide on which
specific day each exercise should occur *before* I start building it. However,
it’s useful for you the student to be able to see all the exercises in daily
order, so I have an additional directory where I can place the sequenced
exercises.
Knitr will *automatically* create some sensible subdirectories for report files;
for instance, when you knit a document knitr will automatically create a
`filename_files` directory with image files. This helps prevent clutter in your
directory, and places the file adjacent to the source file in your
(sub)directory.
Your project can have deeper levels of *hierarchy*: *subdirectories under
subdirectories*. For instance, the `exercises` folder has a subdirectory
`exercises/data`; this is where exercise datasets live. In your own projects,
you should keep your data *unaltered* in some kind of data directory.
21\.4 Protect your data!
------------------------
**Never** make alterations to your raw data! Even if your data have errors, it’s
far better to *document* those errors somewhere, so that you have a paper trail
for what changed and why. It is far better practice to keep your data unedited,
and simply write *processed* data to an additional file.
If your data needs preprocessing—say to fix errors or simply to wrangle
data—write a preprocessing Rmarkdown file that takes in the raw data, and
writes out a processed version of the data. You can then load the processed data
in later scripts, and all of your processing steps will be permanently
documented.
21\.5 Practice: Set up a project
--------------------------------
Let’s set you up for future success by creating a directory for your Project 1\.
### 21\.5\.1 **q1** Create a project directory in your personal data science repo called
`p01-name`. If you already know what you’re going to work on, feel free to
replace `name` with something sensible. If you don’t know what to call it yet,
don’t worry—you can change it later!
### 21\.5\.2 **q2** Create a `data` directory under `p01-name`; you should then have
`p01-name/data`. This is where you will put any data files you read or write in the project.
There’s a *trick* to committing an empty folder with Git. We’ll need to
introduce some kind of file to preserve an empty directory structure.
### 21\.5\.3 **q3** Create an empty file called `.gitignore` under your `data` folder. You
can do this from the root of your data science repo by using the following from
your command line:
```
# Move to your data science directory
$ cd ~/path/to/your/data-science-work # Replace with your real directory!
# Add the special .gitignore file
$ touch p01-name/data/.gitignore
```
A `.gitignore` file is a [useful
tool](https://www.atlassian.com/git/tutorials/saving-changes/gitignore); it
tells git what kinds of files to *ignore* when it comes to tracking files. We’re
using it here for a different purpose; we can now *commit* that `.gitignore` in
order to commit our directory structure.
```
$ git add p01-name/data/.gitignore
$ git commit -m "add p01 directory structure"
```
Commit and push your directory structure. You can fill this in once you start
Project 1\.
**Key Takeaways**:
* **Isolate** your project by making a folder *for that specific project*.
* **Structure** your project by creating directories for different filetypes: data, analysis code, and outputs.
* **Protect** your raw data somewhere, **do not** make irreversible edits to the raw data.
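To make this concrete, here is one *hypothetical* layout consistent with these takeaways; the folder and file names below are placeholders, not requirements:
```
# A hypothetical project layout (names are placeholders)
p01-name/
  data/
    .gitignore       # placeholder so Git tracks the otherwise-empty folder
    raw/             # unaltered raw data (never edited)
    processed/       # written out by preprocessing scripts
  analysis/
    preprocess.Rmd   # reads data/raw, writes data/processed
    eda.Rmd          # analysis using the processed data
  outputs/
    figures/
```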
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/stats-intro-to-densities.html |
22 Stats: Intro to Densities
============================
*Purpose*: We will use *densities* to model and understand data. To that end, we’ll need to understand some key ideas about densities. Namely, we’ll discuss *random variables*, how we model random variables with densities, and the importance of *random seeds*. We’re not going to learn everything we need to work with data in this single exercise, but this will be our starting point.
*Reading*: (None; this exercise *is* the reading.)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
22\.1 Densities
---------------
Fundamentally, a density is a *function*: It takes an input, and returns an output. A density has a special interpretation though; it represents the *relative chance* of particular values occurring.
First, let’s look at a density of some real data:
```
mpg %>%
filter(class == "midsize") %>%
ggplot(aes(cty, color = class)) +
geom_density()
```
This density indicates that a value around `18` is more prevalent than other values of `cty`, at least for `midsize` vehicles in this dataset. Note that the plot above was generated by taking a set of `cty` values and smoothing them to create a density.
Next, let’s look at a normal density:
```
## NOTE: No need to change this!
df_norm_d <-
tibble(z = seq(-5, +5, length.out = 100)) %>%
mutate(d = dnorm(z, mean = 0, sd = 1))
df_norm_d %>%
ggplot(aes(z, d)) +
geom_line() +
labs(
y = "density"
)
```
This plot of a *normal density* shows that the value `z = 0` is most likely, while values both larger and smaller are less likely to occur. Note that the density above was generated by `dnorm(z, mean = 0, sd = 1)`. The values `[mean, sd]` are called *parameters*. Rather than requiring an entire dataset to define this density, the normal is fully defined by specifying values for `mean` and `sd`.
However, note that the normal density is quite a bit simpler than the density of `cty` values above. This is a general trend; “named” densities like the normal are simplified models we use to represent real data. Picking an appropriate model is part of the challenge in statistics.
`R` comes with a great many densities implemented:
| Density | Code |
| --- | --- |
| Normal | `norm` |
| Uniform | `unif` |
| Chi\-squared | `chisq` |
| … | … |
In `R` we access a density by prefixing its name with the letter `d`.
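For example (a small sketch; the `p` and `q` prefixes are not used in this exercise, but follow the same naming pattern):
```
## The standard prefixes, using the normal family as an example
dnorm(0)   # density at z = 0 (about 0.399)
pnorm(0)   # cumulative probability P(Z <= 0), i.e. 0.5
qnorm(0.5) # quantile: the z value with half the probability below it, i.e. 0
rnorm(3)   # three random draws
```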
### 22\.1\.1 **q1** Modify the code below to evaluate a uniform density with `min = -1` and `max = +1`. Assign this value to `d`.
```
df_q1 <-
tibble(x = seq(-5, +5, length.out = 100)) %>%
mutate(d = dunif(x, min = -1, max = +1))
```
Use the following test to check your answer.
```
## NOTE: No need to change this
assertthat::assert_that(
all(
df_q1 %>%
filter(-1 <= x, x <= +1) %>%
mutate(flag = d > 0) %>%
pull(flag)
)
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
all(
df_q1 %>%
filter((x < -1) | (+1 < x)) %>%
mutate(flag = d == 0) %>%
pull(flag)
)
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
22\.2 Drawing Samples: A Model for Randomness
---------------------------------------------
What does it *practically* mean for a value to be more or less likely? A density gives information about a *random variable*. A random variable is a quantity that takes a random value every time we observe it. For instance: If we roll a six\-sided die, it will (unpredictably) land on values `1` through `6`. In my research, I use random variables to model material properties: Every component that comes off an assembly line will have a different strength associated with it. In order to design for a desired rate of failure, I model this variability with a random variable. A single value from a random variable—a roll of the die or a single value of material strength—is called a *realization*.
Let’s illustrate this idea by using a normal density to define a *random variable* `Z`. This is sometimes denoted `Z ~ norm(mean, sd)`, which means `Z` is distributed according to a normal density with `mean` and `sd`. The code below illustrates what it looks like when we draw a few samples of `Z`.
```
## NOTE: No need to change this!
df_norm_samp <- tibble(Z = rnorm(n = 100, mean = 0, sd = 1))
ggplot() +
## Plot a *sample*
geom_histogram(
data = df_norm_samp,
mapping = aes(Z, after_stat(density)),
bins = 20
) +
## Plot a *density*
geom_line(
data = df_norm_d,
mapping = aes(z, d)
)
```
Note that the histogram of `Z` values looks like a “blocky” version of the density. If we draw many more samples, the histogram will start to strongly resemble the density from which it was drawn:
```
## NOTE: No need to change this!
ggplot() +
## Plot a *sample*
geom_histogram(
data = tibble(Z = rnorm(n = 1e5, mean = 0, sd = 1)),
mapping = aes(Z, after_stat(density)),
bins = 40
) +
## Plot a *density*
geom_line(
data = df_norm_d,
mapping = aes(z, d)
)
```
To clarify some terms:
* A **realization** is a single observed value of a random variable. A realization is a single value, e.g. `rnorm(n = 1)`.
* A **sample** is a set of realizations. A sample is a set of values, e.g. `rnorm(n = 10)`.
* A **density** is a function that describes the relative chance of all the values a random variable might take. A density is a function, e.g. `dnorm(x)`.
### 22\.2\.1 **q2** Plot densities for `Z` values in `df_q2`. Use aesthetics to plot a separate curve for different values of `n`. Comment on which values of `n` give a density closer to “normal”.
*Hint*: You may need to use the `group` aesthetic, in addition to `color` or `fill`.
```
## NOTE: Use the data generated here
df_q2 <- map_dfr(
c(5, 10, 1e3),
function(nsamp) {
tibble(Z = rnorm(n = nsamp), n = nsamp)
}
)
## TODO: Create the figure
df_q2 %>%
ggplot(aes(Z, color = n, group = n)) +
geom_density()
```
**Observations**
\- The larger the value of `n`, the closer the density tends to be to normal.
22\.3 Not Everything is Normal
------------------------------
A *very common misconception* is that drawing more samples will make *anything* look like a normal density. **This is not the case!** The following exercise will help us understand that the normal density is *just a model*, and *not* what we should expect all randomness to look like.
### 22\.3\.1 **q3** Run the code below and inspect the resulting histogram. Increase `n_diamonds` below to increase the number of samples drawn: Do this a few times for an increasing number of samples. Answer the questions under *observations* below.
```
n_diamonds <- 1000
diamonds %>%
filter(cut == "Good") %>%
slice_sample(n = n_diamonds) %>%
ggplot(aes(carat)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
**Observations**:
* The histogram does not look normal for any `n_diamonds`, and indeed it should not. We’ve seen before that there are very “special” values of `carat` that gemcutters seem to favor. That “social” feature of diamonds is not at all captured by a normal density.
* Carat is a measure of weight and weight cannot be negative.
The normal density *is* a good model for some physical phenomena, such as the set of heights among a population. However, it is not always a good model. One reason the normal density “shows up” a lot is because it is involved in *estimation*, which we’ll talk more about when we later discuss the central limit theorem.
22\.4 Random Seeds: Setting the State
-------------------------------------
Remember that random variables are *random*. That means if we draw samples multiple times, we’re bound to get different values. For example, run the following code chunk multiple times.
```
## NOTE: No need to change; run this multiple times!
rnorm(5)
```
```
## [1] -1.3760786 -0.6709615 -0.5358250 0.9992815 -0.1919127
```
What this means is we’ll get a slightly different picture every time we draw a sample. **This is the challenge with randomness**: Since we could have drawn a different set of samples, we need to know the degree to which we can trust conclusions drawn from data. Being *statistically literate* means knowing how much to trust your data.
```
## NOTE: No need to change this!
df_samp_multi <-
tibble(Z = rnorm(n = 50, mean = 0, sd = 1), Run = "A") %>%
bind_rows(
tibble(Z = rnorm(n = 50, mean = 0, sd = 1), Run = "B")
) %>%
bind_rows(
tibble(Z = rnorm(n = 50, mean = 0, sd = 1), Run = "C")
)
ggplot() +
## Plot *fitted density*
geom_density(
data = df_samp_multi,
mapping = aes(Z, color = Run, group = Run)
) +
## Plot a *density*
geom_line(
data = df_norm_d,
mapping = aes(z, d)
)
```
*However* for simulation purposes, we often want to be able to repeat a calculation. In order to do this, we can set the *state* of our random number generator by setting the *seed*. To illustrate, try running the following chunk multiple times.
```
## NOTE: No need to change; run this multiple times!
set.seed(101)
rnorm(5)
```
```
## [1] -0.3260365 0.5524619 -0.6749438 0.2143595 0.3107692
```
The values are identical every time!
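As a small sketch (not part of the exercise; the object names are made up), you can verify this reproducibility directly:
```
## Setting the same seed before each draw reproduces the same sample
set.seed(101)
z_first <- rnorm(5)
set.seed(101)
z_second <- rnorm(5)
all(z_first == z_second) # TRUE
```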
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-pivoting-data.html |
23 Data: Pivoting Data
======================
*Purpose*: Data is easiest to use when it is *tidy*. In fact, the tidyverse
(including ggplot, dplyr, etc.) is specifically designed to use tidy data. But
not all data we’ll encounter is tidy! To that end, in this exercise we’ll learn
how to tidy our data by *pivoting*.
As a result of learning how to quickly *tidy* data, you’ll vastly expand the set
of datasets you can analyze. Rather than fighting with data, you’ll be able to
quickly wrangle and extract insights.
*Reading*: [Reshape Data](https://rstudio.cloud/learn/primers/4.1)
*Topics*: Welcome, Tidy Data (skip Gathering and Spreading columns)
*Reading Time*: \~10 minutes (this exercise contains more reading material)
*Optional readings*:
\- [selection language](https://tidyselect.r-lib.org/reference/language.html)
*Note*: Unfortunately, the RStudio primers have not been updated to use the most
up\-to\-date tidyr tools. Rather than learning the out\-of\-date functions `gather, spread`, we will instead learn how to use `pivot_longer` and `pivot_wider`.
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
23\.1 Tidy Data
---------------
Tidy data is a form of data where:
1. Each *variable* is in its own *column*
2. Each *observation* is in its own *row*
3. Each *value* is in its own *cell*
Not all data are presented in tidy form; in this case it can be difficult to
tell what the variables are. Let’s practice distinguishing between the *columns*
and the *variables*.
### 23\.1\.1 **q1** What are the variables in the following dataset?
```
## NOTE: No need to edit; execute
cases <- tribble(
~Country, ~`2011`, ~`2012`, ~`2013`,
"FR", 7000, 6900, 7000,
"DE", 5800, 6000, 6200,
"US", 15000, 14000, 13000
)
cases
```
```
## # A tibble: 3 × 4
## Country `2011` `2012` `2013`
## <chr> <dbl> <dbl> <dbl>
## 1 FR 7000 6900 7000
## 2 DE 5800 6000 6200
## 3 US 15000 14000 13000
```
* 1. Country, 2011, 2012, and 2013
* 2. Country, year, and some unknown quantity (n, count, etc.)
* 3. FR, DE, and US
```
## TODO: Modify with your multiple choice number answer
q1_answer <- 0
## NOTE: The following will test your answer
if (((q1_answer + 56) %% 3 == 1) & (q1_answer > 0)) {
"Correct!"
} else {
"Incorrect!"
}
```
```
## [1] "Incorrect!"
```
### 23\.1\.2 **q2** What are the variables in the following dataset?
```
## NOTE: No need to edit; execute
alloys <- tribble(
~thick, ~E_00, ~mu_00, ~E_45, ~mu_45, ~rep,
0.022, 10600, 0.321, 10700, 0.329, 1,
0.022, 10600, 0.323, 10500, 0.331, 2,
0.032, 10400, 0.329, 10400, 0.318, 1,
0.032, 10300, 0.319, 10500, 0.326, 2
)
alloys
```
```
## # A tibble: 4 × 6
## thick E_00 mu_00 E_45 mu_45 rep
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.022 10600 0.321 10700 0.329 1
## 2 0.022 10600 0.323 10500 0.331 2
## 3 0.032 10400 0.329 10400 0.318 1
## 4 0.032 10300 0.319 10500 0.326 2
```
* 1. thick, E\_00, mu\_00, E\_45, mu\_45, rep
* 2. thick, E, mu, rep
* 3. thick, E, mu, rep, angle
```
## TODO: Modify with your multiple choice number answer
q2_answer <- 0
## NOTE: The following will test your answer
if (((q2_answer + 38) %% 3 == 2) & (q2_answer > 0)) {
"Correct!"
} else {
"Incorrect!"
}
```
```
## [1] "Incorrect!"
```
23\.2 Pivoting: Examples
------------------------
The tidyr package (loaded as part of the tidyverse) comes with tools to *pivot* our data into tidy form. There are
two key tools: `pivot_longer` and `pivot_wider`. The names are suggestive of
their use. When our data are too wide we should `pivot_longer`, and when our
data are too long, we should `pivot_wider`.
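As a minimal sketch (not from the reading; the toy tibble below is made up), the two verbs are inverses of one another:
```
## A made-up wide table: pivot_longer() stacks the x_* columns into rows,
## and pivot_wider() undoes that operation
df_wide <- tibble(id = c("a", "b"), x_1 = c(1, 2), x_2 = c(3, 4))
df_long <-
  df_wide %>%
  pivot_longer(cols = c(x_1, x_2), names_to = "key", values_to = "value")
df_long %>%
  pivot_wider(names_from = "key", values_from = "value") # back to the wide shape
```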
### 23\.2\.1 Pivot longer
First, let’s see how `pivot_longer` works on the `cases` data. Run the following
code chunk:
```
## NOTE: No need to edit; execute
cases %>%
pivot_longer(
names_to = "Year",
values_to = "n",
cols = c(`2011`, `2012`, `2013`)
)
```
```
## # A tibble: 9 × 3
## Country Year n
## <chr> <chr> <dbl>
## 1 FR 2011 7000
## 2 FR 2012 6900
## 3 FR 2013 7000
## 4 DE 2011 5800
## 5 DE 2012 6000
## 6 DE 2013 6200
## 7 US 2011 15000
## 8 US 2012 14000
## 9 US 2013 13000
```
Now these data are tidy! The variable `Year` is now the name of a column, and
its values appear in the cells.
Let’s break down the key inputs to `pivot_longer`:
* `names_to` is what we’re going to call the new column whose values will be the original column names
* `values_to` is what we’re going to call the new column that will hold the values associated with the original columns
* `cols` is the set of columns in the original dataset that we’re going to modify. This takes the same inputs as a call to `select`, so we can use helpers like `starts_with(), ends_with(), contains()`, etc., or a list of column names wrapped in `c()` (a few equivalent specifications are sketched after this list)
+ Note that in our case, we had to enclose each column name in backticks so that tidyselect does not interpret the bare numbers as column positions (rather than column names)
+ For more details on selecting variables, see the [selection language](https://tidyselect.r-lib.org/reference/language.html) page
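To make the `cols` argument concrete, here is a small sketch (not part of the original primer) of a few equivalent ways to specify the columns for the `cases` data; all three should produce the same long tibble.
```
## Hypothetical sketch: equivalent `cols` specifications for `cases`
## 1. An explicit list of (backtick-quoted) column names
cases %>% pivot_longer(names_to = "Year", values_to = "n", cols = c(`2011`, `2012`, `2013`))
## 2. A tidyselect helper
cases %>% pivot_longer(names_to = "Year", values_to = "n", cols = starts_with("20"))
## 3. Everything except `Country`
cases %>% pivot_longer(names_to = "Year", values_to = "n", cols = -Country)
```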
However, there’s a problem with the `Year` column:
```
## NOTE: No need to edit; execute
cases %>%
pivot_longer(
names_to = "Year",
values_to = "n",
c(`2011`, `2012`, `2013`)
) %>%
summarize(Year = mean(Year))
```
```
## Warning in mean.default(Year): argument is not numeric or logical: returning NA
```
```
## # A tibble: 1 × 1
## Year
## <dbl>
## 1 NA
```
The summary failed! That’s because the `Year` column is full of strings, rather
than integers. We can fix this via mutation:
```
## NOTE: No need to edit; execute
cases %>%
pivot_longer(
names_to = "Year",
values_to = "n",
c(`2011`, `2012`, `2013`)
) %>%
mutate(Year = as.integer(Year))
```
```
## # A tibble: 9 × 3
## Country Year n
## <chr> <int> <dbl>
## 1 FR 2011 7000
## 2 FR 2012 6900
## 3 FR 2013 7000
## 4 DE 2011 5800
## 5 DE 2012 6000
## 6 DE 2013 6200
## 7 US 2011 15000
## 8 US 2012 14000
## 9 US 2013 13000
```
Now the data are tidy and of the proper type.
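As an aside (a sketch, not required by the exercise), newer versions of tidyr can do this type conversion inside the pivot itself via `names_transform`, which should give the same result as the pivot-then-mutate above:
```
## Hypothetical sketch: convert Year during the pivot (same `cases` data as above)
cases %>%
  pivot_longer(
    names_to = "Year",
    values_to = "n",
    names_transform = list(Year = as.integer),
    cols = c(`2011`, `2012`, `2013`)
  )
```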
Let’s look at another dataset (a hand-entered copy of Anscombe’s quartet):
```
## NOTE: No need to edit; execute
ansc <-
tribble(
~`x-1`, ~`x-2`, ~`y-1`, ~`y-2`,
10, 10, 8.04, 9.14,
8, 8, 6.95, 8.14,
13, 13, 7.58, 8.74,
9, 9, 8.81, 8.77,
11, 11, 8.33, 9.26,
14, 14, 9.96, 8.10,
6, 6, 7.24, 6.13,
4, 4, 4.26, 3.10,
12, 12, 10.84, 9.13,
7, 7, 4.82, 7.26,
5, 5, 5.68, 4.74
)
ansc
```
```
## # A tibble: 11 × 4
## `x-1` `x-2` `y-1` `y-2`
## <dbl> <dbl> <dbl> <dbl>
## 1 10 10 8.04 9.14
## 2 8 8 6.95 8.14
## 3 13 13 7.58 8.74
## 4 9 9 8.81 8.77
## 5 11 11 8.33 9.26
## 6 14 14 9.96 8.1
## 7 6 6 7.24 6.13
## 8 4 4 4.26 3.1
## 9 12 12 10.8 9.13
## 10 7 7 4.82 7.26
## 11 5 5 5.68 4.74
```
This dataset is too wide; the digit after each `x` or `y` denotes a different
dataset. The case is tricky to pivot though: We need to separate the trailing
digits while preserving the `x, y` column names. We can use the special “.value”
entry in `names_to` in order to handle this:
```
## NOTE: No need to edit; execute
ansc %>%
pivot_longer(
names_to = c(".value", "set"),
names_sep = "-",
cols = everything()
)
```
```
## # A tibble: 22 × 3
## set x y
## <chr> <dbl> <dbl>
## 1 1 10 8.04
## 2 2 10 9.14
## 3 1 8 6.95
## 4 2 8 8.14
## 5 1 13 7.58
## 6 2 13 8.74
## 7 1 9 8.81
## 8 2 9 8.77
## 9 1 11 8.33
## 10 2 11 9.26
## # … with 12 more rows
```
Note that:
* With `.value` in `names_to`, we do *not* provide the `values_to` column names. We are instead signaling that the value names come from the column names
* `everything()` is a convenient way to select all columns
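For reference, `names_pattern` is a regular-expression alternative to `names_sep`; the following sketch (not part of the original example) should split the `ansc` names the same way, capturing the letter into `.value` and the digit into `set`:
```
## Hypothetical sketch: regex-based splitting of names like "x-1" (same `ansc` data as above)
ansc %>%
  pivot_longer(
    names_to = c(".value", "set"),
    names_pattern = "(.)-(.)",
    cols = everything()
  )
```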
Let’s look at one more use of `pivot_longer` on the `alloys` dataset.
```
## NOTE: No need to edit; execute
alloys %>%
pivot_longer(
names_to = c("var", "angle"),
names_sep = "_",
values_to = "val",
cols = c(-thick, -rep)
)
```
```
## # A tibble: 16 × 5
## thick rep var angle val
## <dbl> <dbl> <chr> <chr> <dbl>
## 1 0.022 1 E 00 10600
## 2 0.022 1 mu 00 0.321
## 3 0.022 1 E 45 10700
## 4 0.022 1 mu 45 0.329
## 5 0.022 2 E 00 10600
## 6 0.022 2 mu 00 0.323
## 7 0.022 2 E 45 10500
## 8 0.022 2 mu 45 0.331
## 9 0.032 1 E 00 10400
## 10 0.032 1 mu 00 0.329
## 11 0.032 1 E 45 10400
## 12 0.032 1 mu 45 0.318
## 13 0.032 2 E 00 10300
## 14 0.032 2 mu 00 0.319
## 15 0.032 2 E 45 10500
## 16 0.032 2 mu 45 0.326
```
Note a few differences from the call of `pivot_longer` on the `cases` data:
* here `names_to` contains *two* names; this is to deal with the two components of the merged column names `E_00, mu_00, E_45,` etc.
* `names_sep` allows us to specify a character that separates the components of the merged column names. In our case, the column names are merged with an underscore `_`
* We use the `-column` syntax with `cols` to signal that we *don’t* want the specified columns. This allows us to exclude `thick, rep`
+ As an alternative, we could have used the more verbose `cols = starts_with("E") | starts_with("mu")`, which means “starts with E” OR “starts with mu”
This looks closer to tidy—we’ve taken care of the merged column names—but
now we have a different problem: The variables `E, mu` are now in cells, rather
than column names! This is an example of a dataset that is *too long*. For this,
we’ll need to use `pivot_wider`.
### 23\.2\.2 Pivot wider
We’ll continue tidying the `alloys` dataset with `pivot_wider`.
```
## NOTE: No need to edit; execute
alloys %>%
pivot_longer(
names_to = c("var", "angle"),
names_sep = "_",
values_to = "val",
starts_with("E") | starts_with("mu")
) %>%
pivot_wider(
names_from = var, # Cell entries to turn into new column names
values_from = val # Values to associate with the new column(s)
)
```
```
## # A tibble: 8 × 5
## thick rep angle E mu
## <dbl> <dbl> <chr> <dbl> <dbl>
## 1 0.022 1 00 10600 0.321
## 2 0.022 1 45 10700 0.329
## 3 0.022 2 00 10600 0.323
## 4 0.022 2 45 10500 0.331
## 5 0.032 1 00 10400 0.329
## 6 0.032 1 45 10400 0.318
## 7 0.032 2 00 10300 0.319
## 8 0.032 2 45 10500 0.326
```
Note the differences between `pivot_longer` and `pivot_wider`:
* Rather than `names_to`, we specify `names_from`; this takes a tidyselect specification. We specify the column(s) of values to turn into new column names
* Rather than `values_to`, we specify `values_from`; this takes a tidyselect specification. We specify the column(s) of values to turn into new values
What we just saw above is a general strategy: If you see merged column names,
you can:
1. First, `pivot_longer` with `names_sep` or `names_pattern` to unmerge the column names.
2. Next, `pivot_wider` to tidy the data.
Both `pivot_longer` and `pivot_wider` have a *lot* of features; see their
documentation for more info.
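As a closing aside (an alternative sketch, not the path the exercises below take), the special `.value` entry can often collapse this two-step strategy into a single `pivot_longer`; for the `alloys` data, something like the following should go straight to the tidy form with `E` and `mu` columns:
```
## Hypothetical sketch: tidy `alloys` in one pivot using ".value"
alloys %>%
  pivot_longer(
    names_to = c(".value", "angle"),
    names_sep = "_",
    cols = c(-thick, -rep)
  )
```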
23\.3 Pivoting: Exercises
-------------------------
To practice using `pivot_longer` and `pivot_wider`, we’re going to work with the
following small dataset:
```
## NOTE: No need to edit; this is setup for the exercises
df_base <-
tribble(
~`X-0`, ~`X-1`, ~key,
1, 9, "A",
2, 8, "B",
3, 7, "C"
)
```
We’re going to play a game: I’m going to modify the data, and your job is to
pivot it back to equal `df_base`.
### 23\.3\.1 **q3** Recover `df_base` from `df_q3` by using a *single* pivot and no other functions.
```
## NOTE: No need to edit; this is setup for the exercise
df_q3 <-
df_base %>%
pivot_longer(
names_to = "id",
names_pattern = "(\\d)",
names_transform = list(id = as.integer),
values_to = "value",
cols = -key
)
df_q3
```
```
## # A tibble: 6 × 3
## key id value
## <chr> <int> <dbl>
## 1 A 0 1
## 2 A 1 9
## 3 B 0 2
## 4 B 1 8
## 5 C 0 3
## 6 C 1 7
```
Undo the modification using a single pivot. Don’t worry about column order.
```
df_q3_res <-
df_q3 %>%
pivot_wider(
names_from = id,
names_prefix = "X-",
values_from = value
)
df_q3_res
```
```
## # A tibble: 3 × 3
## key `X-0` `X-1`
## <chr> <dbl> <dbl>
## 1 A 1 9
## 2 B 2 8
## 3 C 3 7
```
```
all_equal(df_base, df_q3_res) # Checks equality; returns TRUE if equal
```
```
## New names:
## New names:
## • `X-0` -> `X.0`
## • `X-1` -> `X.1`
```
```
## [1] TRUE
```
### 23\.3\.2 **q4** Recover `df_base` from `df_q4` by using a *single* pivot and no other functions.
```
## NOTE: No need to edit; this is setup for the exercise
df_q4 <-
df_base %>%
pivot_wider(
names_from = key,
values_from = `X-0`
)
df_q4
```
```
## # A tibble: 3 × 4
## `X-1` A B C
## <dbl> <dbl> <dbl> <dbl>
## 1 9 1 NA NA
## 2 8 NA 2 NA
## 3 7 NA NA 3
```
Undo the modification using a single pivot. Don’t worry about column order.
*Hint*: You’ll need a way to drop `NA` values in the pivot (without filtering).
Check the documentation for `pivot_longer`.
```
df_q4_res <-
df_q4 %>%
pivot_longer(
names_to = "key",
values_to = "X-0",
values_drop_na = TRUE,
cols = c(A, B, C)
)
df_q4_res
```
```
## # A tibble: 3 × 3
## `X-1` key `X-0`
## <dbl> <chr> <dbl>
## 1 9 A 1
## 2 8 B 2
## 3 7 C 3
```
```
all_equal(df_base, df_q4_res) # Checks equality; returns TRUE if equal
```
```
## New names:
## New names:
## • `X-0` -> `X.0`
## • `X-1` -> `X.1`
```
```
## [1] TRUE
```
### 23\.3\.3 **q5** Recover `df_base` from `df_q5` by using a *single* pivot and no other functions.
```
## NOTE: No need to edit; this is setup for the exercise
df_q5 <-
df_base %>%
pivot_wider(
names_from = key,
values_from = -key
)
df_q5
```
```
## # A tibble: 1 × 6
## `X-0_A` `X-0_B` `X-0_C` `X-1_A` `X-1_B` `X-1_C`
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 2 3 9 8 7
```
Undo the modification using a single pivot. Don’t worry about column order.
*Hint*: For this one, you’ll need to use the special `.value` entry in `names_to`.
```
df_q5_res <-
df_q5 %>%
pivot_longer(
names_to = c(".value", "key"),
names_sep = "_",
cols = everything()
)
df_q5_res
```
```
## # A tibble: 3 × 3
## key `X-0` `X-1`
## <chr> <dbl> <dbl>
## 1 A 1 9
## 2 B 2 8
## 3 C 3 7
```
```
all_equal(df_base, df_q5_res) # Checks equality; returns TRUE if equal
```
```
## New names:
## New names:
## • `X-0` -> `X.0`
## • `X-1` -> `X.1`
```
```
## [1] TRUE
```
### 23\.3\.4 **q6** Make your own!
Using a single pivot on `df_base`, create your own *challenge* dataframe. You will share this with the rest of the class as a puzzle, so make sure to solve your own challenge so you have a solution!
```
## NOTE: No need to edit; this is setup for the exercise
df_q6 <-
df_base %>%
glimpse()
```
```
## Rows: 3
## Columns: 3
## $ `X-0` <dbl> 1, 2, 3
## $ `X-1` <dbl> 9, 8, 7
## $ key <chr> "A", "B", "C"
```
```
df_q6
```
```
## # A tibble: 3 × 3
## `X-0` `X-1` key
## <dbl> <dbl> <chr>
## 1 1 9 A
## 2 2 8 B
## 3 3 7 C
```
Don’t forget to create a solution!
```
df_q6_res <-
df_q6 %>%
glimpse()
```
```
## Rows: 3
## Columns: 3
## $ `X-0` <dbl> 1, 2, 3
## $ `X-1` <dbl> 9, 8, 7
## $ key <chr> "A", "B", "C"
```
```
df_q6_res
```
```
## # A tibble: 3 × 3
## `X-0` `X-1` key
## <dbl> <dbl> <chr>
## 1 1 9 A
## 2 2 8 B
## 3 3 7 C
```
```
all_equal(df_base, df_q6_res) # Checks equality; returns TRUE if equal
```
```
## New names:
## New names:
## • `X-0` -> `X.0`
## • `X-1` -> `X.1`
```
```
## [1] TRUE
```
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/reproducibility-collaborating-with-github.html |
24 Reproducibility: Collaborating with GitHub
=============================================
*Purpose*: Git and GitHub are extremely useful for collaboration, but they take
some getting used to. In this *partner exercise*, you will learn how to
contribute to a partner’s work, and how to merge contributions to your own work.
This will be your first taste of using Git not just as a way to track changes,
but also as a powerful collaboration tool.
*Reading*: [GitHub Forking](https://gist.github.com/Chaser324/ce0505fbed06b947d962) (Optional / Reference)
24\.1 Overview
--------------
At a *high level*, what you and your partner are going to do is create a copy of each others’ public repos, make a change, and request that your partner *merge* those changes into their own repo. Using this workflow, you and your partner will be able to collaborate asynchronously on the same documents, all while maintaining safe version control of your work.
In this case you’ll make *trivial edits* to the **TODO** items below, but in practice you can use an identical process to edit things like software and reports.
24\.2 Partner Team
------------------
(You and your partner will edit the *TODO* items *separately* and *combine*, through the magic of *git*!)
| Title | Person |
| --- | --- |
| *Forking* Partner | **TODO** |
| *Merging* Partner | **TODO** |
*Note*: All instructions below are coded for either the *Forking* partner, the *Merging* partner, or *Both*. Note that you will both do all of these steps, but you will take turns being *Forking* and *Merging*. Pay attention to when these flags change—that indicates where you and your partner need to take different actions.
### 24\.2\.1 **q1** (*Both*) Pick a partner in the class, preferably from your learning team. You must mutually agree to complete this exercise together, which will involve *forking* your partner’s public repo and making a *pull request*.
24\.3 Forking Partner
---------------------
### 24\.3\.1 **q2** (*Forking*) Navigate to your partner’s GitHub repo page, and click the *Fork* button on the top\-right of the page.
*(Figure: Fork button on a repo, near the top\-right of the page)*
Depending on how much you’ve used GitHub already, you may get an additional menu once you click *Fork*. For instance, I’m part of some other organizations, and have the option to fork under those other accounts. *Make sure to fork under your own account*.
*(Figure: Example prompt with multiple organizations)*
Wait for the forking process to finish….
What you’ve just done is create a *copy* of your partner’s repository; you can now edit this copy as your own. This is called a *fork* of a repository. *Clone* your forked repository to your machine to create a local version that you can edit. Remember that you click the “copy to clipboard” button under *Code*:
*(Figure: the copy-to-clipboard button under Code)*
Then use the command below to clone your fork:
```
# You should be able to paste the git@github.com part you copied above
$ git clone git@github.com:YOUR-USERNAME/YOUR-FORKED-REPO
$ cd YOUR-FORKED-REPO
```
### 24\.3\.2 **q3** (*Forking*) Add your partner’s repository as a *remote*. The reading described how to do this from the command line; use the command *within your forked repository*:
```
$ git remote add upstream https://github.com/UPSTREAM-USER/ORIGINAL-PROJECT.git
```
where you should replace `UPSTREAM-USER` with your partner’s GitHub username, and `ORIGINAL-PROJECT` with the name of your partner’s GitHub public repo (the one you forked). What this does is allow you to pull any changes your partner makes in their own repo using the command `git fetch upstream`. We won’t use that command in this exercise, but it’s good to be aware of.
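As a quick sanity check (optional, not part of the exercise), you can list your configured remotes to confirm that both `origin` (your fork) and `upstream` (your partner’s repo) are set:
```
# List all configured remotes and their fetch/push URLs
$ git remote -v
```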
### 24\.3\.3 **q5** (*Forking*) Inspect your partner’s repository (your forked\-and\-cloned version); if they don’t have an `exercises` directory, create a new one. Within that `exercises` directory either open their existing `e-rep-05-collab-assignment.Rmd`, or copy this file into the `exercises` directory. Edit the **TODO** above under *Forking Partner* to be your own name.
### 24\.3\.4 **q6** (*Forking*) Commit your changes, and push them to your forked repository. If you do `$ git push`, you will probably see something like:
```
fatal: The current branch dev_addname has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
```
If you see this, it’s because Git is not willing to make assumptions about *where you want to push*. The `--set-upstream origin master` tells Git to push to `origin` (which should be set to your forked repository if you cloned it in q2 above), while `master` tells Git to push the changes to the `master` branch.
Follow the instructions by running
```
$ git push --set-upstream origin master
```
This will push your commit(s) to your forked repo. Now you have the changes in your *personal* copy of your partner’s work. Our task is now to request that our partner incorporate these changes in their own repository.
### 24\.3\.5 **q7** (*Forking*) Navigate your browser to your forked repository on GitHub. You should see a banner like the following:
*(Figure: pull request banner)*
Click the **Pull request** to start the process of making a pull request (PR). Add a comment justifying the PR, and click **Create pull request**. Your partner will now be able to see your PR. Your *merging* partner will finish the rest of the exercise (though you should still complete the *merging* part on your own repo!).
24\.4 Merging Partner
---------------------
**q8** (*Merging*) Open your public repo’s GitHub page; you should see that your partner has an open pull request. You can see the list of pull requests by clicking on the `Pull requests` tab towards the top of your repo.
*(Figure: List pull requests)*
*Note*: You’ll probably also get an automated email from GitHub once your partner opens a PR.
Your task is to *merge* their pull request. Click on the pull request your partner opened; for instance mine was described as `Adding exercises dir, rep05.Rmd` by `mdelrosa`.
*(Figure: Select PR)*
Hopefully, you will see the message “This branch has no conflicts with the base
branch. Merging can be performed automatically.” This means that the edits your
partner made don’t conflict with any changes you made. Thus Git can
*automagically* combine all the edits. Click `Merge pull request` and `Confirm merge` to proceed.
*Note*: If you don’t see “Merging can be performed automatically,” *please*
reach out to one of the teaching team. We’ll gladly help you out!
*(Figure: Merge PR)*
Once done, the page should change to let you know that you completed the merge,
as shown below:
*(Figure: Merged PR)*
**q8** (*Merging*) Now you will pull the most recent changes into your local copy
of your repo. Navigate to the directory of your repo, and make sure to commit any
stray changes (e.g. in-progress exercises!) before pulling. Then run `git pull`.
```
$ cd /path/to/your/repo
$ git pull
```
This will pull down the latest changes that your partner made.
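If you want to confirm that the merged changes arrived (optional), a quick look at recent history should show your partner’s commit and the merge commit:
```
# Show the last few commits, one line each
$ git log --oneline -n 5
```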
### 24\.4\.1 **q9** (*Merging*) Replace **TODO** above under *Merging partner* to complete the exercise. Commit and push these changes to your repo.
24\.5 Forking Partner
---------------------
### 24\.5\.1 **q10** (*Forking*) Now your partner has successfully merged your edits, but in
order to keep collaborating, you’ll need to update your fork based on your
partner’s current (and future!) edits.
Use the following command to *fetch* your partner’s edits.
```
$ git fetch upstream
```
Note that this hasn’t actually changed your local repo; to see this check out
your *branches*.
```
$ git branch -va
```
Note that I have my own master branch, and also have my partner’s
`upstream/master` that has the merged pull request from q8\.
*(Figure: Branches for forked repo)*
In order to keep your forked copy of your partner’s repo current, run the following commands.
```
# Make sure you're on the master branch
$ git checkout master
# merge your master with your partner's upstream/master
$ git merge upstream/master
```
If you do this *after* your partner has changed and pushed the assignment in q9,
you should see both your and your partner’s name in the assignment document.
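One optional follow-up (not required by the exercise): after this merge your local `master` is ahead of your fork on GitHub, so pushing keeps the fork current as well:
```
# Update your fork on GitHub with the merged history
$ git push origin master
```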
*Aside*: That was *a lot* of work just to get two names in one document. But the
advantage in this process is that *the document was rigorously controlled*
through every stage of this process, and you now have a *fully reversible and
searchable history* of the document through all the changes. Git takes a lot of
work to learn, but is *extraordinarily powerful* for collaboration.
24\.1 Overview
--------------
At a *high level*, what you and your partner are going to do is create a copy of each others’ public repos, make a change, and request that your partner *merge* those changes into their own repo. Using this workflow, you and your partner will be able to collaborate asynchronously on the same documents, all while maintaining safe version control of your work.
In this case you’ll make *trivial edits* to the **TODO** items below, but in practice you can use an identical process to edit things like software and reports.
24\.2 Partner Team
------------------
(You and your partner will edit the *TODO* items *separately* and *combine*, through the magic of *git*!)
| Title | Person |
| --- | --- |
| *Forking* Partner | **TODO** |
| *Merging* Partner | **TODO** |
*Note*: All instructions below are coded for either the *Forking* partner, the *Merging* partner, or *Both*. Note that you will both do all of these steps, but you will take turns being *Forking* and *Merging*. Pay attention to when these flags change—that indicates where you and your partner need to take different actions.
### 24\.2\.1 **q1** (*Both*) Pick a partner in the class, preferably from your learning team. You must mutually agree to complete this exercise together, which will involve *forking* your partner’s public repo and making a *pull request*.
### 24\.2\.1 **q1** (*Both*) Pick a partner in the class, preferably from your learning team. You must mutually agree to complete this exercise together, which will involve *forking* your partner’s public repo and making a *pull request*.
24\.3 Forking Partner
---------------------
### 24\.3\.1 **q2** (*Forking*) Navigate to your partner’s GitHub repo page, and click the *Fork* button on the top\-right of the page.
Fork button on a repo, near the top\-right of the page
Depending on how much you’ve used GitHub already, you may get an additional menu once you click *Fork*. For instance, I’m part of some other organizations, and have the option to fork under those other accounts. *Make sure to fork under your own account*.
Example prompt with multiple organizations
Wait for the forking process to finish….
Wait
What you’ve just done is create a *copy* of your partner’s repository; you can now edit this copy as your own. This is called a *fork* of a repository. *Clone* your forked repository to your machine to create a local version that you can edit. Remember that you click the “copy to clipboard” button under *Code*:
Clone
Then use the command below to clone your fork:
```
# You should be able to paste the git@github.com part you copied above
$ git clone git@github.com:YOUR-USERNAME/YOUR-FORKED-REPO
$ cd YOUR-FORKED-REPO
```
### 24\.3\.2 **q3** (*Forking*) Add your partner’s repository as a *remote*. The reading described how to do this from the command line; use the command *within your forked repository*
```
$ git remote add upstream https://github.com/UPSTREAM-USER/ORIGINAL-PROJECT.git
```
where you should replace `UPSTREAM-USER` with your partner’s GitHub username, and `ORIGINAL-PROJECT` with the name of your partner’s GitHub public repo (the one you forked). What this does is allow you to pull any changes your partner makes in their own repo using the command `git fetch upstream`. We won’t use that command in this exercise, but it’s good to be aware of.
### 24\.3\.3 **q5** (*Forking*) Inspect your partner’s repository (your forked\-and\-cloned version); if they don’t have an `exercises` directory, create a new one. Within that `exercise` directory either open their existing `e-rep-05-collab-assignment.Rmd`, or copy this file into the `exercises` directory. Edit the **TODO** above under *Forking Partner* to be your own name.
### 24\.3\.4 **q6** (*Forking*) Commit your changes, and push them to your forked repository. If you do `$ git push`, you will probably see something like:
```
fatal: The current branch dev_addname has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
```
If you see this, it’s because Git is not willing to make assumptions about *where you want to push*. The `--set-upstream origin master` tells Git to push to `origin` (which should be set to your forked repository if you cloned it in q2 above), while `master` tells Git to push the changes to the `master` branch.
Follow the instructions by running
```
$ git push --set-upstream origin master
```
This will push your commit(s) to your forked repo. Now you have the changes in your *personal* copy of your partner’s work. Our task is now to request that our partner incorporate these changes in their own repository.
### 24\.3\.5 **q7** (*Forking*) Navigate your browser to your forked repository on GitHub. You should see a banner like the following:
Pull request
Click the **Pull request** to start the process of making a pull request (PR). Add a comment justifying the PR, and click **Create pull request**. Your partner will now be able to see your PR. Your *merging* partner will finish the rest of the exercise (though you should still complete the *merging* part on your own repo!).
### 24\.3\.1 **q2** (*Forking*) Navigate to your partner’s GitHub repo page, and click the *Fork* button on the top\-right of the page.
Fork button on a repo, near the top\-right of the page
Depending on how much you’ve used GitHub already, you may get an additional menu once you click *Fork*. For instance, I’m part of some other organizations, and have the option to fork under those other accounts. *Make sure to fork under your own account*.
Example prompt with multiple organizations
Wait for the forking process to finish….
Wait
What you’ve just done is create a *copy* of your partner’s repository; you can now edit this copy as your own. This is called a *fork* of a repository. *Clone* your forked repository to your machine to create a local version that you can edit. Remember that you click the “copy to clipboard” button under *Code*:
Clone
Then use the command below to clone your fork:
```
# You should be able to paste the git@github.com part you copied above
$ git clone git@github.com:YOUR-USERNAME/YOUR-FORKED-REPO
$ cd YOUR-FORKED-REPO
```
### 24\.3\.2 **q3** (*Forking*) Add your partner’s repository as a *remote*. The reading described how to do this from the command line; use the command *within your forked repository*
```
$ git remote add upstream https://github.com/UPSTREAM-USER/ORIGINAL-PROJECT.git
```
where you should replace `UPSTREAM-USER` with your partner’s GitHub username, and `ORIGINAL-PROJECT` with the name of your partner’s GitHub public repo (the one you forked). What this does is allow you to pull any changes your partner makes in their own repo using the command `git fetch upstream`. We won’t use that command in this exercise, but it’s good to be aware of.
### 24\.3\.3 **q5** (*Forking*) Inspect your partner’s repository (your forked\-and\-cloned version); if they don’t have an `exercises` directory, create a new one. Within that `exercise` directory either open their existing `e-rep-05-collab-assignment.Rmd`, or copy this file into the `exercises` directory. Edit the **TODO** above under *Forking Partner* to be your own name.
### 24\.3\.4 **q6** (*Forking*) Commit your changes, and push them to your forked repository. If you do `$ git push`, you will probably see something like:
```
fatal: The current branch dev_addname has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
```
If you see this, it’s because Git is not willing to make assumptions about *where you want to push*. The `--set-upstream origin master` tells Git to push to `origin` (which should be set to your forked repository if you cloned it in q2 above), while `master` tells Git to push the changes to the `master` branch.
Follow the instructions by running
```
$ git push --set-upstream origin master
```
This will push your commit(s) to your forked repo. Now you have the changes in your *personal* copy of your partner’s work. Our task is now to request that our partner incorporate these changes in their own repository.
### 24\.3\.5 **q7** (*Forking*) Navigate your browser to your forked repository on GitHub. You should see a banner like the following:
Pull request
Click the **Pull request** to start the process of making a pull request (PR). Add a comment justifying the PR, and click **Create pull request**. Your partner will now be able to see your PR. Your *merging* partner will finish the rest of the exercise (though you should still complete the *merging* part on your own repo!).
24\.4 Merging Partner
---------------------
**q8** (*Merging*) Open your public repo GitHub page; you should see that your partner has an open pull request.You can see the list of pull requests by clicking on the `Pull requests` tab towards the top of your repo.
List pull requests
*Note*: You’ll probably also get an automated email from GitHub once your partner opens a PR.
Your task is to *merge* their pull request. Click on the merge request your partner opened; for instance mine was described as `Adding exercises dir, rep05.Rmd` by `mdelrosa`.
Select PR
Hopefully, you will see the message “This branch has no conflicts with the base
branch. Merging can be performed automatically.” This means that the edits your
partner made don’t conflict with any changes you made. Thus Git can
*automagically* combine all the edits. Click `Merge pull request` and `Confirm merge` to proceed.
*Note*: If you don’t see “Merging can be performed automatically,” *please*
reach out to one of the teaching team. We’ll gladly help you out!
Merge PR
Once done, the page should change to let you know that you completed the merge,
as shown below:
Merged PR
**q8** (*Merging*) Now you will pull the most recent changes to your local copy
of your repo. before pulling. Navigate to the directory of your repo. Make sure
to commit any stray changes (e.g. in\-progress exercises!). Then run `git pull`.
```
$ cd /path/to/your/repo
$ git pull
```
This will pull down the latest changes that your partner made.
### 24\.4\.1 **q9** (*Merging*) Replace **TODO** above under *Merging partner* to complete the exercise. Commit and push these changes to your repo.
### 24\.4\.1 **q9** (*Merging*) Replace **TODO** above under *Merging partner* to complete the exercise. Commit and push these changes to your repo.
24\.5 Forking Partner
---------------------
### 24\.5\.1 **q10** (*Forking*) Now your partner has successfully merged your edits, but in
order to keep collaborating, you’ll need to update your fork based on your
partner’s current (and future!) edits.
Use the following command to *fetch* your partner’s edits.
```
$ git fetch upstream
```
Note that this hasn’t actually changed your local repo; to see this check out
your *branches*.
```
$ git branch -va
```
Note that I have my own master branch, and also have my partner’s
`upstream/master` that has the merged pull request from q8\.
Branches for forked repo
In order to keep your forked copy of your partner’s repo current, run the following commands.
```
# Make sure you're on the master branch
$ git checkout master
# merge your master with your partner's upstream/master
$ git merge upstream/master
```
If you do this *after* your partner has changed and pushed the assignment in q9,
you should see both your and your partner’s name in the assignment document.
*Aside*: That was *a lot* of work just to get two names in one document. But the
advantage in this process is that *the document was rigorously controlled*
through every stage of this process, and you now have a *fully reversible and
searchable history* of the document through all the changes. Git takes a lot of
work to learn, but is *extraordinarily powerful* for collaboration.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/vis-boxplots-and-counts.html |
25 Vis: Boxplots and Counts
===========================
*Purpose*: *Boxplots* are a key tool for EDA. Like histograms, boxplots give us a sense of “shape” for a distribution. However, a boxplot is a *careful summary* of shape. This helps us pick out key features of a distribution, and enables easier comparison of different distributions.
*Reading*: [Boxplots and Counts](https://rstudio.cloud/learn/primers/3.4)
*Topics*: (All topics)
*Reading Time*: \~20 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
A *subtle point* from the primer is that we can use `dplyr` to generate new
facts about our data, then use `ggplot2` to visualize those facts. We’ll
practice this idea in a number of ways.
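For instance, here is a minimal sketch of that workflow (using the `diamonds` dataset that ships with the tidyverse): compute a new fact with `dplyr`, then visualize that fact with `ggplot2`. The summary column name `price_median` is just an illustrative choice.

```
## NOTE: Minimal sketch of the dplyr-then-ggplot2 workflow
diamonds %>%
  group_by(cut) %>%
  summarize(price_median = median(price)) %>%
  ggplot(aes(cut, price_median)) +
  geom_col()
```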
### 25\.0\.1 **q1** Use a `cut_*` verb to create a categorical variable out of `carat`. Tweak
the settings in your cut and document your observations.
*Hint*: Recall that we learned about `cut_interval, cut_number, cut_width`. Take
your pick!
```
diamonds %>%
mutate(carat_cut = cut_width(carat, width = 0.5, boundary = 0)) %>%
ggplot(aes(x = carat_cut, y = price)) +
geom_boxplot()
```
**Observations**
\- Price tends to increase with carat
\- Median price rises dramatically across carat \[0, 2]
\- Median price is roughly constant across carat \[2, 4\.5]
\- Across carat \[2, 4\.5], the whiskers have essentially the same max price
\- The IQR is quite small at low carat, but increases with carat; the prices become more variable at higher carat
### 25\.0\.2 **q2** The following code visualizes the count of diamonds of *all* carats according to their cut and color. Modify the code to consider *only* diamonds with `carat >= 2`. Does the most common group of cut and color change?
```
## NOTE: No need to modify; run and inspect
diamonds %>%
count(cut, color) %>%
ggplot(aes(cut, color, fill = n)) +
geom_tile()
```
Modify the following code:
```
diamonds %>%
filter(carat >= 2) %>%
count(cut, color) %>%
ggplot(aes(cut, color, fill = n)) +
geom_tile()
```
**Observations**:
\- Yes, it used to be `G, Ideal`, but is now `I, Premium`
### 25\.0\.3 **q3** The following plot has overlapping x\-axis labels. Use a verb from the reading to `flip` the coordinates and improve readability.
```
mpg %>%
ggplot(aes(manufacturer, hwy)) +
geom_boxplot() +
coord_flip()
```
This is a simple—but important—trick to remember when visualizing data with many categories.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/stats-probability.html |
26 Stats: Probability
=====================
*Purpose*: *Probability* is a quantitative measure of uncertainty. It is intimately tied to how we use *distributions* to model data and to how we express uncertainty. In order to do all these useful things, we’ll need to learn some basics about probability.
*Reading*: (None; this exercise *is* the reading.)
*Topics*: Frequency, probability, probability density function (PDF), cumulative distribution function (CDF)
“Probability is the most important concept in modern science, especially as
nobody has the slightest notion what it means.” — Bertrand Russell
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
26\.1 Intuitive Definition
--------------------------
In the previous stats exercise, we learned about *densities*. In this exercise, we’re going to learn a more formal definition using probability. To introduce the idea of probability, let’s first think about *frequency*.
Imagine we have some set of events \\(X\\), and we’re considering some *particular* subset of cases that we’re interested in \\(A\\). For instance, imagine we’re rolling a 6\-sided die, and we’re interested in cases when the number rolled is even. Then the subset of cases is \\(A \= \\{2, 4, 6\\}\\), and an *example run* of rolls might be \\(X \= \\{3, 5, 5, 2, 6, 1, 3, 3\\}\\).
The *frequency* with which events in \\(A\\) occurred for a run \\(X\\) is
\\\[F\_X(A) \\equiv \\sum \\frac{\\text{Cases in set }A}{\\text{Total Cases in }X}.\\]
For the example above, we have
\\\[\\begin{aligned}
A \&\= \\{2, 4, 6\\}, \\\\
X \&\= \\{3, 5, 5, \\mathbf{2}, \\mathbf{6}, 1, 3, 3\\}, \\\\
F\_X(A) \&\= \\frac{2}{8} \= 1/4
\\end{aligned}\\]
Note that this definition of frequency considers both a *set* \\(A\\) and a sample \\(X\\). We need to define both \\(A, X\\) in order to compute a frequency.
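Here is a minimal sketch of the die\-roll example above in base R; the names `X` and `A` simply mirror the notation in the text.

```
## Minimal sketch: frequency for the die-roll example
X <- c(3, 5, 5, 2, 6, 1, 3, 3) # the example run of rolls
A <- c(2, 4, 6)                # the set of even rolls
mean(X %in% A)                 # F_X(A) = 2 / 8 = 0.25
```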
As an example, let’s consider the set \\(A\\) to be the set of \\(Z\\) values such that \\(\-1\.96 \<\= Z \<\= \+1\.96\\): We denote this set as \\(A \= {Z \| \-1\.96 \<\= Z \<\= \+1\.96}\\). Let’s also let \\(Z\\) be a sample from a standard (`mean = 0, sd = 1`) normal. The following figure illustrates the set \\(A\\) against a standard normal density.
```
## NOTE: No need to change this!
tibble(z = seq(-3, +3, length.out = 500)) %>%
mutate(d = dnorm(z)) %>%
ggplot(aes(z, d)) +
geom_ribbon(
data = . %>% filter(-1.96 <= z, z <= +1.96),
aes(ymin = 0, ymax = d, fill = "Set A"),
alpha = 1 / 3
) +
geom_line() +
scale_fill_discrete(name = "")
```
Note that a frequency is defined *not* in terms of a density, but rather in terms of a sample \\(X\\). The following example code draws a sample from a standard normal, and computes the frequency with which values in the sample \\(X\\) lie in the set \\(A\\).
```
## NOTE: No need to change this!
df_z <- tibble(z = rnorm(100))
df_z %>%
mutate(in_A = (-1.96 <= z) & (z <= +1.96)) %>%
summarize(count_total = n(), count_A = sum(in_A), fr = mean(in_A))
```
```
## # A tibble: 1 × 3
## count_total count_A fr
## <int> <int> <dbl>
## 1 100 98 0.98
```
Now it’s your turn!
### 26\.1\.1 **q1** Let \\(A \= {Z \| Z \<\= 0}\\). Complete the following code to compute `count_total`, `count_A`, and `fr`. Before executing the code, **make a prediction** about the value of `fr`. Did the computed `fr` value match your prediction?
```
## NOTE: No need to change this!
df_z <- tibble(z = rnorm(100))
df_z %>%
mutate(in_A = (z <= 0)) %>%
summarize(count_total = n(), count_A = sum(in_A), fr = mean(in_A))
```
```
## # A tibble: 1 × 3
## count_total count_A fr
## <int> <int> <dbl>
## 1 100 52 0.52
```
**Observations**:
* I predicted `fr = 0.5`
* The value of `fr` I computed was `0.52`, not quite the same. This is due to randomness in the calculation.
The following graph visualizes the set \\(A \= {z \| z \<\= 0}\\) against a standard normal density.
```
## NOTE: No need to change this!
tibble(z = seq(-3, +3, length.out = 500)) %>%
mutate(d = dnorm(z)) %>%
ggplot(aes(z, d)) +
geom_ribbon(
data = . %>% filter(z <= 0),
aes(ymin = 0, ymax = d, fill = "Set A"),
alpha = 1 / 3
) +
geom_line() +
scale_fill_discrete(name = "")
```
Based on this visual, we might expect `fr = 0.5`. This was (likely) not the value that our *frequency* took, but it is the precise value of the *probability* that \\(Z \<\= 0\\).
Remember in the previous stats exercise that when running `rnorm` with larger values of `n` we obtained histograms closer to the normal density? Something very similar happens with frequency and probability:
```
## NOTE: No need to change this!
map_dfr(
c(10, 100, 1000, 1e4),
function(n_samples) {
tibble(
z = rnorm(n = n_samples),
n = n_samples
) %>%
mutate(in_A = (z <= 0)) %>%
summarize(count_total = n(), count_A = sum(in_A), fr = mean(in_A))
}
)
```
```
## # A tibble: 4 × 3
## count_total count_A fr
## <int> <int> <dbl>
## 1 10 4 0.4
## 2 100 39 0.39
## 3 1000 510 0.51
## 4 10000 4955 0.496
```
This is because *probability* is actually defined\[1] in terms of the limit
\\\[\\mathbb{P}\_{\\rho}\[X \\in A] \= \\lim\_{n \\to \\infty} F\_{X\_n}(A),\\]
where \\(X\_n\\) is a sample of size \\(n\\) drawn from the density \\(X\_n \\sim \\rho\\).\[2]
### 26\.1\.2 **q2**: Modify the code below to consider the set \\(A \= {z \| \-1\.96 \<\= z \<\= \+1\.96}\\). What value does `fr` appear to be limiting towards?
```
## TASK: Modify the code below
map_dfr(
c(10, 100, 1000, 1e4),
function(n_samples) {
tibble(
z = rnorm(n = n_samples),
n = n_samples
) %>%
mutate(in_A = (-1.96 <= z) & (z <= +1.96)) %>%
summarize(count_total = n(), count_A = sum(in_A), fr = mean(in_A))
}
)
```
```
## # A tibble: 4 × 3
## count_total count_A fr
## <int> <int> <dbl>
## 1 10 9 0.9
## 2 100 93 0.93
## 3 1000 949 0.949
## 4 10000 9467 0.947
```
**Observations**:
* `fr` is limiting towards `~0.95`
Now that we know what probability is, let’s connect the idea to *distributions*.
26\.2 Relation to Distributions
-------------------------------
A continuous *distribution* models probability in terms of an integral
\\\[\\mathbb{P}\_{\\rho}\[X \\in A] \= \\int\_{\-\\infty}^{\+\\infty} \\mathcal{I}\_A(x)\\rho(x)\\,dx \= \\mathbb{E}\_{\\rho}\[\\mathcal{I}\_A(X)]\\]
where \\(\\mathcal{I}\_A(x)\\) is the *indicator function*: a function that takes the value \\(1\\) when \\(x \\in A\\), and the value \\(0\\) when \\(x \\not\\in A\\). The notation \\(\\mathbb{E}\[Y]\\) denotes taking the mean of a random variable; this is also called the *expectation* of a random variable. The function \\(\\rho(x)\\) is the *density* of the random variable—we’ll return to this idea in a bit.
This definition gives us a geometric way to think about probability; the distribution definition means probability is the area under the density curve within the set \\(A\\).
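As a minimal sketch of this idea, we can compute the area directly with base R’s `integrate()`; the value should match the limiting frequency from **q2**.

```
## Minimal sketch: probability as the area under the density over A
integrate(dnorm, lower = -1.96, upper = +1.96) # ~0.95
```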
Before concluding this reading, let’s talk about two sticking points about distributions:
26\.3 Sets vs Points
--------------------
Note that for continuous distributions, the probability of a single point is *zero*. First, let’s gather some empirical evidence.
### 26\.3\.1 **q3** Modify the code below to consider the set \\(A \= {z \| z \= 2}\\).
*Hint*: Remember the difference between `=` and `==`!
```
## TASK: Modify the code below
map_dfr(
c(10, 100, 1000, 1e4),
function(n_samples) {
tibble(
z = rnorm(n = n_samples),
n = n_samples
) %>%
mutate(in_A = (z == 2)) %>%
summarize(count_total = n(), count_A = sum(in_A), fr = mean(in_A))
}
)
```
```
## # A tibble: 4 × 3
## count_total count_A fr
## <int> <int> <dbl>
## 1 10 0 0
## 2 100 0 0
## 3 1000 0 0
## 4 10000 0 0
```
**Observations**:
* `fr` is consistently zero
We can also understand this phenomenon in terms of areas; the following graph visualizes the set \\(A \= {z \| z \= 2}\\) against a standard normal.
```
## NOTE: No need to change this!
tibble(z = seq(-3, +3, length.out = 500)) %>%
mutate(d = dnorm(z)) %>%
ggplot(aes(z, d)) +
geom_segment(
data = tibble(z = 2, d = dnorm(2)),
mapping = aes(z, 0, xend = 2, yend = d, color = "Set A")
) +
geom_line() +
scale_color_discrete(name = "")
```
Note that this set \\(A\\) has nonzero height but zero width. Zero width corresponds to zero area, and thus zero probability.
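A minimal sketch with `integrate()` confirms this: an interval of zero width has zero area.

```
## Minimal sketch: a single point has zero width, hence zero area
integrate(dnorm, lower = 2, upper = 2)$value # returns 0
```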
*This is weird*. If we’re using a distribution to model something physical, say a material property, this means that any specific material property has zero probability of occurring. But in practice, some specific value will be realized! If you’d like to learn more, take a look at Note \[3].
26\.4 Two expressions of the same information
---------------------------------------------
There is a bit more terminology associated with distributions. The \\(\\rho(x)\\) we considered above is called a [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) (PDF); it is the function we integrate in order to obtain a probability. In R, the PDF has the `d` prefix, for instance `dnorm`. For example, the standard normal has the following PDF.
```
tibble(z = seq(-3, +3, length.out = 1000)) %>%
mutate(d = dnorm(z)) %>%
ggplot(aes(z, d)) +
geom_line() +
labs(
x = "z",
y = "Probability Density",
title = "Probability Density Function"
)
```
There is also a [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is related to the PDF \\(\\rho(x)\\) via
\\\[R(x) \= \\int\_{\-\\infty}^x \\rho(s) ds.\\]
In R, the CDF has the prefix `p`, such as `pnorm`. For example, the standard normal has the following CDF.
```
tibble(z = seq(-3, +3, length.out = 1000)) %>%
mutate(p = pnorm(z)) %>%
ggplot(aes(z, p)) +
geom_line() +
labs(
x = "z",
y = "Probability",
title = "Cumulative Distribution Function"
)
```
Note that, by definition, the CDF gives the probability over the set \\(A(x) \= {x' \| x' \<\= x}\\) (this is just all the values less than the value we’re considering \\(x\\)). Thus the CDF returns a probability (which explains the `p` prefix for R functions).
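As an aside (a minimal sketch, not required for the exercises below), R provides a whole family of prefixes for each distribution, not just `d` and `p`:

```
## Minimal sketch: the R prefix family for the normal distribution
dnorm(0)     # d: density (PDF) at z = 0
pnorm(1)     # p: CDF, i.e. P(Z <= 1)
qnorm(0.975) # q: quantile function, the inverse of pnorm (~ +1.96)
rnorm(3)     # r: random draws
```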
### 26\.4\.1 **q4** Use `pnorm` to compute the probability that `Z ~ norm(mean = 0, sd = 1)` is less than or equal to zero. Compare this against your frequency prediction from **q1**.
```
## TASK: Compute the probability that Z <= 0, assign to p0
p0 <- pnorm(q = 0)
p0
```
```
## [1] 0.5
```
Use the following code to check your answer.
```
## NOTE: No need to change this
assertthat::assert_that(p0 == 0.5)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
**Observations**:
* I predicted `fr = 0.5`, which matches `p0`.
Note that when our set \\(A\\) is an *interval*, we can use the CDF to express the associated probability:
\\\[\\mathbb{P}\_{\\rho}\[a \<\= X \<\= b] \= \\int\_a^b \\rho(x) dx \= \\int\_{\-\\infty}^b \\rho(x) dx \- \\int\_{\-\\infty}^a \\rho(x) dx.\\]
### 26\.4\.2 **q5** Using the identity above, use `pnorm` to compute the probability that \\(\-1\.96 \<\= Z \<\= \+1\.96\\) with `Z ~ norm(mean = 0, sd = 1)`.
```
## TASK: Compute the probability that -1.96 <= Z <= +1.96, assign to pI
pI <- pnorm(q = +1.96) - pnorm(q = -1.96)
pI
```
```
## [1] 0.9500042
```
Use the following code to check your answer.
```
## NOTE: No need to change this
assertthat::assert_that(abs(pI - 0.95) < 1e-3)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
26\.5 Notes
-----------
\[1] This is where things get confusing and potentially controversial. This “limit of frequencies” definition of probability is called “Frequentist probability”, to distinguish it from a “Bayesian probability”. The distinction is meaningful but slippery. We won’t cover this distinction in this course. If you’re curious to learn more, my favorite video on Bayes vs Frequentist is by [Kristin Lennox](https://www.youtube.com/watch?v=eDMGDhyDxuY&list=WL&index=131&t=0s).
Note that even Bayesians use this Frequentist definition of probability, for instance in [Markov Chain Monte Carlo](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo), the workhorse of Bayesian computation.
\[2] Technically these samples must be drawn *independently and identically
distributed*, shortened to iid.
\[3] [3Blue1Brown](https://www.youtube.com/watch?v=8idr1WZ1A7Q) has a very nice video about continuous probabilities.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-separate-and-unite-columns.html |
27 Data: Separate and Unite Columns
===================================
*Purpose*: Data is easiest to use when it is *tidy*. In fact, the tidyverse
(including ggplot, dplyr, etc.) is specifically designed to use tidy data. Last
time we learned how to pivot data, but data can be untidy in other ways.
Pivoting helped us when data were locked up in the *column headers*. This time,
we’ll learn how to use *separate* and *unite* to deal with *cell values* that
are untidy.
*Reading*: [Separate and Unite Columns](https://rstudio.cloud/learn/primers/4.2)
*Topics*: Welcome, separate(), unite(), Case study
*Reading Time*: \~30 minutes
*Notes*:
\- I had trouble running the Case study in my browser. Note that the `who` dataset is loaded by the `tidyverse`. You can run the Case study locally if you need to!
\- The case study uses `gather` instead of `pivot_longer`; feel free to use `pivot_longer` in place.
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
The Case study was already a fair bit of work! Let’s do some simple review with
`separate` and `unite`.
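As a warm\-up, here is a minimal sketch on a made\-up tibble `df_dates`: `separate` splits one column into several, and `unite` reverses the operation.

```
## Minimal sketch on a hypothetical df_dates tibble
df_dates <- tibble(date = c("2020-01", "2020-02"))
df_dates %>%
  separate(col = date, into = c("year", "month"), sep = "-") %>%
  unite(col = "date", year, month, sep = "-")
```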
27\.1 Punnett Square
--------------------
Let’s make a [Punnett square](https://en.wikipedia.org/wiki/Punnett_square) with
`unite` and some pivoting. You don’t need to remember any biology for this
example: Your task is to take `genes` and turn the data into `punnett`.
```
punnett <-
tribble(
~parent1, ~a, ~A,
"a", "aa", "aA",
"A", "Aa", "AA"
)
punnett
```
```
## # A tibble: 2 × 3
## parent1 a A
## <chr> <chr> <chr>
## 1 a aa aA
## 2 A Aa AA
```
```
genes <-
expand_grid(
parent1 = c("a", "A"),
parent2 = c("a", "A")
)
genes
```
```
## # A tibble: 4 × 2
## parent1 parent2
## <chr> <chr>
## 1 a a
## 2 a A
## 3 A a
## 4 A A
```
### 27\.1\.1 **q1** Use a combination of `unite` and pivoting to turn `genes` into the same dataframe as `punnett`.
```
df_q1 <-
genes %>%
unite(col = "offspring", sep = "", remove = FALSE, parent1, parent2) %>%
pivot_wider(
names_from = parent2,
values_from = offspring
)
df_q1
```
```
## # A tibble: 2 × 3
## parent1 a A
## <chr> <chr> <chr>
## 1 a aa aA
## 2 A Aa AA
```
Use the following test to check your answer:
```
## NOTE: No need to change this
assertthat::assert_that(
all_equal(df_q1, punnett)
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
27\.2 Alloys, Revisited
-----------------------
In the previous data exercise, we studied an alloys dataset:
```
## NOTE: No need to edit; execute
alloys_mod <- tribble(
~thick, ~E00, ~mu00, ~E45, ~mu45, ~rep,
0.022, 10600, 0.321, 10700, 0.329, 1,
0.022, 10600, 0.323, 10500, 0.331, 2,
0.032, 10400, 0.329, 10400, 0.318, 1,
0.032, 10300, 0.319, 10500, 0.326, 2
)
alloys_mod
```
```
## # A tibble: 4 × 6
## thick E00 mu00 E45 mu45 rep
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.022 10600 0.321 10700 0.329 1
## 2 0.022 10600 0.323 10500 0.331 2
## 3 0.032 10400 0.329 10400 0.318 1
## 4 0.032 10300 0.319 10500 0.326 2
```
This *slightly modified* version of the data no longer has a convenient
separator to help with pivoting. We’ll use a combination of pivoting and
separate to tidy these data.
### 27\.2\.1 **q2** Use a combination of `separate` and pivoting to tidy `alloys_mod`.
```
df_q2 <-
alloys_mod %>%
pivot_longer(
names_to = "varang",
values_to = "value",
cols = c(-thick, -rep)
) %>%
separate(
col = varang,
into = c("var", "ang"),
sep = -2,
convert = TRUE
) %>%
pivot_wider(
names_from = var,
values_from = value
)
df_q2
```
```
## # A tibble: 8 × 5
## thick rep ang E mu
## <dbl> <dbl> <int> <dbl> <dbl>
## 1 0.022 1 0 10600 0.321
## 2 0.022 1 45 10700 0.329
## 3 0.022 2 0 10600 0.323
## 4 0.022 2 45 10500 0.331
## 5 0.032 1 0 10400 0.329
## 6 0.032 1 45 10400 0.318
## 7 0.032 2 0 10300 0.319
## 8 0.032 2 45 10500 0.326
```
Use the following tests to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(
(dim(df_q2)[1] == 8) & (dim(df_q2)[2] == 5)
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/vis-scatterplots-and-layers.html |
28 Vis: Scatterplots and Layers
===============================
*Purpose*: *Scatterplots* are a key tool for EDA. Scatterplots help us inspect the relationship between two variables. To enhance our scatterplots, we’ll learn how to use *layers* in ggplot to add multiple pieces of information to our plots.
*Reading*: [Scatterplots](https://rstudio.cloud/learn/primers/3.5)
*Topics*: (All topics)
*Reading Time*: \~40 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(ggrepel)
```
28\.1 A Note on Layers
----------------------
In the reading we learned about *layers* in ggplot. Formally, ggplot is a
“layered grammar of graphics”; each layer has the option to use built\-in or
inherited defaults, or override those defaults. There are two major settings we
might want to change: the source of `data` or the `mapping` which defines the
aesthetics. If we’re being verbose, we write a ggplot call like:
```
## NOTE: No need to modify! Just example code
ggplot(
data = mpg,
mapping = aes(x = displ, y = hwy)
) +
geom_point()
```
However, ggplot provides a number of sensible defaults to help save us typing.
Ggplot assumes an order for `data, mapping`, so we can drop the keywords:
```
## NOTE: No need to modify! Just example code
ggplot(
mpg,
aes(x = displ, y = hwy)
) +
geom_point()
```
Similarly, the aesthetic function `aes()` assumes the first two arguments will be
`x, y`, so we can drop those arguments as well.
```
## NOTE: No need to modify! Just example code
ggplot(
mpg,
aes(displ, hwy)
) +
geom_point()
```
Above `geom_point()` inherits the `mapping` from the base `ggplot` call;
however, we can override this. This can be helpful for a number of different
purposes:
```
## NOTE: No need to modify! Just example code
ggplot(mpg, aes(x = displ)) +
geom_point(aes(y = hwy, color = "hwy")) +
geom_point(aes(y = cty, color = "cty"))
```
Later, we’ll learn more concise ways to construct graphs like the one above. But
for now, we’ll practice using layers to add more information to scatterplots.
28\.2 Exercises
---------------
### 28\.2\.1 **q1** Add two `geom_smooth` trends to the following plot. Use “gam” for one
trend and “lm” for the other. Comment on how linear or nonlinear the “gam” trend
looks.
```
diamonds %>%
ggplot(aes(carat, price)) +
geom_point() +
geom_smooth(aes(color = "gam"), method = "gam") +
geom_smooth(aes(color = "lm"), method = "lm")
```
```
## `geom_smooth()` using formula = 'y ~ s(x, bs = "cs")'
## `geom_smooth()` using formula = 'y ~ x'
```
**Observations**:
\- The “gam” trend is not linear; it curves below, then above, the linear trend
### 28\.2\.2 **q2** Add non\-overlapping labels to the following scatterplot using the
provided `df_annotate`.
*Hint 1*: `geom_label_repel` comes from the `ggrepel` package. Make sure to load
it, and adhere to best\-practices!
*Hint 2*: You’ll have to use the `data` keyword to override the data layer!
```
## TODO: Use df_annotate below to add text labels to the scatterplot
df_annotate <-
mpg %>%
group_by(class) %>%
summarize(
displ = mean(displ),
hwy = mean(hwy)
)
mpg %>%
ggplot(aes(displ, hwy)) +
geom_point(aes(color = class)) +
geom_label_repel(
data = df_annotate,
aes(label = class, fill = class)
)
```
### 28\.2\.3 **q3** Study the following scatterplot: Note whether city (`cty`) or highway
(`hwy`) mileage tends to be greater. Describe the trend (visualized by
`geom_smooth`) in mileage with engine displacement (a measure of engine size).
*Note*: The grey region around the smooth trend is a *confidence bound*; we’ll
discuss these further as we get deeper into statistical literacy.
```
## NOTE: No need to modify! Just analyze the scatterplot
mpg %>%
pivot_longer(names_to = "source", values_to = "mpg", c(hwy, cty)) %>%
ggplot(aes(displ, mpg, color = source)) +
geom_point() +
geom_smooth() +
scale_color_discrete(name = "Mileage Type") +
labs(
x = "Engine displacement (liters)",
y = "Mileage (mpg)"
)
```
```
## `geom_smooth()` using method = 'loess' and formula = 'y ~ x'
```
**Observations**:
\- `hwy` mileage tends to be larger; driving on the highway is more efficient
\- Mileage tends to decrease with engine size; cars with larger engines tend to be less efficient
28\.3 Aside: Scatterplot vs bar chart
-------------------------------------
Why use a scatterplot vs a bar chart? A bar chart is useful for emphasizing some *threshold*. Let’s look at a few examples:
28\.4 Raw populations
---------------------
Two visuals of the same data:
```
economics %>%
filter(date > lubridate::ymd("2010-01-01")) %>%
ggplot(aes(date, pop)) +
geom_col()
```
Here we’re emphasizing zero, so we don’t see much of a change.
```
economics %>%
filter(date > lubridate::ymd("2010-01-01")) %>%
ggplot(aes(date, pop)) +
geom_point()
```
Here we’re not emphasizing zero; the scale is adjusted to emphasize the trend in the data.
28\.5 Population changes
------------------------
Two visuals of the same data:
```
economics %>%
mutate(pop_delta = pop - lag(pop)) %>%
filter(date > lubridate::ymd("2005-01-01")) %>%
ggplot(aes(date, pop_delta)) +
geom_col()
```
Here we’re emphasizing zero, so we can easily see the month of negative change.
```
economics %>%
mutate(pop_delta = pop - lag(pop)) %>%
filter(date > lubridate::ymd("2005-01-01")) %>%
ggplot(aes(date, pop_delta)) +
geom_point()
```
Here we’re not emphasizing zero; we can easily see the outlier month, but we have to read the axis to see that this is a case of negative growth.
For more, see [Bars vs Dots](https://dcl-data-vis.stanford.edu/discrete-continuous.html#bars-vs.-dots).
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/stats-descriptive-statistics.html |
29 Stats: Descriptive Statistics
================================
*Purpose*: We will use *descriptive statistics* to make quantitative summaries of a dataset. Descriptive statistics provide a much more compact description than a visualization, and are important when a data consumer wants “just one number”.
*Reading*: (None; this exercise *is* the reading.)
*Topics*: Mean, standard deviation, median, quantiles, dependence, correlation, robustness
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(nycflights13)
library(gapminder)
library(mvtnorm)
## NOTE: No need to change this!
vis_central <- function(df, var) {
df %>%
ggplot(aes({{var}})) +
geom_density() +
geom_vline(
data = . %>% summarize(mu = mean({{var}}, na.rm = TRUE)),
mapping = aes(xintercept = mu, color = "Mean")
) +
geom_vline(
data = . %>% summarize(mu = median({{var}}, na.rm = TRUE)),
mapping = aes(xintercept = mu, color = "Median")
) +
scale_color_discrete(name = "Location")
}
```
29\.1 Statistics
----------------
A *statistic* is a numerical summary of a sample. Statistics are useful because they provide a compact summary of our data. A histogram gives us a rich summary of a dataset: for example, the departure delay time in the NYC flight data.
```
## NOTE: No need to change this!
flights %>%
ggplot(aes(dep_delay)) +
geom_histogram(bins = 60) +
scale_x_log10()
```
```
## Warning in self$trans$transform(x): NaNs produced
```
```
## Warning: Transformation introduced infinite values in continuous x-axis
```
```
## Warning: Removed 208344 rows containing non-finite values (`stat_bin()`).
```
However, we might be interested in a few questions about these data:
* What is a *typical* value for the departure delay? (Location)
* How *variable* are departure delay times? (Spread)
* How much does departure delay *co\-vary* with distance? (Dependence)
We can give quantitative answers to all these questions using statistics!
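As a preview, here is a minimal added sketch of one statistic for each question above; the toy vectors are made up purely for illustration (we’ll work with the real flights data below):
```
## NOTE: Added sketch with made-up toy data
x <- c(2, 5, 1, 8, 4) # imagine these are delay times
y <- c(1, 3, 2, 6, 3) # imagine these are distances
mean(x)   # location: a typical value
sd(x)     # spread: how variable the values are
cor(x, y) # dependence: how much x and y co-vary
```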
29\.2 Handling NA’s
-------------------
Before we can start computing (descriptive) statistics, we need to learn how to deal with data issues. For instance, in the NYC flights data, we have a number of `NA`s.
```
## NOTE: No need to change this!
flights %>%
summarize(across(where(is.numeric), ~sum(is.na(.)))) %>%
glimpse
```
```
## Rows: 1
## Columns: 14
## $ year <int> 0
## $ month <int> 0
## $ day <int> 0
## $ dep_time <int> 8255
## $ sched_dep_time <int> 0
## $ dep_delay <int> 8255
## $ arr_time <int> 8713
## $ sched_arr_time <int> 0
## $ arr_delay <int> 9430
## $ flight <int> 0
## $ air_time <int> 9430
## $ distance <int> 0
## $ hour <int> 0
## $ minute <int> 0
```
These `NA`s will “infect” our computation, and lead to `NA` summaries.
```
## NOTE: No need to change this!
flights %>%
summarize(across(where(is.numeric), mean)) %>%
glimpse
```
```
## Rows: 1
## Columns: 14
## $ year <dbl> 2013
## $ month <dbl> 6.54851
## $ day <dbl> 15.71079
## $ dep_time <dbl> NA
## $ sched_dep_time <dbl> 1344.255
## $ dep_delay <dbl> NA
## $ arr_time <dbl> NA
## $ sched_arr_time <dbl> 1536.38
## $ arr_delay <dbl> NA
## $ flight <dbl> 1971.924
## $ air_time <dbl> NA
## $ distance <dbl> 1039.913
## $ hour <dbl> 13.18025
## $ minute <dbl> 26.2301
```
Let’s learn how to handle this:
### 29\.2\.1 **q1** The following code returns `NA`. Look up the documentation for `mean` and use an additional argument to strip the `NA` values in the dataset before computing the mean. Make this modification to the code below and report the mean departure delay time.
```
## TASK: Edit to drop all NAs before computing the mean
flights %>%
summarize(dep_delay = mean(dep_delay, na.rm = TRUE))
```
```
## # A tibble: 1 × 1
## dep_delay
## <dbl>
## 1 12.6
```
**Observations**:
* The mean departure delay is about `12.6` minutes
29\.3 Central Tendency
----------------------
*Central tendency* is the idea of where data tend to be “located”—this concept is also called *location*. It is best thought of as the “center” of the data. The following graph illustrates central tendency.
```
## NOTE: No need to change this!
set.seed(101)
tibble(z = rnorm(n = 1e3)) %>%
vis_central(z)
```
There are two primary measures of central tendency: the *mean* and *median*. The mean is the simple [arithmetic average](https://en.wikipedia.org/wiki/Arithmetic_mean): the sum of all values divided by the total number of values. The mean is denoted by \\(\\overline{X}\\) and defined by
\\\[\\overline{X} \= \\frac{1}{n} \\sum\_{i\=1}^n X\_i,\\]
where \\(n\\) is the number of data points, and the \\(X\_i\\) are the individual values.
The [median](https://en.wikipedia.org/wiki/Median) is the value that separates half the data above and below. Weirdly, there’s no standard symbol for the median, so we’ll just write \\(\\text{Median}\[D]\\) to denote the median of a set of data \\(D\\).
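To make these definitions concrete, here is a small added sketch (toy data, assumed for illustration) checking them against R’s built-in `mean()` and `median()`:
```
## NOTE: Added sketch with toy data
x <- c(3, 1, 4, 1, 5, 9, 2)
sum(x) / length(x)              # the mean, computed from the definition
mean(x)                         # the built-in function agrees
sort(x)[ceiling(length(x) / 2)] # middle value of an odd-length sample
median(x)                       # the built-in function agrees
```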
The median is a *robust* statistic, which is best illustrated by example. Consider the following two samples `v_base` and `v_outlier`. The sample `v_outlier` has an *outlier*, a value very different from the other values. Observe what value the mean and median take for these different samples.
```
## NOTE: No need to change this!
v_base <- c(1, 2, 3, 4, 5)
v_outlier <- c(v_base, 1e3)
tibble(
mean_base = mean(v_base),
median_base = median(v_base),
mean_outlier = mean(v_outlier),
median_outlier = median(v_outlier)
) %>% glimpse
```
```
## Rows: 1
## Columns: 4
## $ mean_base <dbl> 3
## $ median_base <dbl> 3
## $ mean_outlier <dbl> 169.1667
## $ median_outlier <dbl> 3.5
```
Note that for `v_outlier` the mean is greatly increased, but the median is only slightly changed. It is in this sense that the median is *robust*—it is robust to outliers. When one has a dataset with outliers, the median is usually a better measure of central tendency.\[1]
It can be useful to think about when the mean and median agree or disagree with each other. For instance, with the flights data:
```
## NOTE: No need to change this!
flights %>% vis_central(dep_delay)
```
```
## Warning: Removed 8255 rows containing non-finite values (`stat_density()`).
```
the mean and median `dep_delay` largely agree (relative to all the other data). But for the gapminder data:
```
## NOTE: No need to change this!
gapminder %>%
filter(year == max(year)) %>%
vis_central(gdpPercap)
```
the mean and median `gdpPercap` disagree.\[2]
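We can quantify that disagreement with a quick summary; this is a small added sketch, not part of the original exercise. The long right tail of `gdpPercap` pulls the mean well above the median.
```
## NOTE: Added sketch; quantify the disagreement visualized above
gapminder %>%
  filter(year == max(year)) %>%
  summarize(
    mean_gdp = mean(gdpPercap),
    median_gdp = median(gdpPercap)
  )
```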
### 29\.3\.1 **q2** The following code computes the mean and median `dep_delay` for each carrier, and sorts based on mean. Duplicate the code, and sort by median instead. Report your observations on which carriers are in both lists, and which are different. Also comment on what negative `dep_delay` values mean.
*Hint*: Remember you can check the documentation of a built\-in dataset with `?flights`!
```
## NOTE: No need to change this!
flights %>%
group_by(carrier) %>%
summarize(
mean = mean(dep_delay, na.rm = TRUE),
median = median(dep_delay, na.rm = TRUE)
) %>%
arrange(desc(mean)) %>%
head(5)
```
```
## # A tibble: 5 × 3
## carrier mean median
## <chr> <dbl> <dbl>
## 1 F9 20.2 0.5
## 2 EV 20.0 -1
## 3 YV 19.0 -2
## 4 FL 18.7 1
## 5 WN 17.7 1
```
```
## TASK: Duplicate the code above, but sort by `median` instead
flights %>%
group_by(carrier) %>%
summarize(
mean = mean(dep_delay, na.rm = TRUE),
median = median(dep_delay, na.rm = TRUE)
) %>%
arrange(desc(median)) %>%
head(5)
```
```
## # A tibble: 5 × 3
## carrier mean median
## <chr> <dbl> <dbl>
## 1 FL 18.7 1
## 2 WN 17.7 1
## 3 F9 20.2 0.5
## 4 UA 12.1 0
## 5 VX 12.9 0
```
**Observations**:
* The carriers `F9, FL, WN` are in both lists
* The carriers `EV, YV` are top in mean, while `UA, VX` are top in median
* Negative values of `dep_delay` signal early departures
29\.4 Multi\-modality
---------------------
It may not seem like it, but we’re actually *making an assumption* when we use the mean (or median) as a typical value. Imagine we had the following data:
```
bind_rows(
tibble(X = rnorm(300, mean = -2)),
tibble(X = rnorm(300, mean = +2))
) %>%
ggplot(aes(X)) +
geom_histogram(bins = 60) +
geom_vline(aes(xintercept = mean(X), color = "Mean")) +
geom_vline(aes(xintercept = median(X), color = "Median")) +
scale_color_discrete(name = "Statistic")
```
Here the mean and median are both close to zero, but *zero is an atypical number*! This is partly why we don’t *only* compute descriptive statistics, but also do a deeper dive into our data. Here, we should probably refuse to give a single typical value; instead, it seems there might really be two populations showing up in the same dataset, so we can give two typical numbers, say `-2, +2`.
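One (hedged) way to report those two typical values is to locate the peaks of a kernel density estimate. The sketch below is an added illustration; splitting at zero is an assumption specific to this simulated example.
```
## NOTE: Added sketch; the split at zero is an assumption for this example
set.seed(101)
x <- c(rnorm(300, mean = -2), rnorm(300, mean = +2))
d <- density(x)
left <- d$x < 0
c(
  peak_lo = d$x[left][which.max(d$y[left])],   # should land near -2
  peak_hi = d$x[!left][which.max(d$y[!left])]  # should land near +2
)
```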
29\.5 Quantiles
---------------
Before we can talk about spread, we need to talk about *quantiles*. A [quantile](https://en.wikipedia.org/wiki/Quantile) is a value that separates a user\-specified fraction of data (or a distribution). For instance, the median is the \\(50%\\) quantile; thus \\(\\text{Median}\[D] \= Q\_{0\.5}\[D]\\). We can generalize this idea to talk about any quantile between \\(0%\\) and \\(100%\\).
The following graph visualizes the \\(25%, 50%, 75%\\) quantiles of a standard normal. Since these are the quarter\-quantiles (\\(1/4, 2/4, 3/4\\)), these are often called the *quartiles*.
```
## NOTE: No need to change this!
tibble(z = seq(-3, +3, length.out = 500)) %>%
mutate(d = dnorm(z)) %>%
ggplot(aes(z, d)) +
geom_line() +
geom_segment(
data = tibble(p = c(0.25, 0.50, 0.75)) %>%
mutate(
z = qnorm(p),
d = dnorm(z)
),
mapping = aes(xend = z, yend = 0, color = as_factor(p))
) +
scale_color_discrete(name = "Quantile")
```
*Note*: The function `qnorm` returns the quantiles of a normal distribution. We’ll focus on quantiles of *samples* in this exercise.
We’ll use the quartiles to define the *interquartile range*. First, the `quantile()` function computes quantiles of a sample. For example:
```
## NOTE: No need to change this! Run for an example
flights %>%
pull(dep_delay) %>%
quantile(., probs = c(0, 0.25, 0.50, 0.75, 1.00), na.rm = TRUE)
```
```
## 0% 25% 50% 75% 100%
## -43 -5 -2 11 1301
```
Like with `mean` and `median`, we need to specify if we want to remove `NA`s. We can provide a vector of `probs` to specify the probabilities of the quantiles. Remember: a *probability* is a value in \\(\[0, 1]\\), while a *quantile* is a value in the units of the data, like minutes in the case of `dep_delay`.
Now we can define the interquartile range:
\\\[IQR\[D] \= Q\_{0\.75}\[D] \- Q\_{0\.25}\[D],\\]
where \\(Q\_{p}\[D]\\) is the \\(p\\)\-th quantile of a sample \\(D\\).
### 29\.5\.1 **q3** Using the function `quantile`, compute the *interquartile range*; this is the difference between the \\(75%\\) and \\(25%\\) quantiles.
```
## NOTE: No need to change this!
set.seed(101)
v_test_iqr <- rnorm(n = 10)
test_iqr <- quantile(v_test_iqr, probs = 0.75) - quantile(v_test_iqr, probs = 0.25)
```
Use the following test to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(test_iqr == IQR(v_test_iqr))
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
29\.6 Spread
------------
*Spread* is the concept of how tightly or widely data are *spread out*. There are two primary measures of spread: the *standard deviation*, and the *interquartile range*.
The [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) (SD) is denoted by \\(s\\) and defined by
\\\[s \= \\sqrt{ \\frac{1}{n\-1} \\sum\_{i\=1}^n (X\_i \- \\overline{X})^2 },\\]
where \\(\\overline{X}\\) is the mean of the data. Note the factor of \\(n\-1\\) rather than \\(n\\): this slippery idea is called [Bessel’s correction](https://en.wikipedia.org/wiki/Bessel%27s_correction). Note that \\(s^2\\) is called the *variance*.
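Here is a small added sketch (toy data) checking the definition against R’s `sd()` and `var()`:
```
## NOTE: Added sketch with toy data
x <- c(1, 2, 3, 4, 5)
sqrt(sum((x - mean(x))^2) / (length(x) - 1)) # the definition, with Bessel's n - 1
sd(x)                                        # the built-in function agrees
all.equal(var(x), sd(x)^2)                   # the variance is the squared SD
```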
By way of analogy, mean is to standard deviation as median is to IQR: The IQR is a robust measure of spread. Returning to our outlier example:
```
## NOTE: No need to change this!
v_base <- c(1, 2, 3, 4, 5)
v_outlier <- c(v_base, 1e3)
tibble(
sd_base = sd(v_base),
IQR_base = IQR(v_base),
sd_outlier = sd(v_outlier),
IQR_outlier = IQR(v_outlier)
) %>% glimpse
```
```
## Rows: 1
## Columns: 4
## $ sd_base <dbl> 1.581139
## $ IQR_base <dbl> 2
## $ sd_outlier <dbl> 407.026
## $ IQR_outlier <dbl> 2.5
```
### 29\.6\.1 **q4** Using the code from q2 as a starting point, compute the standard deviation (`sd()`) and interquartile range (`IQR()`), and rank the top five carriers, this time by sd and IQR. Report your observations on which carriers are in both lists, and which are different. Also note and comment on which carrier (among your top\-ranked) has the largest difference between `sd` and `IQR`.
```
## TODO: Use code from q2 to compute the sd and IQR, rank as before
flights %>%
group_by(carrier) %>%
summarize(
sd = sd(dep_delay, na.rm = TRUE),
IQR = IQR(dep_delay, na.rm = TRUE)
) %>%
arrange(desc(sd)) %>%
head(5)
```
```
## # A tibble: 5 × 3
## carrier sd IQR
## <chr> <dbl> <dbl>
## 1 HA 74.1 6
## 2 F9 58.4 22
## 3 FL 52.7 21
## 4 YV 49.2 30
## 5 EV 46.6 30
```
```
flights %>%
group_by(carrier) %>%
summarize(
sd = sd(dep_delay, na.rm = TRUE),
IQR = IQR(dep_delay, na.rm = TRUE)
) %>%
arrange(desc(IQR)) %>%
head(5)
```
```
## # A tibble: 5 × 3
## carrier sd IQR
## <chr> <dbl> <dbl>
## 1 EV 46.6 30
## 2 YV 49.2 30
## 3 9E 45.9 23
## 4 F9 58.4 22
## 5 FL 52.7 21
```
**Observations**:
* The carriers `F9, FL, YV, EV` are in both lists
* The carrier `HA` appears only in the `sd` list, while `9E` appears only in the IQR list
* `HA` has a large difference between `sd` and `IQR`; based on the following vis, it appears that `HA` has a lot more outliers than other carriers, which bumps up its `sd`
```
flights %>%
filter(carrier %in% c("HA", "F9", "FL", "YV", "EV")) %>%
ggplot(aes(carrier, dep_delay)) +
geom_boxplot()
```
```
## Warning: Removed 2949 rows containing non-finite values (`stat_boxplot()`).
```
29\.7 Dependence
----------------
So far, we’ve talked about descriptive statistics to consider one variable at a time. To conclude, we’ll talk about statistics to consider *dependence* between two variables in a dataset.
[Dependence](https://en.wikipedia.org/wiki/Correlation_and_dependence)—like location or spread—is a general idea of relation between two variables. For instance, when it comes to flights we’d expect trips between more distant airports to take longer. If we plot `distance` vs `air_time` in a scatterplot, we indeed see this dependence.
```
flights %>%
ggplot(aes(air_time, distance)) +
geom_point()
```
```
## Warning: Removed 9430 rows containing missing values (`geom_point()`).
```
Two flavors of correlation help us make this idea quantitative: the *Pearson correlation* and *Spearman correlation*. Unlike our previous quantities for location and spread, these correlations are *dimensionless* (they have no units), and they are bounded between \\(\[\-1, \+1]\\).
The [Pearson correlation](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is often denoted by \\(r\_{XY}\\), where the subscript specifies the variables being considered, \\(X, Y\\). It is defined by
\\\[r\_{XY} \= \\frac{1}{n\-1} \\frac{\\sum\_{i\=1}^n (X\_i \- \\overline{X}) (Y\_i \- \\overline{Y})}{s\_X s\_Y}.\\]
The [Spearman correlation](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient) is often denoted by \\(\\rho\_{XY}\\), and is actually defined in terms of the Pearson correlation \\(r\_{XY}\\), but with the ranks (\\(1\\) to \\(n\\)) rather than the values \\(X\_i, Y\_i\\).
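We can check this definition directly; the following is a small added sketch on toy data, not part of the original exercise.
```
## NOTE: Added sketch with toy data
set.seed(101)
x <- rnorm(20)
y <- x^3 + rnorm(20, sd = 0.1)
cor(x, y, method = "spearman") # Spearman on the raw values
cor(rank(x), rank(y))          # Pearson on the ranks gives the same value
```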
For example, we might expect a strong correlation between the `air_time` and the `distance` between airports. The function `cor` computes the Pearson correlation.
```
## NOTE: No need to change this!
flights %>%
summarize(rho = cor(air_time, distance, use = "na.or.complete"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 0.991
```
*Note*: Unfortunately, the function `cor` doesn’t follow the same `na.rm` pattern as `mean` or `sd`. We have to use its `use` argument to handle `NA`s.
However, we wouldn’t expect any relation between `air_time` and `month`.
```
## NOTE: No need to change this!
flights %>%
summarize(rho = cor(air_time, month, use = "na.or.complete"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 0.0109
```
In the case of a *perfect linear relationship*, the Pearson correlation takes the value \\(\+1\\) (for a positive slope) or \\(\-1\\) (for a negative slope).
### 29\.7\.1 **q5** Compute the Pearson correlation between `x, y` below. Play with the `slope` and observe the change in the correlation.
```
slope <- 0.5 # Play with this value; observe the correlation
df_line <-
tibble(x = seq(-1, +1, length.out = 50)) %>%
mutate(y = slope * x)
df_line %>%
ggplot(aes(x, y)) +
geom_point()
```
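The chunk above only draws the scatterplot; a minimal added sketch to compute the requested correlation follows. Any nonzero `slope` gives a correlation of exactly \\(\+1\\) or \\(\-1\\) here, because the relationship is exactly linear; only the sign changes.
```
## NOTE: Added sketch; compute the correlation for the current slope
df_line %>%
  summarize(rho = cor(x, y, method = "pearson"))
```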
**Observations**:
* `slope` values greater than 0 have a positive correlation
* `slope` values less than 0 have a negative correlation
Note that this means *correlation is a measure of dependence*; it is **not** a measure of slope! It is better thought of as how *strong* the relationship between two variables is. A closer\-to\-zero correlation indicates a noisy relationship between variables, while a closer\-to\-one (in absolute value) indicates a more perfect, predictable relationship between the variables. For instance, the following code simulates data with different correlations, and facets the data based on the underlying correlation.
```
## NOTE: No need to change this!
map_dfr(
c(-1.0, -0.5, +0.0, +0.5, +1.0), # Chosen correlations
function(r) {
# Simulate a multivariate gaussian
X <- rmvnorm(
n = 100,
sigma = matrix(c(1, r, r, 1), nrow = 2)
)
# Package and return the data
tibble(
x = X[, 1],
y = X[, 2],
r = r
)
}
) %>%
# Plot the data
ggplot(aes(x, y)) +
geom_point() +
facet_wrap(~r)
```
One of the primary differences between Pearson and Spearman is that Pearson measures *linear* association, while Spearman is a *rank* correlation that detects any *monotone* (not necessarily linear) relationship. For instance, the following data
```
## NOTE: No need to change this!
# Positive slope
df_monotone <-
tibble(x = seq(-pi/2 + 0.1, +pi/2 - 0.1, length.out = 50)) %>%
mutate(y = tan(x))
df_monotone %>%
ggplot(aes(x, y)) +
geom_point()
```
have a perfect relationship between them. The Pearson correlation does not pick up on this fact, while the Spearman correlation indicates a perfect relation.
```
# Positive slope
df_monotone %>%
summarize(rho = cor(x, y, method = "pearson"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 0.846
```
```
df_monotone %>%
summarize(rho = cor(x, y, method = "spearman"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 1
```
One more note about functional relationships: Neither Pearson nor Spearman can pick up on arbitrary dependencies.
### 29\.7\.2 **q6** Run the code chunk below and look at the visualization: Make a prediction about what you think the correlation will be. Then compute the Pearson correlation between `x, y` below.
```
## NOTE: No need to change this!
df_quad <-
tibble(x = seq(-1, +1, length.out = 50)) %>%
mutate(y = x^2 - 0.5)
## TASK: Compute the Pearson and Spearman correlations on `df_quad`
df_quad %>%
summarize(rho = cor(x, y, method = "pearson"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 -2.02e-16
```
```
df_quad %>%
summarize(rho = cor(x, y, method = "spearman"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 -0.0236
```
```
df_quad %>%
ggplot(aes(x, y)) +
geom_point()
```
**Observations**:
* Both correlations are near\-zero
One last point about correlation: The mean is to Pearson correlation as the median is to Spearman correlation. The median and Spearman’s rho are robust to outliers.
```
## NOTE: No need to change this!
set.seed(101)
X <- rmvnorm(
n = 25,
sigma = matrix(c(1, 0.9, 0.9, 1), nrow = 2)
)
df_cor_outliers <-
tibble(
x = X[, 1],
y = X[, 2]
) %>%
bind_rows(tibble(x = c(-10.1, -10, 10, 10.1), y = c(-1.2, -1.1, 1.1, 1.2)))
df_cor_outliers %>%
ggplot(aes(x, y)) +
geom_point()
```
```
df_cor_outliers %>%
summarize(rho = cor(x, y, method = "pearson"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 0.621
```
```
df_cor_outliers %>%
summarize(rho = cor(x, y, method = "spearman"))
```
```
## # A tibble: 1 × 1
## rho
## <dbl>
## 1 0.884
```
29\.8 Notes
-----------
\[1] So then why bother with the mean? It turns out the mean is a fundamental idea in statistics, as it’s a key component of *a lot* of other statistical procedures. You’ll end up using the mean in countless different ways, so it’s worth recognizing its weakness to outliers.
\[2] As a side note, since dollars are pretty\-well divorced from reality (there’s no physical upper bound on perceived value), distributions of dollars can have very large outliers. That’s why you often see *median* income reported, rather than *mean* income.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-joining-datasets.html |
30 Data: Joining Datasets
=========================
*Purpose*: Often our data are scattered across multiple sets. In this case, we
need to be able to *join* data.
*Reading*: [Join Data Sets](https://rstudio.cloud/learn/primers/4.3)
*Topics*: Welcome, mutating joins, filtering joins, Binds and set operations
*Reading Time*: \~30 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(nycflights13)
```
30\.1 Dangers of Binding!
-------------------------
In the reading we learned about `bind_cols` and `bind_rows`.
```
## NOTE: No need to change this; setup
beatles1 <-
tribble(
~band, ~name,
"Beatles", "John",
"Beatles", "Paul",
"Beatles", "George",
"Beatles", "Ringo"
)
beatles2 <-
tribble(
~surname, ~instrument,
"McCartney", "bass",
"Harrison", "guitar",
"Starr", "drums",
"Lennon", "guitar"
)
bind_cols(beatles1, beatles2)
```
```
## # A tibble: 4 × 4
## band name surname instrument
## <chr> <chr> <chr> <chr>
## 1 Beatles John McCartney bass
## 2 Beatles Paul Harrison guitar
## 3 Beatles George Starr drums
## 4 Beatles Ringo Lennon guitar
```
### 30\.1\.1 **q1** Describe what is wrong with the result of `bind_cols` above and how it happened.
* The rows of `beatles1` and `beatles2` were not ordered identically; therefore the wrong names and surnames were combined
We’ll use the following `beatles3` to *correctly* join the data.
```
## NOTE: No need to change this; setup
beatles3 <-
tribble(
~name, ~surname,
"John", "Lennon",
"Paul", "McCartney",
"George", "Harrison",
"Ringo", "Starr"
)
beatles_joined <-
tribble(
~band, ~name, ~surname, ~instrument,
"Beatles", "John", "Lennon", "guitar",
"Beatles", "Paul", "McCartney", "bass",
"Beatles", "George", "Harrison", "guitar",
"Beatles", "Ringo", "Starr", "drums"
)
```
### 30\.1\.2 **q2** Use the `beatles3` dataframe above to *correctly* join `beatles1` with `beatles2`.
```
df_q2 <-
beatles1 %>%
left_join(
beatles3,
by = "name"
) %>%
left_join(
beatles2,
by = "surname"
)
df_q2
```
```
## # A tibble: 4 × 4
## band name surname instrument
## <chr> <chr> <chr> <chr>
## 1 Beatles John Lennon guitar
## 2 Beatles Paul McCartney bass
## 3 Beatles George Harrison guitar
## 4 Beatles Ringo Starr drums
```
Use the following test to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(all_equal(df_q2, beatles_joined))
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
There’s a **very important lesson** here: In general, don’t trust `bind_cols`.
It’s easy in the example above to tell there’s a problem because the data are
*small*; when working with larger datasets, R will happily give you the wrong
answer if you give it the wrong instructions. Whenever possible, use some form
of join to combine datasets.
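A related safeguard (a small added sketch, not part of the original exercise): before relying on a join, `anti_join()` reports the rows whose keys would fail to match.
```
## NOTE: Added sketch; check for unmatched keys before trusting a join
beatles1 %>%
  anti_join(beatles3, by = "name")
## A result with zero rows means every `name` in beatles1 has a match in beatles3
```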
30\.2 Utility of Filtering Joins
--------------------------------
Filtering joins are an elegant way to produce complicated filters. They are
especially helpful because you can first inspect what *criteria* you’ll filter
on, then perform the filter. We’ll use the tidyr tool `expand_grid` to make such
a criteria dataframe, then apply it to filter the `flights` data.
### 30\.2\.1 **q3** Create a “grid” of values
Use `expand_grid` to create a `criteria` dataframe with the `month` equal to `8, 9` and the airport identifiers in `dest` for the San Francisco, San Jose, and
Oakland airports.
*Hint 1*: To find the airport identifiers, you can either use `str_detect` to
filter the `airports` dataset, or use Google!
*Hint 2*: Remember to look up the documentation for a function you don’t yet know!
```
criteria <-
expand_grid(
month = c(8, 9),
dest = c("SJC", "SFO", "OAK")
)
criteria
```
```
## # A tibble: 6 × 2
## month dest
## <dbl> <chr>
## 1 8 SJC
## 2 8 SFO
## 3 8 OAK
## 4 9 SJC
## 5 9 SFO
## 6 9 OAK
```
Use the following test to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(
all_equal(
criteria,
criteria %>%
semi_join(
airports %>%
filter(
str_detect(name, "San Jose") |
str_detect(name, "San Francisco") |
str_detect(name, "Metropolitan Oakland")
),
by = c("dest" = "faa")
)
)
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
all_equal(
criteria,
criteria %>% filter(month %in% c(8, 9))
)
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
### 30\.2\.2 **q4** Use the `criteria` dataframe you produced above to filter `flights` on `dest` and `month`.
*Hint*: Remember to use a *filtering join* to take advantage of the `criteria`
dataset we built above!
```
df_q4 <-
flights %>%
semi_join(
criteria,
by = c("dest", "month")
)
df_q4
```
```
## # A tibble: 2,584 × 19
## year month day dep_time sched_de…¹ dep_d…² arr_t…³ sched…⁴ arr_d…⁵ carrier
## <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr>
## 1 2013 8 1 554 559 -5 909 902 7 UA
## 2 2013 8 1 601 601 0 916 915 1 UA
## 3 2013 8 1 657 700 -3 1016 1016 0 DL
## 4 2013 8 1 723 730 -7 1040 1045 -5 VX
## 5 2013 8 1 738 740 -2 1111 1055 16 VX
## 6 2013 8 1 745 743 2 1117 1103 14 UA
## 7 2013 8 1 810 755 15 1120 1115 5 AA
## 8 2013 8 1 825 829 -4 1156 1143 13 UA
## 9 2013 8 1 838 840 -2 1230 1143 47 UA
## 10 2013 8 1 851 853 -2 1227 1212 15 B6
## # … with 2,574 more rows, 9 more variables: flight <int>, tailnum <chr>,
## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
## # minute <dbl>, time_hour <dttm>, and abbreviated variable names
## # ¹sched_dep_time, ²dep_delay, ³arr_time, ⁴sched_arr_time, ⁵arr_delay
```
Use the following test to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(
all_equal(
df_q4,
df_q4 %>%
filter(
month %in% c(8, 9),
dest %in% c("SJC", "SFO", "OAK")
)
)
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
30\.1 Dangers of Binding!
-------------------------
In the reading we learned about `bind_cols` and `bind_rows`.
```
## NOTE: No need to change this; setup
beatles1 <-
tribble(
~band, ~name,
"Beatles", "John",
"Beatles", "Paul",
"Beatles", "George",
"Beatles", "Ringo"
)
beatles2 <-
tribble(
~surname, ~instrument,
"McCartney", "bass",
"Harrison", "guitar",
"Starr", "drums",
"Lennon", "guitar"
)
bind_cols(beatles1, beatles2)
```
```
## # A tibble: 4 × 4
## band name surname instrument
## <chr> <chr> <chr> <chr>
## 1 Beatles John McCartney bass
## 2 Beatles Paul Harrison guitar
## 3 Beatles George Starr drums
## 4 Beatles Ringo Lennon guitar
```
### 30\.1\.1 **q1** Describe what is wrong with the result of `bind_cols` above and how it happened.
* The rows of `beatles1` and `beatles2` were not ordered identically; therefore the wrong names and surnames were combined
We’ll use the following `beatles3` to *correctly* join the data.
```
## NOTE: No need to change this; setup
beatles3 <-
tribble(
~name, ~surname,
"John", "Lennon",
"Paul", "McCartney",
"George", "Harrison",
"Ringo", "Starr"
)
beatles_joined <-
tribble(
~band, ~name, ~surname, ~instrument,
"Beatles", "John", "Lennon", "guitar",
"Beatles", "Paul", "McCartney", "bass",
"Beatles", "George", "Harrison", "guitar",
"Beatles", "Ringo", "Starr", "drums"
)
```
### 30\.1\.2 **q2** Use the following `beatles3` to *correctly* join `beatles1`
```
df_q2 <-
beatles1 %>%
left_join(
beatles3,
by = "name"
) %>%
left_join(
beatles2,
by = "surname"
)
df_q2
```
```
## # A tibble: 4 × 4
## band name surname instrument
## <chr> <chr> <chr> <chr>
## 1 Beatles John Lennon guitar
## 2 Beatles Paul McCartney bass
## 3 Beatles George Harrison guitar
## 4 Beatles Ringo Starr drums
```
Use the following test to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(all_equal(df_q2, beatles_joined))
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
There’s a **very important lesson** here: In general, don’t trust `bind_cols`.
It’s easy in the example above to tell there’s a problem because the data are
*small*; when working with larger datasets, R will happily give you the wrong
answer if you give it the wrong instructions. Whenever possible, use some form
of join to combine datasets.
30\.2 Utility of Filtering Joins
--------------------------------
Filtering joins are an elegant way to produce complicated filters. They are
especially helpful because you can first inspect what *criteria* you’ll filter
on, then perform the filter. We’ll use the tidyr tool `expand_grid` to make such
a criteria dataframe, then apply it to filter the `flights` data.
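As a minimal sketch of the pattern with toy data (the names below are made up for illustration): build a small dataframe of criteria, inspect it, then use `semi_join()` to keep only the rows of the data that match some row of the criteria.

```
## Sketch of the filtering-join pattern on toy data.
toy_data <- tribble(
  ~city,     ~month,
  "Boston",       1,
  "Boston",       8,
  "Oakland",      8,
  "Oakland",     12
)
toy_criteria <- expand_grid(city = "Oakland", month = c(8, 9))
toy_criteria # inspect the criteria first

toy_data %>%
  semi_join(toy_criteria, by = c("city", "month"))
## keeps only the Oakland / month 8 row
```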
### 30\.2\.1 **q3** Create a “grid” of values
Use `expand_grid` to create a `criteria` dataframe with `month` equal to `8` and `9`, and `dest`
equal to the airport identifiers for the San Francisco, San Jose, and Oakland airports.
*Hint 1*: To find the airport identifiers, you can either use `str_detect` to
filter the `airports` dataset, or use Google!
*Hint 2*: Remember to look up the documentation for a function you don’t yet know!
```
criteria <-
expand_grid(
month = c(8, 9),
dest = c("SJC", "SFO", "OAK")
)
criteria
```
```
## # A tibble: 6 × 2
## month dest
## <dbl> <chr>
## 1 8 SJC
## 2 8 SFO
## 3 8 OAK
## 4 9 SJC
## 5 9 SFO
## 6 9 OAK
```
Use the following test to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(
all_equal(
criteria,
criteria %>%
semi_join(
airports %>%
filter(
str_detect(name, "San Jose") |
str_detect(name, "San Francisco") |
str_detect(name, "Metropolitan Oakland")
),
by = c("dest" = "faa")
)
)
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
all_equal(
criteria,
criteria %>% filter(month %in% c(8, 9))
)
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
### 30\.2\.2 **q4** Use the `criteria` dataframe you produced above to filter `flights` on `dest` and `month`.
*Hint*: Remember to use a *filtering join* to take advantage of the `criteria`
dataset we built above!
```
df_q4 <-
flights %>%
semi_join(
criteria,
by = c("dest", "month")
)
df_q4
```
```
## # A tibble: 2,584 × 19
## year month day dep_time sched_de…¹ dep_d…² arr_t…³ sched…⁴ arr_d…⁵ carrier
## <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr>
## 1 2013 8 1 554 559 -5 909 902 7 UA
## 2 2013 8 1 601 601 0 916 915 1 UA
## 3 2013 8 1 657 700 -3 1016 1016 0 DL
## 4 2013 8 1 723 730 -7 1040 1045 -5 VX
## 5 2013 8 1 738 740 -2 1111 1055 16 VX
## 6 2013 8 1 745 743 2 1117 1103 14 UA
## 7 2013 8 1 810 755 15 1120 1115 5 AA
## 8 2013 8 1 825 829 -4 1156 1143 13 UA
## 9 2013 8 1 838 840 -2 1230 1143 47 UA
## 10 2013 8 1 851 853 -2 1227 1212 15 B6
## # … with 2,574 more rows, 9 more variables: flight <int>, tailnum <chr>,
## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
## # minute <dbl>, time_hour <dttm>, and abbreviated variable names
## # ¹sched_dep_time, ²dep_delay, ³arr_time, ⁴sched_arr_time, ⁵arr_delay
```
Use the following test to check your work:
```
## NOTE: No need to change this
assertthat::assert_that(
all_equal(
df_q4,
df_q4 %>%
filter(
month %in% c(8, 9),
dest %in% c("SJC", "SFO", "OAK")
)
)
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
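A closing note on why the join-based filter is more general than `filter()` with `%in%` (a sketch with hypothetical criteria, not part of the exercise): `%in%` conditions apply to each column independently, while a filtering join can target specific *combinations* of values.

```
## Sketch: filtering on specific (month, dest) combinations.
## Suppose we only care about SFO in August and OAK in September.
criteria_pairs <- tribble(
  ~month, ~dest,
  8,      "SFO",
  9,      "OAK"
)

## Keeps only the two pairs listed above:
flights %>% semi_join(criteria_pairs, by = c("month", "dest"))

## By contrast, this keeps all four combinations (8/SFO, 8/OAK, 9/SFO, 9/OAK):
flights %>% filter(month %in% c(8, 9), dest %in% c("SFO", "OAK"))
```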
31 Vis: Lines
=============
*Purpose*: *Line plots* are a key tool for EDA. In contrast with a scatterplot,
a line plot assumes the data have a *function* relation. This can create an
issue if we try to plot data that do not satisfy our assumptions. In this
exercise, we’ll practice some best\-practices for constructing line plots.
*Reading*: [Line plots](https://rstudio.cloud/learn/primers/3.6)
*Topics*: Welcome, Line graphs, Similar geoms (skip Maps)
*Reading Time*: \~30 minutes
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(gapminder)
```
### 31\.0\.1 **q1** The following graph doesn’t work as its author intended. Based on what we learned in the reading, fix the following code.
```
gapminder %>%
filter(continent == "Asia") %>%
ggplot(aes(year, lifeExp, color = country)) +
geom_line()
```
### 31\.0\.2 **q2** A line plot makes *a certain assumption* about the underlying data. What assumption is this? How does that assumption relate to the following graph? Put differently, why is the use of `geom_line` a bad idea for the following dataset?
```
## TODO: No need to edit; just answer the questions
mpg %>%
ggplot(aes(displ, hwy)) +
geom_line()
```
**Observations**:
\- A line plot assumes the underlying data have a *function* relationship; that is, that there is one y value for every x value
\- The `mpg` dataset does not have a function relation between `displ` and `hwy`; there are cars with identical values of `displ` but different values of `hwy` (see the sketch below for alternatives)
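A brief sketch of two alternatives when the data are not functional (illustrative code, not part of the exercise): plot the raw points, or summarise to a single y value per x before drawing a line.

```
## Option 1: show the raw relation with points instead of a line.
mpg %>%
  ggplot(aes(displ, hwy)) +
  geom_point()

## Option 2: reduce to one value per x (here the mean hwy per displ),
## which restores a function relation before drawing the line.
mpg %>%
  group_by(displ) %>%
  summarize(hwy_mean = mean(hwy)) %>%
  ggplot(aes(displ, hwy_mean)) +
  geom_line()
```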
### 31\.0\.3 **q3** The following graph shows both the raw data and a smoothed version. Describe the trends that you can see in the different curves.
```
## TODO: No need to edit; just interpret the graph
economics %>%
ggplot(aes(date, unemploy)) +
geom_line(aes(color = "Raw")) +
geom_smooth(aes(color = "Smoothed"), se = FALSE) +
scale_color_discrete(name = "Source")
```
```
## `geom_smooth()` using method = 'loess' and formula = 'y ~ x'
```
**Observations**:
\- The `Raw` data indicate short\-term cyclical patterns that occur over a few years
\- The `Smoothed` data indicate a longer\-term trend occurring over decades (see the smoothing sketch below)
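A sketch of how the degree of smoothing can be tuned (the `span` values here are illustrative, not taken from the original exercise): with `method = "loess"`, a larger `span` averages over more of the data and emphasises the longer-term trend.

```
## Sketch: comparing two loess smoothers with different spans.
economics %>%
  ggplot(aes(date, unemploy)) +
  geom_line(color = "grey70") +
  geom_smooth(aes(color = "span = 0.2"), method = "loess", span = 0.2, se = FALSE) +
  geom_smooth(aes(color = "span = 0.8"), method = "loess", span = 0.8, se = FALSE) +
  scale_color_discrete(name = "Smoother")
```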
32 Communication: Narrative Basics
==================================
*Purpose*: The point of data science is not to gather a random pile of facts;
it’s to *do something* with those facts. One of our key goals in data science is
to help drive decisions and action—we accomplish this by communicating our
*data story*. Since story is fundamentally an exercise in narrative, we’ll
start our data story training with some narrative basics.
*Reading*: [Randy Olson’s TED Talk](https://www.youtube.com/watch?v=ERB7ITvabA4), introducing the *And, But, Therefore* (ABT) framework.
*Reading Time*: \~ 10 minutes
*Lesson Plan*: Ensure all students submit their narrative spectrum
classifications and justification through the Google Form. Set aside a
full\-class meeting to discuss the student results. Visualize the results,
and see what (if any) consensus the class came to. Ask a few students who
classified the graphs differently to share their perspective. After gathering
student input, offer your own perspective on the graph. Repeat for
each example.
Make sure to emphasize that different people will read graphs differently;
we don’t have *complete control* over the narrative when presenting a graph.
But the simpler and more focused our graph, the better control we have over
the narrative experienced. (TODO: Future exercises/activities should
emphasize this!)
32\.1 Exposition
----------------
Scientist\-turned\-storyteller Randy Olson\[1] introduced the *narrative spectrum*.
Three points along the spectrum are listed below:
| TLA | Framework | Narrative Spectrum |
| --- | --- | --- |
| AAA | And, And, And | Non\-narrative |
| ABT | And, But, Therefore | Just right! |
| DHY | Despite, However, Yet | Overly\-narrative |
The narrative spectrum runs from non\-narrative (introducing no conflict or
tension) to overly\-narrative (introducing too many conflicting ideas). Olson
observed that the middle of the spectrum has just the right amount of
narrative content, while the extreme points tend to be boring:
* A child may tell a story like “We went to the store, AND the man had a hat,
AND I lost a shoe, AND we went home.” This story lacks any narrative content:
no part of the story relates to any other part, so no conflict or drama
can arise. (AAA)
* An extremely learned professor may tell a story like “Kolmogorov proposed a
5/3 power law. HOWEVER Smith found 3/8 power law behavior. YET Chandrasekhar
discovered a 2/3 power law….” This story swings in the opposite direction;
there is *too much conflict*, and most listeners will be totally lost. This is
the proverbial *random pile of facts* we need to avoid when communicating.
(DHY)
Using the ABT framework can help us get started with framing a story. For
example:
“Data science is the use of computation and statistics to learn from data AND
we want to use data science to help people make decisions BUT a random pile of
facts will lose our audience THEREFORE we will study narrative to help tell
our data story.”
The **AND** part of the framework is our *exposition*; every story needs some
setup. The **BUT** part introduces some conflict: in a Hollywood story this
could be a murder, while in science it could be an unexplained phenomenon.
**THEREFORE** is where we pay off the exposition and conflict. In our Hollywood
story it's where we solve the murder; in science it's where we learn something
about reality and pose the next exciting question to investigate.
*Note*: The ABT framework is **not the only way to tell a story**. It is a
simplified framework to help us get started!
32\.2 Exercises: Judging Narrative Content
------------------------------------------
Now let’s put all that narrative theory to use! We’re going to judge a number of
graphs based on their narrative content, placing them at a point on Olson’s
narrative spectrum. To do so, we’re going to need some framing for the following
exercise:
**For this exercise**, pretend that you are going to show the following graphs
to a very busy data scientist.
The following graphs are not intended for you to use to discover things. They
are intended to communicate your findings to someone else. Your data science
colleague is smart and competent (she knows what a boxplot is, understands
variability, etc.), but she’s also busy. You need to present a figure that
*tells a story* quickly, or she’s going to use her limited time to think about
something else.
**Your task**: Study the following graphs and determine *the closest
point*—AAA, ABT, or DHY—near which the example lies on the narrative spectrum.
Keep in mind your intended audience (your colleague) when judging how much or
how little narrative content is present.
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 32\.2\.1 **q1** Identify the point on the narrative spectrum, and justify your answer.
*Hint*: Try telling yourself a story based on the graph! This can be your
**justification** for the narrative spectrum point you select.
```
## Saving 7 x 5 in image
```
**Classify**:
ABT
**Justify**
At low carat, an improvement in `cut` tends to *decrease* `price`. However,
at a higher carat, an improvement in `cut` tends to *increase* `price`.
This graph conveys a small amount of information, but still manages to
introduce some conflict.
### 32\.2\.2 **q2** Identify the point on the narrative spectrum, and justify your answer.
```
## `geom_smooth()` using method = 'gam' and formula = 'y ~ s(x, bs = "cs")'
```
```
## Saving 7 x 5 in image
## `geom_smooth()` using method = 'gam' and formula = 'y ~ s(x, bs = "cs")'
```
**Classify**:
DHY
**Justify**
Diamonds tend to be ordered in `price` by their `cut`. However, at low `carat`
the `Fair` diamonds tend to be the most pricey. Yet `Good`, `Very Good`, and
`Premium` diamonds tend to overlap. There are too many conflicting ideas
in this graph to quickly tell a compelling story.
### 32\.2\.3 **q3** Identify the point on the narrative spectrum, and justify your answer.
```
## Saving 7 x 5 in image
```
**Classify**:
AAA
**Justify**
The highest `carat` diamonds tend to be `Fair`, and `Fair` diamonds tend
to vary a lot in `depth`, and `price` varies widely for all `cut` values,
and some diamonds have `x, y, z` values at zero….
### 32\.2\.4 **q4** Turn in all your answers via Google Forms. [Link](https://forms.gle/kHuT5oufUQA7jRWG8)