text
stringlengths 87
777k
| meta.hexsha
stringlengths 40
40
| meta.size
int64 682
1.05M
| meta.ext
stringclasses 1
value | meta.lang
stringclasses 1
value | meta.max_stars_repo_path
stringlengths 8
226
| meta.max_stars_repo_name
stringlengths 8
109
| meta.max_stars_repo_head_hexsha
stringlengths 40
40
| meta.max_stars_repo_licenses
sequencelengths 1
5
| meta.max_stars_count
int64 1
23.9k
⌀ | meta.max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | meta.max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | meta.max_issues_repo_path
stringlengths 8
226
| meta.max_issues_repo_name
stringlengths 8
109
| meta.max_issues_repo_head_hexsha
stringlengths 40
40
| meta.max_issues_repo_licenses
sequencelengths 1
5
| meta.max_issues_count
int64 1
15.1k
⌀ | meta.max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | meta.max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | meta.max_forks_repo_path
stringlengths 8
226
| meta.max_forks_repo_name
stringlengths 8
109
| meta.max_forks_repo_head_hexsha
stringlengths 40
40
| meta.max_forks_repo_licenses
sequencelengths 1
5
| meta.max_forks_count
int64 1
6.05k
⌀ | meta.max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | meta.max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | meta.avg_line_length
float64 15.5
967k
| meta.max_line_length
int64 42
993k
| meta.alphanum_fraction
float64 0.08
0.97
| meta.converted
bool 1
class | meta.num_tokens
int64 33
431k
| meta.lm_name
stringclasses 1
value | meta.lm_label
stringclasses 3
values | meta.lm_q1_score
float64 0.56
0.98
| meta.lm_q2_score
float64 0.55
0.97
| meta.lm_q1q2_score
float64 0.5
0.93
| text_lang
stringclasses 53
values | text_lang_conf
float64 0.03
1
| label
float64 0
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
# Assignment 1
The goal of this assignment is to supply you with machine learning models and algorithms. In this notebook, we will cover linear and nonlinear models, the concept of loss functions and some optimization techniques. All mathematical operations should be implemented in **NumPy** only.
## Table of contents
* [1. Logistic Regression](#1.-Logistic-Regression)
* [1.1 Linear Mapping](#1.1-Linear-Mapping)
* [1.2 Sigmoid](#1.2-Sigmoid)
* [1.3 Negative Log Likelihood](#1.3-Negative-Log-Likelihood)
* [1.4 Model](#1.4-Model)
* [1.5 Simple Experiment](#1.5-Simple-Experiment)
* [2. Decision Tree](#2.-Decision-Tree)
* [2.1 Gini Index & Data Split](#2.1-Gini-Index-&-Data-Split)
* [2.2 Terminal Node](#2.2-Terminal-Node)
* [2.3 Build the Decision Tree](#2.3-Build-the-Decision-Tree)
* [3. Experiments](#3.-Experiments)
* [3.1 Decision Tree for Heart Disease Prediction](#3.1-Decision-Tree-for-Heart-Disease-Prediction)
* [3.2 Logistic Regression for Heart Disease Prediction](#3.2-Logistic-Regression-for-Heart-Disease-Prediction)
### Note
Some of the concepts below have not (yet) been discussed during the lecture. These will be discussed further during the next lectures.
### Before you begin
To check whether the code you've written is correct, we'll use **automark**. For this, we created for each of you an account with the username being your student number.
```python
import automark as am
# fill in you student number as your username
username = 'Your Username'
# to check your progress, you can run this function
am.get_progress(username)
```
So far all your tests are 'not attempted'. At the end of this notebook you'll need to have completed all test. The output of `am.get_progress(username)` should at least match the example below. However, we encourage you to take a shot at the 'not attempted' tests!
```
---------------------------------------------
| Your name / student number |
| your_email@your_domain.whatever |
---------------------------------------------
| linear_forward | not attempted |
| linear_grad_W | not attempted |
| linear_grad_b | not attempted |
| nll_forward | not attempted |
| nll_grad_input | not attempted |
| sigmoid_forward | not attempted |
| sigmoid_grad_input | not attempted |
| tree_data_split_left | not attempted |
| tree_data_split_right | not attempted |
| tree_gini_index | not attempted |
| tree_to_terminal | not attempted |
---------------------------------------------
```
```python
from __future__ import print_function, absolute_import, division # You don't need to know what this is.
import numpy as np # this imports numpy, which is used for vector- and matrix calculations
```
This notebook makes use of **classes** and their **instances** that we have already implemented for you. It allows us to write less code and make it more readable. If you are interested in it, here are some useful links:
* The official [documentation](https://docs.python.org/3/tutorial/classes.html)
* Video by *sentdex*: [Object Oriented Programming Introduction](https://www.youtube.com/watch?v=ekA6hvk-8H8)
* Antipatterns in OOP: [Stop Writing Classes](https://www.youtube.com/watch?v=o9pEzgHorH0)
# 1. Logistic Regression
We start with a very simple algorithm called **Logistic Regression**. It is a generalized linear model for 2-class classification.
It can be generalized to the case of many classes and to non-linear cases as well. However, here we consider only the simplest case.
Let us consider a data with 2 classes. Class 0 and class 1. For a given test sample, logistic regression returns a value from $[0, 1]$ which is interpreted as a probability of belonging to class 1. The set of points for which the prediction is $0.5$ is called a *decision boundary*. It is a line on a plane or a hyper-plane in a space.
Logistic regression has two trainable parameters: a weight $W$ and a bias $b$. For a vector of features $X$, the prediction of logistic regression is given by
$$
f(X) = \frac{1}{1 + \exp(-[XW + b])} = \sigma(h(X))
$$
where $\sigma(z) = \frac{1}{1 + \exp(-z)}$ and $h(X)=XW + b$.
Parameters $W$ and $b$ are fitted by maximizing the log-likelihood (or minimizing the negative log-likelihood) of the model on the training data. For a training subset $\{X_j, Y_j\}_{j=1}^N$ the normalized negative log likelihood (NLL) is given by
$$
\mathcal{L} = -\frac{1}{N}\sum_j \log\Big[ f(X_j)^{Y_j} \cdot (1-f(X_j))^{1-Y_j}\Big]
= -\frac{1}{N}\sum_j \Big[ Y_j\log f(X_j) + (1-Y_j)\log(1-f(X_j))\Big]
$$
There are different ways of fitting this model. In this assignment we consider Logistic Regression as a one-layer neural network. We use the following algorithm for the **forward** pass:
1. Linear mapping: $h=XW + b$
2. Sigmoid activation function: $f=\sigma(h)$
3. Calculation of NLL: $\mathcal{L} = -\frac{1}{N}\sum_j \Big[ Y_j\log f_j + (1-Y_j)\log(1-f_j)\Big]$
In order to fit $W$ and $b$ we perform Gradient Descent ([GD](https://en.wikipedia.org/wiki/Gradient_descent)). We choose a small learning rate $\gamma$ and after each computation of forward pass, we update the parameters
$$W_{\text{new}} = W_{\text{old}} - \gamma \frac{\partial \mathcal{L}}{\partial W}$$
$$b_{\text{new}} = b_{\text{old}} - \gamma \frac{\partial \mathcal{L}}{\partial b}$$
We use Backpropagation method ([BP](https://en.wikipedia.org/wiki/Backpropagation)) to calculate the partial derivatives of the loss function with respect to the parameters of the model.
$$
\frac{\partial\mathcal{L}}{\partial W} =
\frac{\partial\mathcal{L}}{\partial h} \frac{\partial h}{\partial W} =
\frac{\partial\mathcal{L}}{\partial f} \frac{\partial f}{\partial h} \frac{\partial h}{\partial W}
$$
$$
\frac{\partial\mathcal{L}}{\partial b} =
\frac{\partial\mathcal{L}}{\partial h} \frac{\partial h}{\partial b} =
\frac{\partial\mathcal{L}}{\partial f} \frac{\partial f}{\partial h} \frac{\partial h}{\partial b}
$$
## 1.1 Linear Mapping
First of all, you need to implement the forward pass of a linear mapping:
$$
h(X) = XW +b
$$
**Note**: here we use `n_out` as the dimensionality of the output. For logisitc regression `n_out = 1`. However, we will work with cases of `n_out > 1` in next assignments. You will **pass** the current assignment even if your implementation works only in case `n_out = 1`. If your implementation works for the cases of `n_out > 1` then you will not have to modify your method next week. All **numpy** operations are generic. It is recommended to use numpy when is it possible.
```python
def linear_forward(x_input, W, b):
"""Perform the mapping of the input
# Arguments
x_input: input of the linear function - np.array of size `(n_objects, n_in)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the output of the linear function
np.array of size `(n_objects, n_out)`
"""
#################
### YOUR CODE ###
#################
return output
```
Let's check your first function. We set the matrices $X, W, b$:
$$
X = \begin{bmatrix}
1 & -1 \\
-1 & 0 \\
1 & 1 \\
\end{bmatrix} \quad
W = \begin{bmatrix}
4 \\
2 \\
\end{bmatrix} \quad
b = \begin{bmatrix}
3 \\
\end{bmatrix}
$$
And then compute
$$
XW = \begin{bmatrix}
1 & -1 \\
-1 & 0 \\
1 & 1 \\
\end{bmatrix}
\begin{bmatrix}
4 \\
2 \\
\end{bmatrix} =
\begin{bmatrix}
2 \\
-4 \\
6 \\
\end{bmatrix} \\
XW + b =
\begin{bmatrix}
5 \\
-1 \\
9 \\
\end{bmatrix}
$$
```python
X_test = np.array([[1, -1],
[-1, 0],
[1, 1]])
W_test = np.array([[4],
[2]])
b_test = np.array([3])
h_test = linear_forward(X_test, W_test, b_test)
print(h_test)
```
```python
am.test_student_function(username, linear_forward, ['x_input', 'W', 'b'])
```
Now you need to implement the calculation of the partial derivative of the loss function with respect to the parameters of the model. As this expressions are used for the updates of the parameters, we refer to them as gradients.
$$
\frac{\partial \mathcal{L}}{\partial W} =
\frac{\partial \mathcal{L}}{\partial h}
\frac{\partial h}{\partial W} \\
\frac{\partial \mathcal{L}}{\partial b} =
\frac{\partial \mathcal{L}}{\partial h}
\frac{\partial h}{\partial b} \\
$$
```python
def linear_grad_W(x_input, grad_output, W, b):
"""Calculate the partial derivative of
the loss with respect to W parameter of the function
dL / dW = (dL / dh) * (dh / dW)
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss functions with
respect to the ouput of the dense layer (dL / dh)
np.array of size `(n_objects, n_out)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the partial derivative of the loss
with respect to W parameter of the function
np.array of size `(n_in, n_out)`
"""
#################
### YOUR CODE ###
#################
return grad_W
```
```python
am.test_student_function(username, linear_grad_W, ['x_input', 'grad_output', 'W', 'b'])
```
```python
def linear_grad_b(x_input, grad_output, W, b):
"""Calculate the partial derivative of
the loss with respect to b parameter of the function
dL / db = (dL / dh) * (dh / db)
# Arguments
x_input: input of a dense layer - np.array of size `(n_objects, n_in)`
grad_output: partial derivative of the loss functions with
respect to the ouput of the linear function (dL / dh)
np.array of size `(n_objects, n_out)`
W: np.array of size `(n_in, n_out)`
b: np.array of size `(n_out,)`
# Output
the partial derivative of the loss
with respect to b parameter of the linear function
np.array of size `(n_out,)`
"""
#################
### YOUR CODE ###
#################
return grad_b
```
```python
am.test_student_function(username, linear_grad_b, ['x_input', 'grad_output', 'W', 'b'])
```
```python
am.get_progress(username)
```
## 1.2 Sigmoid
$$
f = \sigma(h) = \frac{1}{1 + e^{-h}}
$$
Sigmoid function is applied element-wise. It does not change the dimensionality of the tensor and its implementation is shape-agnostic in general.
```python
def sigmoid_forward(x_input):
"""sigmoid nonlinearity
# Arguments
x_input: np.array of size `(n_objects, n_in)`
# Output
the output of relu layer
np.array of size `(n_objects, n_in)`
"""
#################
### YOUR CODE ###
#################
return output
```
```python
am.test_student_function(username, sigmoid_forward, ['x_input'])
```
Now you need to implement the calculation of the partial derivative of the loss function with respect to the input of sigmoid.
$$
\frac{\partial \mathcal{L}}{\partial h} =
\frac{\partial \mathcal{L}}{\partial f}
\frac{\partial f}{\partial h}
$$
Tensor $\frac{\partial \mathcal{L}}{\partial f}$ comes from the loss function. Let's calculate $\frac{\partial f}{\partial h}$
$$
\frac{\partial f}{\partial h} =
\frac{\partial \sigma(h)}{\partial h} =
\frac{\partial}{\partial h} \Big(\frac{1}{1 + e^{-h}}\Big)
= \frac{e^{-h}}{(1 + e^{-h})^2}
= \frac{1}{1 + e^{-h}} \frac{e^{-h}}{1 + e^{-h}}
= f(h) (1 - f(h))
$$
Therefore, in order to calculate the gradient of the loss with respect to the input of sigmoid function you need
to
1. calculate $f(h) (1 - f(h))$
2. multiply it element-wise by $\frac{\partial \mathcal{L}}{\partial f}$
```python
def sigmoid_grad_input(x_input, grad_output):
"""sigmoid nonlinearity gradient.
Calculate the partial derivative of the loss
with respect to the input of the layer
# Arguments
x_input: np.array of size `(n_objects, n_in)`
grad_output: np.array of size `(n_objects, n_in)`
dL / df
# Output
the partial derivative of the loss
with respect to the input of the function
np.array of size `(n_objects, n_in)`
dL / dh
"""
#################
### YOUR CODE ###
#################
return grad_input
```
```python
am.test_student_function(username, sigmoid_grad_input, ['x_input', 'grad_output'])
```
## 1.3 Negative Log Likelihood
$$
\mathcal{L}
= -\frac{1}{N}\sum_j \Big[ Y_j\log \dot{Y}_j + (1-Y_j)\log(1-\dot{Y}_j)\Big]
$$
Here $N$ is the number of objects. $Y_j$ is the real label of an object and $\dot{Y}_j$ is the predicted one.
```python
def nll_forward(target_pred, target_true):
"""Compute the value of NLL
for a given prediction and the ground truth
# Arguments
target_pred: predictions - np.array of size `(n_objects, 1)`
target_true: ground truth - np.array of size `(n_objects, 1)`
# Output
the value of NLL for a given prediction and the ground truth
scalar
"""
#################
### YOUR CODE ###
#################
return output
```
```python
am.test_student_function(username, nll_forward, ['target_pred', 'target_true'])
```
Now you need to calculate the partial derivative of NLL with with respect to its input.
$$
\frac{\partial \mathcal{L}}{\partial \dot{Y}}
=
\begin{pmatrix}
\frac{\partial \mathcal{L}}{\partial \dot{Y}_0} \\
\frac{\partial \mathcal{L}}{\partial \dot{Y}_1} \\
\vdots \\
\frac{\partial \mathcal{L}}{\partial \dot{Y}_N}
\end{pmatrix}
$$
Let's do it step-by-step
\begin{equation}
\begin{split}
\frac{\partial \mathcal{L}}{\partial \dot{Y}_0}
&= \frac{\partial}{\partial \dot{Y}_0} \Big(-\frac{1}{N}\sum_j \Big[ Y_j\log \dot{Y}_j + (1-Y_j)\log(1-\dot{Y}_j)\Big]\Big) \\
&= -\frac{1}{N} \frac{\partial}{\partial \dot{Y}_0} \Big(Y_0\log \dot{Y}_0 + (1-Y_0)\log(1-\dot{Y}_0)\Big) \\
&= -\frac{1}{N} \Big(\frac{Y_0}{\dot{Y}_0} - \frac{1-Y_0}{1-\dot{Y}_0}\Big)
= \frac{1}{N} \frac{\dot{Y}_0 - Y_0}{\dot{Y}_0 (1 - \dot{Y}_0)}
\end{split}
\end{equation}
And for the other components it can be done in exactly the same way. So the result is the vector where each component is given by
$$\frac{1}{N} \frac{\dot{Y}_j - Y_j}{\dot{Y}_j (1 - \dot{Y}_j)}$$
Or if we assume all multiplications and divisions to be done element-wise the output can be calculated as
$$
\frac{\partial \mathcal{L}}{\partial \dot{Y}} = \frac{1}{N} \frac{\dot{Y} - Y}{\dot{Y} (1 - \dot{Y})}
$$
```python
def nll_grad_input(target_pred, target_true):
"""Compute the partial derivative of NLL
with respect to its input
# Arguments
target_pred: predictions - np.array of size `(n_objects, 1)`
target_true: ground truth - np.array of size `(n_objects, 1)`
# Output
the partial derivative
of NLL with respect to its input
np.array of size `(n_objects, 1)`
"""
#################
### YOUR CODE ###
#################
return grad_input
```
```python
am.test_student_function(username, nll_grad_input, ['target_pred', 'target_true'])
```
```python
am.get_progress(username)
```
## 1.4 Model
Here we provide a model for your. It consist of the function which you have implmeneted above
```python
class LogsticRegressionGD(object):
def __init__(self, n_in, lr=0.05):
super().__init__()
self.lr = lr
self.b = np.zeros(1, )
self.W = np.random.randn(n_in, 1)
def forward(self, x):
self.h = linear_forward(x, self.W, self.b)
y = sigmoid_forward(self.h)
return y
def update_params(self, x, nll_grad):
# compute gradients
grad_h = sigmoid_grad_input(self.h, nll_grad)
grad_W = linear_grad_W(x, grad_h, self.W, self.b)
grad_b = linear_grad_b(x, grad_h, self.W, self.b)
# update params
self.W = self.W - self.lr * grad_W
self.b = self.b - self.lr * grad_b
```
## 1.5 Simple Experiment
```python
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
# Generate some data
def generate_2_circles(N=100):
phi = np.linspace(0.0, np.pi * 2, 100)
X1 = 1.1 * np.array([np.sin(phi), np.cos(phi)])
X2 = 3.0 * np.array([np.sin(phi), np.cos(phi)])
Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))
X = np.hstack([X1,X2]).T
return X, Y
def generate_2_gaussians(N=100):
phi = np.linspace(0.0, np.pi * 2, 100)
X1 = np.random.normal(loc=[1, 2], scale=[2.5, 0.9], size=(N, 2))
X1 = X1.dot(np.array([[0.7, -0.7], [0.7, 0.7]]))
X2 = np.random.normal(loc=[-2, 0], scale=[1, 1.5], size=(N, 2))
X2 = X2.dot(np.array([[0.7, 0.7], [-0.7, 0.7]]))
Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))
X = np.vstack([X1,X2])
return X, Y
def split(X, Y, train_ratio=0.7):
size = len(X)
train_size = int(size * train_ratio)
indices = np.arange(size)
np.random.shuffle(indices)
train_indices = indices[:train_size]
test_indices = indices[train_size:]
return X[train_indices], Y[train_indices], X[test_indices], Y[test_indices]
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
X, Y = generate_2_circles()
ax1.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')
ax1.set_aspect('equal')
X, Y = generate_2_gaussians()
ax2.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')
ax2.set_aspect('equal')
```
```python
X_train, Y_train, X_test, Y_test = split(*generate_2_gaussians(), 0.7)
```
```python
# let's train our model
model = LogsticRegressionGD(2, 0.05)
for step in range(30):
Y_pred = model.forward(X_train)
loss_value = nll_forward(Y_pred, Y_train)
accuracy = ((Y_pred > 0.5) == Y_train).mean()
print('Step: {} \t Loss: {:.3f} \t Acc: {:.1f}%'.format(step, loss_value, accuracy * 100))
loss_grad = nll_grad_input(Y_pred, Y_train)
model.update_params(X_train, loss_grad)
print('\n\nTesting...')
Y_test_pred = model.forward(X_test)
test_accuracy = ((Y_test_pred > 0.5) == Y_test).mean()
print('Acc: {:.1f}%'.format(test_accuracy * 100))
```
```python
def plot_model_prediction(prediction_func, X, Y, hard=True):
u_min = X[:, 0].min()-1
u_max = X[:, 0].max()+1
v_min = X[:, 1].min()-1
v_max = X[:, 1].max()+1
U, V = np.meshgrid(np.linspace(u_min, u_max, 100), np.linspace(v_min, v_max, 100))
UV = np.stack([U.ravel(), V.ravel()]).T
c = prediction_func(UV).ravel()
if hard:
c = c > 0.5
plt.scatter(UV[:,0], UV[:,1], c=c, edgecolors= 'none', alpha=0.15)
plt.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'black')
plt.xlim(left=u_min, right=u_max)
plt.ylim(bottom=v_min, top=v_max)
plt.axes().set_aspect('equal')
plt.show()
plot_model_prediction(lambda x: model.forward(x), X_train, Y_train, False)
plot_model_prediction(lambda x: model.forward(x), X_train, Y_train, True)
```
```python
# Now run the same experiment on 2 circles
```
# 2. Decision Tree
The next model we look at is called **Decision Tree**. This type of model is non-parametric, meaning in contrast to **Logistic Regression** we do not have any parameters here that need to be trained.
Let us consider a simple binary decision tree for deciding on the two classes of "creditable" and "Not creditable".
Each node, except the leafs, asks a question about the the client in question. A decision is made by going from the root node to a leaf node, while considering the clients situation. The situation of the client, in this case, is fully described by the features:
1. Checking account balance
2. Duration of requested credit
3. Payment status of previous loan
4. Length of current employment
In order to build a decision tree we need training data. To carry on the previous example: we need a number of clients for which we know the properties 1.-4. and their creditability.
The process of building a decision tree starts with the root node and involves the following steps:
1. Choose a splitting criteria and add it to the current node.
2. Split the dataset at the current node into those that fullfil the criteria and those that do not.
3. Add a child node for each data split.
4. For each child node decide on either A. or B.:
1. Repeat from 1. step
2. Make it a leaf node: The predicted class label is decided by the majority vote over the training data in the current split.
## 2.1 Gini Index & Data Split
Deciding on how to split your training data at each node is dominated by the following two criterias:
1. Does the rule help me make a final decision?
2. Is the rule general enough such that it applies not only to my training data, but also to new unseen examples?
When considering our previous example, splitting the clients by their handedness would not help us deciding on their creditability. Knowning if a rule will generalize is usually a hard call to make, but in practice we rely on [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) principle. Thus the less rules we use, the better we believe it to generalize to previously unseen examples.
One way to measure the quality of a rule is by the [**Gini Index**](https://en.wikipedia.org/wiki/Gini_coefficient).
Since we only consider binary classification, it is calculated by:
$$
Gini = \sum_{n\in\{L,R\}}\frac{|S_n|}{|S|}\left( 1 - \sum_{c \in C} p_{S_n}(c)^2\right)\\
p_{S_n}(c) = \frac{|\{\mathbf{x}_{i}\in \mathbf{X}|y_{i} = c, i \in S_n\}|}{|S_n|}, n \in \{L, R\}
$$
with $|C|=2$ being your set of class labels and $S_L$ and $S_R$ the two splits determined by the splitting criteria.
The lower the gini score, the better the split. In the extreme case, where all class labels are the same in each split respectively, the gini index takes the value of $0$.
```python
def tree_gini_index(Y_left, Y_right, classes):
"""Compute the Gini Index.
# Arguments
Y_left: class labels of the data left set
np.array of size `(n_objects, 1)`
Y_right: class labels of the data right set
np.array of size `(n_objects, 1)`
classes: list of all class values
# Output
gini: scalar `float`
"""
gini = 0.0
#################
### YOUR CODE ###
#################
return gini
```
```python
am.test_student_function(username, tree_gini_index, ['Y_left', 'Y_right', 'classes'])
```
At each node in the tree, the data is split according to a split criterion and each split is passed onto the left/right child respectively.
Implement the following function to return all rows in `X` and `Y` such that the left child gets all examples that are less than the split value and vice versa.
```python
def tree_split_data_left(X, Y, feature_index, split_value):
"""Split the data `X` and `Y`, at the feature indexed by `feature_index`.
If the value is less than `split_value` then return it as part of the left group.
# Arguments
X: np.array of size `(n_objects, n_in)`
Y: np.array of size `(n_objects, 1)`
feature_index: index of the feature to split at
split_value: value to split between
# Output
(XY_left): np.array of size `(n_objects_left, n_in + 1)`
"""
X_left, Y_left = None, None
#################
### YOUR CODE ###
#################
XY_left = np.concatenate([X_left, Y_left], axis=-1)
return XY_left
def tree_split_data_right(X, Y, feature_index, split_value):
"""Split the data `X` and `Y`, at the feature indexed by `feature_index`.
If the value is greater or equal than `split_value` then return it as part of the right group.
# Arguments
X: np.array of size `(n_objects, n_in)`
Y: np.array of size `(n_objects, 1)`
feature_index: index of the feature to split at
split_value: value to split between
# Output
(XY_left): np.array of size `(n_objects_left, n_in + 1)`
"""
X_right, Y_right = None, None
#################
### YOUR CODE ###
#################
XY_right = np.concatenate([X_right, Y_right], axis=-1)
return XY_right
```
```python
am.test_student_function(username, tree_split_data_left, ['X', 'Y', 'feature_index', 'split_value'])
```
```python
am.test_student_function(username, tree_split_data_right, ['X', 'Y', 'feature_index', 'split_value'])
```
```python
am.get_progress(username)
```
Now to find the split rule with the lowest gini score, we brute-force search over all features and values to split by.
```python
def tree_best_split(X, Y):
class_values = list(set(Y.flatten().tolist()))
r_index, r_value, r_score = float("inf"), float("inf"), float("inf")
r_XY_left, r_XY_right = (X,Y), (X,Y)
for feature_index in range(X.shape[1]):
for row in X:
XY_left = tree_split_data_left(X, Y, feature_index, row[feature_index])
XY_right = tree_split_data_right(X, Y, feature_index, row[feature_index])
XY_left, XY_right = (XY_left[:,:-1], XY_left[:,-1:]), (XY_right[:,:-1], XY_right[:,-1:])
gini = tree_gini_index(XY_left[1], XY_right[1], class_values)
if gini < r_score:
r_index, r_value, r_score = feature_index, row[feature_index], gini
r_XY_left, r_XY_right = XY_left, XY_right
return {'index':r_index, 'value':r_value, 'XY_left': r_XY_left, 'XY_right':r_XY_right}
```
## 2.2 Terminal Node
The leaf nodes predict the label of an unseen example, by taking a majority vote over all training class labels in that node.
```python
def tree_to_terminal(Y):
"""The most frequent class label, out of the data points belonging to the leaf node,
is selected as the predicted class.
# Arguments
Y: np.array of size `(n_objects)`
# Output
label: most frequent label of `Y.dtype`
"""
label = None
#################
### YOUR CODE ###
#################
return label
```
```python
am.test_student_function(username, tree_to_terminal, ['Y'])
```
```python
am.get_progress(username)
```
## 2.3 Build the Decision Tree
Now we recursively build the decision tree, by greedily splitting the data at each node according to the gini index.
To prevent the model from overfitting, we transform a node into a terminal/leaf node, if:
1. a maximum depth is reached.
2. the node does not reach a minimum number of training samples.
```python
def tree_recursive_split(X, Y, node, max_depth, min_size, depth):
XY_left, XY_right = node['XY_left'], node['XY_right']
del(node['XY_left'])
del(node['XY_right'])
# check for a no split
if XY_left[0].size <= 0 or XY_right[0].size <= 0:
node['left_child'] = node['right_child'] = tree_to_terminal(np.concatenate((XY_left[1], XY_right[1])))
return
# check for max depth
if depth >= max_depth:
node['left_child'], node['right_child'] = tree_to_terminal(XY_left[1]), tree_to_terminal(XY_right[1])
return
# process left child
if XY_left[0].shape[0] <= min_size:
node['left_child'] = tree_to_terminal(XY_left[1])
else:
node['left_child'] = tree_best_split(*XY_left)
tree_recursive_split(X, Y, node['left_child'], max_depth, min_size, depth+1)
# process right child
if XY_right[0].shape[0] <= min_size:
node['right_child'] = tree_to_terminal(XY_right[1])
else:
node['right_child'] = tree_best_split(*XY_right)
tree_recursive_split(X, Y, node['right_child'], max_depth, min_size, depth+1)
def build_tree(X, Y, max_depth, min_size):
root = tree_best_split(X, Y)
tree_recursive_split(X, Y, root, max_depth, min_size, 1)
return root
```
By printing the split criteria or the predicted class at each node, we can visualise the decising making process.
Both the tree and a a prediction can be implemented recursively, by going from the root to a leaf node.
```python
def print_tree(node, depth=0):
if isinstance(node, dict):
print('%s[X%d < %.3f]' % ((depth*' ', (node['index']+1), node['value'])))
print_tree(node['left_child'], depth+1)
print_tree(node['right_child'], depth+1)
else:
print('%s[%s]' % ((depth*' ', node)))
def tree_predict_single(x, node):
if isinstance(node, dict):
if x[node['index']] < node['value']:
return tree_predict_single(x, node['left_child'])
else:
return tree_predict_single(x, node['right_child'])
return node
def tree_predict_multi(X, node):
Y = np.array([tree_predict_single(row, node) for row in X])
return Y[:, None] # size: (n_object,) -> (n_object, 1)
```
Let's test our decision tree model on some toy data.
```python
X_train, Y_train, X_test, Y_test = split(*generate_2_circles(), 0.7)
tree = build_tree(X_train, Y_train, 4, 1)
Y_pred = tree_predict_multi(X_test, tree)
test_accuracy = (Y_pred == Y_test).mean()
print('Test Acc: {:.1f}%'.format(test_accuracy * 100))
```
We print the decision tree in [pre-order](https://en.wikipedia.org/wiki/Tree_traversal#Pre-order_(NLR)).
```python
print_tree(tree)
```
```python
plot_model_prediction(lambda x: tree_predict_multi(x, tree), X_test, Y_test)
```
# 3. Experiments
The [Cleveland Heart Disease](https://archive.ics.uci.edu/ml/datasets/Heart+Disease) dataset aims at predicting the presence of heart disease based on other available medical information of the patient.
Although the whole database contains 76 attributes, we focus on the following 14:
1. Age: age in years
2. Sex:
* 0 = female
* 1 = male
3. Chest pain type:
* 1 = typical angina
* 2 = atypical angina
* 3 = non-anginal pain
* 4 = asymptomatic
4. Trestbps: resting blood pressure in mm Hg on admission to the hospital
5. Chol: serum cholestoral in mg/dl
6. Fasting blood sugar: > 120 mg/dl
* 0 = false
* 1 = true
7. Resting electrocardiographic results:
* 0 = normal
* 1 = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
* 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria
8. Thalach: maximum heart rate achieved
9. Exercise induced angina:
* 0 = no
* 1 = yes
10. Oldpeak: ST depression induced by exercise relative to rest
11. Slope: the slope of the peak exercise ST segment
* 1 = upsloping
* 2 = flat
* 3 = downsloping
12. Ca: number of major vessels (0-3) colored by flourosopy
13. Thal:
* 3 = normal
* 6 = fixed defect
* 7 = reversable defect
14. Target: diagnosis of heart disease (angiographic disease status)
* 0 = < 50% diameter narrowing
* 1 = > 50% diameter narrowing
The 14. attribute is the target variable that we would like to predict based on the rest.
We have prepared some helper functions to download and pre-process the data in `heart_disease_data.py`
```python
import heart_disease_data
```
```python
X, Y = heart_disease_data.download_and_preprocess()
X_train, Y_train, X_test, Y_test = split(X, Y, 0.7)
```
Let's have a look at some examples
```python
print(X_train[0:2])
print(Y_train[0:2])
# TODO feel free to explore more examples and see if you can predict the presence of a heart disease
```
## 3.1 Decision Tree for Heart Disease Prediction
Let's build a decision tree model on the training data and see how well it performs
```python
# TODO: you are free to make use of code that we provide in previous cells
# TODO: play around with different hyper parameters and see how these impact your performance
tree = build_tree(X_train, Y_train, 5, 4)
Y_pred = tree_predict_multi(X_test, tree)
test_accuracy = (Y_pred == Y_test).mean()
print('Test Acc: {:.1f}%'.format(test_accuracy * 100))
```
How did changing the hyper parameters affect the test performance? Usually hyper parameters are tuned using a hold-out [validation set](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets#Validation_dataset) instead of the test set.
## 3.2 Logistic Regression for Heart Disease Prediction
Instead of manually going through the data to find possible correlations, let's try training a logistic regression model on the data.
```python
# TODO: you are free to make use of code that we provide in previous cells
# TODO: play around with different hyper parameters and see how these impact your performance
```
How well did your model perform? Was it actually better then guessing? Let's look at the empirical mean of the target.
```python
Y_train.mean()
```
So what is the problem? Let's have a look at the learned parameters of our model.
```python
print(model.W, model.b)
```
If you trained sufficiently many steps you'll probably see how some weights are much larger than others. Have a look at what range the parameters were initialized and how much change we allow per step (learning rate). Compare this to the scale of the input features. Here an important concept arises, when we want to train on real world data:
[Feature Scaling](https://en.wikipedia.org/wiki/Feature_scaling).
Let's try applying it on our data and see how it affects our performance.
```python
# TODO: Rescale the input features and train again
```
Notice that we did not need any rescaling for the decision tree. Can you think of why?
| 573a6d2178c0300d5138735a1933012506c0e4c7 | 48,702 | ipynb | Jupyter Notebook | week_2/ML.ipynb | shoemaker9/aml2019 | f09c3ac942158b6fe9748c76552d6ace73f47815 | [
"MIT"
] | null | null | null | week_2/ML.ipynb | shoemaker9/aml2019 | f09c3ac942158b6fe9748c76552d6ace73f47815 | [
"MIT"
] | null | null | null | week_2/ML.ipynb | shoemaker9/aml2019 | f09c3ac942158b6fe9748c76552d6ace73f47815 | [
"MIT"
] | null | null | null | 35.012221 | 483 | 0.544618 | true | 9,195 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91848 | 0.90599 | 0.832134 | __label__eng_Latn | 0.962431 | 0.771659 |
# Propriedade da multiplicação e janelas
Imagine que você tenha um fenômeno (ou sinal) com duração infinita, $x(t)$, que deseja observar (medir).
Quando medimos $x(t)$ por um tempo finito estamos observando o fenômeno por uma janela temporal $w(t)$ finita. Na prática, o sinal observado é:
\begin{equation}
x_o(t) = x(t)w(t)
\end{equation}
e pela propriedade da dualidade e do teorema da convolução, o espectro observado será
\begin{equation}
X_o(\mathrm{j} \omega) = X(\mathrm{j} \omega) * W(\mathrm{j} \omega)
\end{equation}
ou seja, o espectro observado terá em si as características do fenômeno em si e da janela de observação. Vamos investigar os espectros dos sinais $x(t)$, $w(t)$ e $x_o(t)$.
```python
# importar as bibliotecas necessárias
import numpy as np # arrays
import matplotlib.pyplot as plt # plots
plt.rcParams.update({'font.size': 14})
from scipy import signal
```
# O sinal infinito (ou ao menos de longa duração)
```python
fs = 1000
time = np.arange(0, 1000, 1/fs)
xt = np.cos(2*np.pi*10*time)
# Espectro
Xjw = np.fft.fft(xt)
freq = np.linspace(0, (len(Xjw)-1)*fs/len(Xjw), len(Xjw))
# tempo
plt.figure(figsize=(12,4))
plt.plot(time, xt, '-b', linewidth = 1, label = r'$x(t)$')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 1000))
# frequencia
plt.figure(figsize=(12,4))
plt.semilogx(freq, 20*np.log10(2*np.abs(Xjw)/len(Xjw)), '-b', linewidth = 1, label = r'$|X(\mathrm{j} \omega)|$')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Frequência [Hz]')
plt.ylabel(r'$|X(\mathrm{j} \omega)|$ [-]')
plt.xlim((1, fs/2))
plt.ylim((-80, 10));
```
# Definamos algumas janelas
1. Retangular
2. Hanning
3. Hammming
```python
# Tempo de observação
Tp = 10
Np = len(time[time<Tp]) # numero de amostras contidas na janela de observação (início do sinal)
tw = np.linspace(0, Tp, Np) # vetor temporal de observação
# Janela retangular
w_ret = np.concatenate((np.ones(Np), np.zeros(len(xt)-Np)))
W_ret = np.fft.fft(w_ret)
# Janela Hanning
w_hann = np.concatenate((signal.hann(Np), np.zeros(len(xt)-Np)))
W_hann = np.fft.fft(w_hann)
# Janela Hamming
w_hamm = np.concatenate((signal.hamming(Np), np.zeros(len(xt)-Np)))
W_hamm = np.fft.fft(w_hamm)
# tempo
plt.figure(figsize=(12,4))
plt.plot(time, w_ret, '-r', linewidth = 3, label = 'Rect.')
plt.plot(time, w_hann, '-g', linewidth = 3, label = 'Hanning')
plt.plot(time, w_hamm, '-k', linewidth = 3, label = 'Hamming')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 2*Tp))
plt.ylim((0, 1.1))
# frequencia
plt.figure(figsize=(12,4))
plt.semilogx(freq, 20*np.log10(2*np.abs(W_ret)/Np), '-r', linewidth = 3, label = 'Rect')
plt.semilogx(freq, 20*np.log10(2*np.abs(W_hann)/Np), '-g', linewidth = 3, label = 'Hanning')
plt.semilogx(freq, 20*np.log10(2*np.abs(W_hamm)/Np), '-k', linewidth = 3, label = 'Hamming')
plt.legend(loc = 'lower left')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Frequência [Hz]')
plt.ylabel(r'$|W(\mathrm{j} \omega)|$ [-]')
plt.xlim((0.001, 10))
plt.ylim((-80, 10));
```
# Observemos o espectro resultante
```python
# Retangular
xo_ret = xt*w_ret
Xo_ret = np.fft.fft(xo_ret)
# Hanning
xo_hann = xt*w_hann
Xo_hann = np.fft.fft(xo_hann)
f_hann = np.sqrt(np.sum(w_ret**2)/np.sum(w_hann**2))
# Hamming
xo_hamm = xt*w_hamm
Xo_hamm = np.fft.fft(xo_hamm)
f_hamm = np.sqrt(np.sum(w_ret**2)/np.sum(w_hamm**2))
# tempo
plt.figure(figsize=(12,4))
plt.plot(time, xo_ret, '-r', linewidth = 2, label = 'Rect.')
plt.plot(time, xo_hann, '-g', linewidth = 2, label = 'Hanning')
plt.plot(time, xo_hamm, '-k', linewidth = 2, label = 'Hamming')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.xlim((0, 2*Tp))
plt.ylim((-1.1, 1.1))
# frequencia
plt.figure(figsize=(12,4))
plt.semilogx(freq, 20*np.log10(2*np.abs(Xo_ret)/Np), '-r', linewidth = 2, label = 'Rect')
plt.semilogx(freq, 20*np.log10(2*f_hann*np.abs(Xo_hann)/Np), '-g', linewidth = 2, label = 'Hanning')
plt.semilogx(freq, 20*np.log10(2*f_hamm*np.abs(Xo_hamm)/Np), '-k', linewidth = 2, label = 'Hamming')
plt.legend(loc = 'lower left')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Frequência [Hz]')
plt.ylabel(r'$|W(\mathrm{j} \omega)|$ [-]')
plt.xlim((1, fs/2))
plt.ylim((-80, 10));
```
| 8638d2e64a8902d181a010bc4b46cc7bc8903952 | 251,592 | ipynb | Jupyter Notebook | Aula 27 - propriedade da multiplicacao e janelas/Jamelas.ipynb | RicardoGMSilveira/codes_proc_de_sinais | e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0 | [
"CC0-1.0"
] | 8 | 2020-10-01T20:59:33.000Z | 2021-07-27T22:46:58.000Z | Aula 27 - propriedade da multiplicacao e janelas/Jamelas.ipynb | RicardoGMSilveira/codes_proc_de_sinais | e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0 | [
"CC0-1.0"
] | null | null | null | Aula 27 - propriedade da multiplicacao e janelas/Jamelas.ipynb | RicardoGMSilveira/codes_proc_de_sinais | e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0 | [
"CC0-1.0"
] | 9 | 2020-10-15T12:08:22.000Z | 2021-04-12T12:26:53.000Z | 852.854237 | 62,988 | 0.947379 | true | 1,540 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.782662 | 0.703709 | __label__por_Latn | 0.357245 | 0.473282 |
# Scenario A - Noise Level Variation (results evaluation)
This file is used to evaluate the inference (numerical) results.
The model used in the inference of the parameters is formulated as follows:
\begin{equation}
\large y = f(x) = \sum\limits_{m=1}^M \big[A_m \cdot e^{-\frac{(x-\mu_m)^2}{2\cdot\sigma_m^2}}\big] + \epsilon
\end{equation}
```python
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pymc3 as pm
import arviz as az
#az.style.use('arviz-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))
```
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Running on PyMC3 v3.8
## Load results summary
```python
# load results from disk
df = pd.read_csv('./scenario_noise.csv')
df.index += 1
#df.sort_values(by=['waic'], ascending=False)
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>r_hat</th>
<th>mcse</th>
<th>ess</th>
<th>bfmi</th>
<th>r2</th>
<th>waic</th>
<th>epsilon</th>
<th>epsilon_real</th>
<th>run</th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>1.792</td>
<td>17.6343</td>
<td>2.7</td>
<td>0.983629</td>
<td>0.819621</td>
<td>5224.546374</td>
<td>1.363282</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>1.046</td>
<td>2.6305</td>
<td>6995.3</td>
<td>0.797742</td>
<td>0.906804</td>
<td>5396.323351</td>
<td>1.449872</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>1.497</td>
<td>1.2642</td>
<td>101.8</td>
<td>1.264172</td>
<td>0.999211</td>
<td>-3299.410427</td>
<td>0.079342</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>1.000</td>
<td>0.0000</td>
<td>35333.5</td>
<td>1.010989</td>
<td>0.999781</td>
<td>-3282.649651</td>
<td>0.080546</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>5</th>
<td>1.010</td>
<td>1.2969</td>
<td>2590.5</td>
<td>0.505033</td>
<td>0.720291</td>
<td>7475.069122</td>
<td>2.909367</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>6</th>
<td>1.375</td>
<td>0.5484</td>
<td>1384.7</td>
<td>1.251146</td>
<td>0.876761</td>
<td>5318.837708</td>
<td>1.421364</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>7</th>
<td>1.345</td>
<td>1.1971</td>
<td>291.5</td>
<td>1.154194</td>
<td>0.995257</td>
<td>1016.922391</td>
<td>0.338878</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>8</th>
<td>1.763</td>
<td>3.4459</td>
<td>10.2</td>
<td>0.497526</td>
<td>0.699698</td>
<td>7306.713303</td>
<td>1.415655</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>9</th>
<td>1.168</td>
<td>3.6623</td>
<td>31.6</td>
<td>0.441019</td>
<td>0.934793</td>
<td>4777.161077</td>
<td>1.161842</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>10</th>
<td>2.308</td>
<td>0.2235</td>
<td>34.0</td>
<td>0.003836</td>
<td>0.993789</td>
<td>1066.920371</td>
<td>0.361880</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<th>11</th>
<td>1.735</td>
<td>14.4972</td>
<td>23.0</td>
<td>0.986735</td>
<td>0.977436</td>
<td>3291.500964</td>
<td>0.711856</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>12</th>
<td>1.000</td>
<td>0.0393</td>
<td>29583.1</td>
<td>1.008336</td>
<td>0.799142</td>
<td>6492.833368</td>
<td>2.099084</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>13</th>
<td>1.999</td>
<td>7.4211</td>
<td>2.7</td>
<td>1.393198</td>
<td>0.998259</td>
<td>171.074332</td>
<td>0.162321</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>14</th>
<td>2.739</td>
<td>0.7501</td>
<td>3.3</td>
<td>1.003822</td>
<td>0.999518</td>
<td>1309.262642</td>
<td>0.302636</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>15</th>
<td>2.015</td>
<td>8.2185</td>
<td>1218.0</td>
<td>0.001468</td>
<td>0.707638</td>
<td>7196.037268</td>
<td>1.450688</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>16</th>
<td>1.000</td>
<td>0.0353</td>
<td>41440.1</td>
<td>0.976178</td>
<td>0.636774</td>
<td>7525.263090</td>
<td>2.955174</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>17</th>
<td>1.833</td>
<td>9.2352</td>
<td>4.7</td>
<td>0.442542</td>
<td>0.757043</td>
<td>6118.781046</td>
<td>1.620072</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>18</th>
<td>1.060</td>
<td>3.0758</td>
<td>400.1</td>
<td>0.434032</td>
<td>0.757816</td>
<td>6632.806131</td>
<td>2.164517</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>19</th>
<td>1.980</td>
<td>11.4349</td>
<td>2.5</td>
<td>0.918732</td>
<td>0.986219</td>
<td>3106.783921</td>
<td>0.520355</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>20</th>
<td>2.131</td>
<td>9.1774</td>
<td>7.5</td>
<td>0.486471</td>
<td>0.265548</td>
<td>10582.633054</td>
<td>2.771358</td>
<td>0.10</td>
<td>1</td>
</tr>
<tr>
<th>21</th>
<td>1.827</td>
<td>1.6465</td>
<td>20.7</td>
<td>0.494267</td>
<td>0.096097</td>
<td>7580.319020</td>
<td>1.563907</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>22</th>
<td>1.000</td>
<td>0.0983</td>
<td>11285.1</td>
<td>0.861854</td>
<td>0.779369</td>
<td>7063.682095</td>
<td>2.534114</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>23</th>
<td>1.015</td>
<td>1.6263</td>
<td>6300.3</td>
<td>0.001088</td>
<td>0.725909</td>
<td>5757.857579</td>
<td>1.571888</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>24</th>
<td>1.660</td>
<td>3.1826</td>
<td>4250.5</td>
<td>0.992823</td>
<td>0.932047</td>
<td>7226.362048</td>
<td>0.632385</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>25</th>
<td>2.199</td>
<td>3.5816</td>
<td>20.4</td>
<td>0.531355</td>
<td>0.964774</td>
<td>3814.780677</td>
<td>0.942123</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>26</th>
<td>1.455</td>
<td>5.5307</td>
<td>42.7</td>
<td>0.001519</td>
<td>0.757377</td>
<td>6417.173372</td>
<td>2.050267</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>27</th>
<td>1.938</td>
<td>3.8205</td>
<td>11.4</td>
<td>0.514028</td>
<td>0.513889</td>
<td>8918.640812</td>
<td>2.004500</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>28</th>
<td>2.002</td>
<td>0.1386</td>
<td>26.7</td>
<td>0.016020</td>
<td>0.989207</td>
<td>1289.054065</td>
<td>0.396308</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>29</th>
<td>1.000</td>
<td>0.0008</td>
<td>30679.2</td>
<td>0.996755</td>
<td>0.911607</td>
<td>4596.484693</td>
<td>1.114364</td>
<td>0.25</td>
<td>1</td>
</tr>
<tr>
<th>30</th>
<td>1.000</td>
<td>0.1927</td>
<td>14839.5</td>
<td>0.917105</td>
<td>0.960027</td>
<td>3584.065631</td>
<td>0.793608</td>
<td>0.25</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
#suc = df.loc[(df['r_hat'] <= 1.1) & (df['r2'] >= 0.99)]
suc = df.loc[(df['r_hat'] <= 1.1) & (df['r2'] >= 0.99) & (df['run'].astype(str) == '1')]
suc
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>r_hat</th>
<th>mcse</th>
<th>ess</th>
<th>bfmi</th>
<th>r2</th>
<th>waic</th>
<th>epsilon</th>
<th>epsilon_real</th>
<th>run</th>
</tr>
</thead>
<tbody>
<tr>
<th>4</th>
<td>1.0</td>
<td>0.0</td>
<td>35333.5</td>
<td>1.010989</td>
<td>0.999781</td>
<td>-3282.649651</td>
<td>0.080546</td>
<td>0.05</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
run = '1'
suc_l = df.loc[(df['r_hat'] <= 1.1) & (df['r2'] >= 0.99) & (df['run'].astype(str) == run)
& (df['epsilon_real'].astype(str) == '0.05')]
suc_m = df.loc[(df['r_hat'] <= 1.1) & (df['r2'] >= 0.99) & (df['run'].astype(str) == run)
& (df['epsilon_real'].astype(str) == '0.1')]
suc_h = df.loc[(df['r_hat'] <= 1.1) & (df['r2'] >= 0.99) & (df['run'].astype(str) == run)
& (df['epsilon_real'].astype(str) == '0.25')]
print("1%: ", len(suc_l))
print("2%: ", len(suc_m))
print("5%: ", len(suc_h))
print("t : ", len(suc_l)+len(suc_m)+len(suc_h))
```
1%: 1
2%: 0
5%: 0
t : 1
```python
```
| f5a18e8a02c6ac6cbc913c88e7a82adcaa1fa6e5 | 21,680 | ipynb | Jupyter Notebook | code/scenarios/scenario_a/scenario_noise_evaluation.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
] | 1 | 2021-01-07T02:22:25.000Z | 2021-01-07T02:22:25.000Z | code/scenarios/scenario_a/scenario_noise_evaluation.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
] | null | null | null | code/scenarios/scenario_a/scenario_noise_evaluation.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
] | null | null | null | 32.118519 | 130 | 0.344557 | true | 4,410 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.640636 | 0.504777 | __label__kor_Hang | 0.132317 | 0.011095 |
<p align="center">
</p>
# Data Science Basics in Python Series
## Chapter VI: Basic Statistical Analysis in Python
### Michael Pyrcz, Associate Professor, The University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
#### Basic Univariate Statistics
Here's a demonstration of the calculation and visualization for basic statistical analysis in Python.
We will use the following Python packages:
* [statistics](https://docs.python.org/3/library/statistics.html)
* [SciPy](https://www.scipy.org/)
* [MatPlotLib](https://matplotlib.org/)
We will cover a variety of common, basic statistical analyses and displays.
#### Basic Univariate Statistics
This tutorial includes the methods and operations that would commonly be required for Engineers and Geoscientists working with Regularly Gridded Data Structures for the purpose of:
1. Data Checking and Cleaning
2. Data Mining / Inferential Data Analysis
3. Predictive Modeling
for Data Analytics, Geostatistics and Machine Learning.
#### General Definitions
**Statistics**
collecting, organizing, and interpreting data, as well as drawing conclusions and making decisions.
**Geostatistics**
a branch of applied statistics that integrates:
1. the spatial (geological) context,
2. the spatial relationships,
3. volumetric support / scale, and
4. uncertainty.
**Data Analytics**
use of statistics [with visualization] to support decision making.
**Big Data Analytics**
process of examining large and varied data sets (big data) to discover patterns and make decisions.
#### General Definitions
**Variable** or **Feature**
* any property measured / observed in a study (e.g., porosity, permeability, mineral concentrations, saturations, contaminant concentration)
* the measure often requires significant analysis, interpretation and uncertainty, 'data softness'
#### General Definitions
**Population**
exhaustive, finite list of property of interest over area of interest. Generally the entire population is not accessible.
* exhaustive set of porosity at each location within a reservoir
**Sample**
set of values, locations that have been measured
* porosity data from well-logs within a reservoir
#### General Definitions
**Parameters**
summary measure of a population
* population mean, population standard deviation, we rarely have access to this
* model parameters is different and we will cover later.
**Statistics**
summary measure of a sample
* sample mean, sample standard deviation, we use statistics as estimates of the parameters
#### Covered Parameters / Statistics
We cover the following parameters and statistics.
| Central Tendency | Dispersion | Outliers | Distributions Shape |
| :--------------: | :--------: | :------: | :-----------------: |
| Arithmetic Average / Mean | Variance | Tukey Outlier Test| Skew |
| Median | Standard Deviation | | Excess Kurtosis |
| Mode | Range | | Person's Mode Skewness |
| Geometric Mean | Percentile | | Quartile Skew Coefficient |
| Harmonic Mean | Interquartile Range | | |
| Power Law Average | | | |
I have a lecture on these univariate statistics available on [YouTube](https://www.youtube.com/watch?v=wAcbA2cIqec&list=PLG19vXLQHvSB-D4XKYieEku9GQMQyAzjJ&index=11&t=0s).
#### Nonparmetric Cumulative Distribution Functions (CDFs)
**nonparametric CDF**
* plotting nonparametric distributions
**fitting CDFs**
* fitting a parametric distribution and plotting
#### Getting Started
Here's the steps to get setup to run this demonstration:
1. **Install Anaconda 3** on your machine from https://www.anaconda.com/download/.
2. **Open Jupyter Notebook**, look for the Jupyter app on your system after installing Anaconda 3.
3. **Load this Workflow** found here [PythonDataBasics_Statistics.ipynb](https://github.com/GeostatsGuy/PythonNumericalDemos/blob/master/PythonDataBasics_PedictiveMachineLearning.ipynb).
4. **Load the data**, this workflow retreives the data from my GitHub [GeoDataSets Repository](https://github.com/GeostatsGuy/GeoDataSets). If you want to work locally, you will need to first download the data file to your working directory. The data file is found here, [2D_MV_200wells.csv](https://github.com/GeostatsGuy/GeoDataSets/blob/master/2D_MV_200wells.csv). Code is provided below to set the working directory and to load the data locally.
#### Load the required libraries
The following code loads the required libraries.
```python
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # plotting
import scipy # statistics
import statistics as stats # statistics like the mode
from scipy.stats import norm # fitting a Gaussian distribution
```
#### Set the Working Directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time). Set this to your working directory, with the above mentioned data file.
```python
os.chdir("c:/PGE383") # set the working directory
```
#### Loading Data
Let's load the provided multivariate, spatial dataset [2D_MV_200wells.csv](https://github.com/GeostatsGuy/GeoDataSets). These are the features:
| Feature | Units | Descriptions |
| :------: | :--------------: | :--------- |
| X, Y | $meters$ | Location |
| porosity | $fraction$ | rock void fraction |
| permeability | $mDarcy$ | capability of a porous rock to permit the flow of fluids through its pore spaces |
| acoustic impedance | $\frac{kg}{m^2s} 10^6$ | rock bulk density times rock acoustic velocity |
* load use the Pandas 'read_csv' function and rename the features for readable code
```python
df = pd.read_csv("2D_MV_200wells.csv") # read a .csv file in as a DataFrame
df = df.rename(columns={'facies_threshold_0.3': 'facies','permeability':'perm','acoustic_impedance':'ai'}) # rename columns of the
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>X</th>
<th>Y</th>
<th>facies</th>
<th>porosity</th>
<th>perm</th>
<th>ai</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>565</td>
<td>1485</td>
<td>1</td>
<td>0.1184</td>
<td>6.170</td>
<td>2.009</td>
</tr>
<tr>
<th>1</th>
<td>2585</td>
<td>1185</td>
<td>1</td>
<td>0.1566</td>
<td>6.275</td>
<td>2.864</td>
</tr>
<tr>
<th>2</th>
<td>2065</td>
<td>2865</td>
<td>2</td>
<td>0.1920</td>
<td>92.297</td>
<td>3.524</td>
</tr>
<tr>
<th>3</th>
<td>3575</td>
<td>2655</td>
<td>1</td>
<td>0.1621</td>
<td>9.048</td>
<td>2.157</td>
</tr>
<tr>
<th>4</th>
<td>1835</td>
<td>35</td>
<td>1</td>
<td>0.1766</td>
<td>7.123</td>
<td>3.979</td>
</tr>
</tbody>
</table>
</div>
#### Extract a Feature
Let's extract one of the features, porosity, into a 1D ndarray and do our statistics on porosity.
* then we can use NumPy's statistics methods
* note this is a **shallow copy**, any changes to the array will change the feature in the DataFrame
```python
feature = 'Porosity'; fcol = 'porosity'; funits = '(fraction)'; f2units = '(fraction^2)'; fmin = 0.0; fmax = 0.25
X = df[fcol].values
print('We are working with ' + feature + ' ' + funits + ' from column ' + fcol + ' .')
```
We are working with Porosity (fraction) from column porosity .
#### Visualize the Feature Histogram
To improve our understanding of the feature, let's visualize the featre distribuiton as a histogram.
```python
plt.hist(X,color='red',alpha=0.2,edgecolor='black',bins=np.linspace(fmin,fmax,20))
plt.xlabel(feature + ' ' + funits); plt.ylabel('Frequency');
```
#### Measures of Central Tendency
##### The Arithmetic Average / Mean
\begin{equation}
\overline{x} = \frac{1}{n}\sum^n_{i=1} x_i
\end{equation}
```python
average = np.average(X)
print(feature + ' average is ' + str(round(average,2)) + ' ' + funits + '.')
```
Porosity average is 0.15 (fraction).
#### Measures of Central Tendency
##### The Weighted Arithmetic Average / Mean
Many of the following methods accept data weights, e.g. declustering
\begin{equation}
\overline{x} = \frac{\sum^n_{i=1} \lambda_i x_i}{\sum^n_{i=1} \lambda_i}
\end{equation}
```python
weights = np.ones(X.shape)
wt_average = np.average(X,weights = weights)
print(feature + ' average is ' + str(round(wt_average,2)) + ' ' + funits + '.')
```
Porosity average is 0.15 (fraction).
#### Measures of Central Tendency
##### Median
\begin{equation}
P50_x = F^{-1}_{x}(0.50)
\end{equation}
```python
median = np.median(X)
print(feature + ' median is ' + str(round(median,2)) + ' ' + funits + '.')
```
Porosity median is 0.15 (fraction).
#### Measures of Central Tendency
##### Mode
The most common value. To do this we should bin the data, like into histogram bins/bars. To do this we will round the data to the 2nd decimal place. We are assume bin boundaries, $0.01, 0.02,\ldots, 0.30$.
```python
mode = stats.mode(np.round(X,2))
print(feature + ' mode is ' + str(round(mode,2)) + ' ' + funits + '.')
```
Porosity mode is 0.14 (fraction).
#### Measures of Central Tendency
##### Geometric Mean
\begin{equation}
\overline{x}_G = ( \prod^n_{i=1} x_i )^{\frac{1}{n}}
\end{equation}
```python
geometric = scipy.stats.mstats.gmean(X)
print(feature + ' geometric mean is ' + str(round(geometric,2)) + ' ' + funits + '.')
```
Porosity geometric mean is 0.15 (fraction).
#### Measures of Central Tendency
##### Harmonic Mean
\begin{equation}
\overline{x}_H = \frac{n}{\sum^n_{i=1} \frac{1}{x_i}}
\end{equation}
```python
hmean = scipy.stats.mstats.hmean(X)
print(feature + ' harmonic mean is ' + str(round(hmean,2)) + ' ' + funits + '.')
```
Porosity harmonic mean is 0.14 (fraction).
##### Power Law Average
\begin{equation}
\overline{x}_p = (\frac{1}{n}\sum^n_{i=1}{x_i^{p}})^\frac{1}{p}
\end{equation}
```python
power = -0.5
power_avg = np.average(np.power(X,power))**(1/power)
print(feature + ' power law average for p = ' + str(power) + ' is ' + str(round(power_avg,2)) + ' ' + funits + '.')
```
Porosity power law average for p = -0.5 is 0.14 (fraction).
#### Let's Visualize Some of the Measures of Central Tendency
To visualize ocmpare these statistics, parameters
```python
plt.hist(X,color='red',alpha=0.2,edgecolor='black',bins=np.linspace(fmin,fmax,20))
plt.xlabel(feature + ' ' + funits); plt.ylabel('Frequency')
plt.axvline(x=average, ymin=0, ymax=1,color='black',label='Average')
plt.axvline(x=median, ymin=0, ymax=1,color='black',label='Median',linestyle='--')
plt.axvline(x=mode, ymin=0, ymax=1,color='black',label='Mode',linestyle='dashdot')
plt.axvline(x=power_avg, ymin=0, ymax=1,color='black',label='Power',linestyle='dotted');
plt.legend(loc='upper left');
```
#### Measures of Dispersion
##### Population Variance
\begin{equation}
\sigma^2_{x} = \frac{1}{n}\sum^n_{i=1}(x_i - \mu)
\end{equation}
```python
varp = stats.pvariance(X)
print(feature + ' population variance is ' + str(round(varp,4)) + ' ' + f2units + '.')
```
Porosity population variance is 0.0011 (fraction^2).
##### Sample Variance
\begin{equation}
\sigma^2_{x} = \frac{1}{n-1}\sum^n_{i=1}(x_i - \overline{x})^2
\end{equation}
```python
var = stats.variance(X)
print(feature + ' sample variance is ' + str(round(var,4)) + ' ' + f2units + '.')
```
Porosity sample variance is 0.0011 (fraction^2).
##### Population Standard Deviation
\begin{equation}
\sigma_{x} = \sqrt{ \frac{1}{n}\sum^n_{i=1}(x_i - \mu)^2 }
\end{equation}
```python
stdp = stats.pstdev(X)
print(feature + ' sample variance is ' + str(round(stdp,4)) + ' ' + f2units + '.')
```
Porosity sample variance is 0.0329 (fraction^2).
##### Sample Standard Deviation
\begin{equation}
\sigma_{x} = \sqrt{ \frac{1}{n-1}\sum^n_{i=1}(x_i - \mu)^2 }
\end{equation}
```python
std = stats.stdev(X)
print(feature + ' sample variance is ' + str(round(std,4)) + ' ' + f2units + '.')
```
Porosity sample variance is 0.0329 (fraction^2).
##### Range
\begin{equation}
range_x = P100_x - P00_x
\end{equation}
```python
range = np.max(X) - np.min(X)
print(feature + ' range is ' + str(round(range,2)) + ' ' + funits + '.')
```
Porosity range is 0.17 (fraction).
##### Percentile
\begin{equation}
P(p)_x = F^{-1}_{x}(p)
\end{equation}
```python
p_value = 13
percentile = np.percentile(X,p_value)
print(feature + ' ' + str(int(p_value)) + 'th percentile is ' + str(round(percentile,2)) + ' ' + funits + '.')
```
Porosity 13th percentile is 0.11 (fraction).
##### Inter Quartile Range
\begin{equation}
IQR = P(0.75)_x - P(0.25)_x
\end{equation}
```python
iqr = scipy.stats.iqr(X)
print(feature + ' interquartile range is ' + str(round(iqr,2)) + ' ' + funits + '.')
```
Porosity interquartile range is 0.04 (fraction).
#### Tukey Test for Outliers
Let's demonstrate the Tukey test for outliers based on the lower and upper fences.
\begin{equation}
fence_{lower} = P_x(0.25) - 1.5 \times [P_x(0.75) - P_x(0.25)]
\end{equation}
\begin{equation}
fence_{upper} = P_x(0.75) + 1.5 \times [P_x(0.75) - P_x(0.25)]
\end{equation}
Then we declare samples values above the upper fence or below the lower fence as outliers.
```python
p25, p75 = np.percentile(X, [25, 75])
lower_fence = p25 - iqr * 1.5
upper_fence = p75 + iqr * 1.5
outliers = X[np.where((X > upper_fence) | (X < lower_fence))[0]]
print(feature + ' outliers by Tukey test include ' + str(outliers) + '.')
outliers_indices = np.where((X > upper_fence) | (X < lower_fence))[0]
print(feature + ' outlier indices by Tukey test are ' + str(outliers_indices) + '.')
```
Porosity outliers by Tukey test include [0.06726 0.05 0.06092].
Porosity outlier indices by Tukey test are [110 152 198].
#### Let's Visualize Outliers with a Box Plot (Box and Wisker Plot)
The median is the orange line, P25 and P75 are the box and lower and upper fences are the wiskers.
```python
plt.boxplot(X); plt.ylabel(feature + ' ' + funits)
plt.xticks([1], [feature + ' Boxplot and Outliers']); plt.show()
```
#### Measures of Shape
##### Pearson's Mode Skewness
\begin{equation}
skew = \frac{3 (\overline{x} - P50_x)}{\sigma_x}
\end{equation}
```python
skew = (average - median)/std
print(feature + ' skew is ' + str(round(skew,2)) + '.')
```
Porosity skew is -0.03.
##### Population Skew, 3rd Central Moment
\begin{equation}
\gamma_{x} = \frac{1}{n}\sum^n_{i=1}(x_i - \mu)^3
\end{equation}
```python
cm = scipy.stats.moment(X,moment=3)
print(feature + ' 3rd cenral moment is ' + str(round(cm,7)) + '.')
```
Porosity 3rd cenral moment is -1.22e-05.
##### Quartile Skew Coefficient
\begin{equation}
QS = \frac{(P75_x - P50_x) - (P50_x - P25_x)}{(P75_x - P25_x)}
\end{equation}
```python
qs = ((np.percentile(X,75)-np.percentile(X,50))
-(np.percentile(X,50)-np.percentile(X,25))) /((np.percentile(X,75))-np.percentile(X,25))
print(feature + ' quartile skew coefficient is ' + str(round(qs,2)) + '.')
```
Porosity quartile skew coefficient is 0.14.
#### Plot the Nonparametric CDF
Let's demonstrate plotting a nonparametric cumulative distribution function (CDF) in Python
```python
# sort the data:
sort = np.sort(X)
# calculate the cumulative probabilities assuming known tails
p = np.arange(len(X)) / (len(X) - 1)
# plot the cumulative probabilities vs. the sorted porosity values
plt.subplot(122)
plt.scatter(sort, p, c = 'red', edgecolors = 'black', s = 10, alpha = 0.7)
plt.xlabel(feature + ' ' + funits); plt.ylabel('Cumulative Probability'); plt.grid();
plt.title('Nonparametric CDF')
plt.ylim([0,1]); plt.xlim([0,0.25])
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.3)
```
#### Fit a Gaussian Distribution
Let's fit a Gaussian distribution
* we get fancy with Maximuum Likelihood Estimation (MLE) for the Gaussian parametric distribution fit mean and standard deviation
```python
values = np.linspace(fmin,fmax,100)
fit_mean, fit_stdev = norm.fit(X,loc = average, scale = std) # fit MLE of the distribution
cumul_p = norm.cdf(values, loc = fit_mean, scale = fit_stdev)
# plot the cumulative probabilities vs. the sorted porosity values
plt.subplot(122)
plt.scatter(sort, p, c = 'red', edgecolors = 'black', s = 10, alpha = 0.7,label='data')
plt.plot(values,cumul_p, c = 'black',label='fit'); plt.legend(loc='upper left')
plt.xlabel(feature + ' ' + funits); plt.ylabel('Cumulative Probability'); plt.grid();
plt.title('Nonparametric and Fit Gaussian CDFs')
plt.ylim([0,1]); plt.xlim([0,0.25])
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.3)
```
#### Comments
This was a basic demonstration of univariate statistics in Python.
I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at [Python Demos](https://github.com/GeostatsGuy/PythonNumericalDemos) and a Python package for data analytics and geostatistics at [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy).
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
```python
```
| b5d7f6a8b369fe3693af97ff7fd2a54fa6bf07ca | 117,620 | ipynb | Jupyter Notebook | PythonDataBasics_Statistics.ipynb | caf3676/PythonNumericalDemos | 206a3d876f79e137af88b85ba98aff171e8d8e06 | [
"MIT"
] | 403 | 2017-10-15T02:07:38.000Z | 2022-03-30T15:27:14.000Z | PythonDataBasics_Statistics.ipynb | caf3676/PythonNumericalDemos | 206a3d876f79e137af88b85ba98aff171e8d8e06 | [
"MIT"
] | 4 | 2019-08-21T10:35:09.000Z | 2021-02-04T04:57:13.000Z | PythonDataBasics_Statistics.ipynb | caf3676/PythonNumericalDemos | 206a3d876f79e137af88b85ba98aff171e8d8e06 | [
"MIT"
] | 276 | 2018-06-27T11:20:30.000Z | 2022-03-25T16:04:24.000Z | 97.126342 | 28,384 | 0.835122 | true | 5,816 | Qwen/Qwen-72B | 1. YES
2. YES | 0.894789 | 0.867036 | 0.775814 | __label__eng_Latn | 0.848654 | 0.640809 |
# Teoría del Error
<p><code>Python en Jupyter Notebook</code></p>
<p>Creado por <code>Giancarlo Ortiz</code> para el curso de <code>Métodos Numéricos</code></p>
<style type="text/css">
.formula {
background: #f7f7f7;
border-radius: 50px;
padding: 15px;
}
.border {
display: inline-block;
border: solid 1px rgba(204, 204, 204, 0.4);
border-bottom-color: rgba(187, 187, 187, 0.4);
border-radius: 3px;
box-shadow: inset 0 -1px 0 rgba(187, 187, 187, 0.4);
background-color: inherit !important;
vertical-align: middle;
color: inherit !important;
font-size: 11px;
padding: 3px 5px;
margin: 0 2px;
}
</style>
## Error
El error es inherente a los métodos numéricos, por tanto es fundamental hacer seguimiento de la propagación de los errores cometidos a fin de poder estimar el grado de aproximación de la solución que se obtiene.
## Agenda
1. Error real
1. Error absoluto
1. Error relativo
1. Incertidumbre
```python
# Importar módulos al cuaderno de jupyter
import math as m
import numpy as np
import pylab as plt
# Definir e incluir nuevas funciones al cuaderno
def _significativas(valor, cifras):
''' Reducir un valor a un numero de cifras significativas '''
Primera_significativa = -int(m.floor(m.log10(abs(valor))))
decimales = Primera_significativa + cifras - 1
return round(valor, decimales)
def _normalizar(valor, referencia):
''' Aproximar un numero a las cifras significativas de la referencia'''
Primera_significativa = -int(m.floor(m.log10(abs(referencia))))
cifras = Primera_significativa + 1
return _significativas(valor, cifras)
```
## 1. Error real
---
El error real por definición es la diferencia entre el valor que se considera real y el valor aproximado, puede ser positivo si el error es por defecto, o negativo si el error es por exceso.
\begin{equation*}
E = V_r - V_a \\
\end{equation*}
### <code>Ejemplo:</code> Constante de la gravedad
---
Cálculo de la constante de la gravedad como un valor aparente $\color{#a78a4d}{g_a}$ a partir de un conjunto de medidas del periodo de oscilación en un sistema de péndulo simple con longitud conocida.
<p align="center">
</p>
```python
# Modelo de gravitación Básico
# Medidas directas de longitud y ángulo en el experimento
Longitud = 1; θ = 45; deltaT = 0.30
# Medidas directas del periodo cada 30 segundos
Tr = [2.106, 2.101, 2.098, 2.087, 2.073, 2.070, 2.064, 2.059, 2.057, 2.052]
# Valores reales
Lat = 1.209673 # Latitud de (Pasto - Nariño)
Alt = 2_539 # Altitud de (Pasto - Nariño)
R = 6_371_000 # Radio medio de la tierra
Afc = 1 + 0.0053024 * m.sin(Lat)**2 - 0.0000058 * m.sin(2*Lat)**2 # Aporte de la fuerza centrifuga
Afg = 1 - (2/R)*Alt + (3/R**2)*Alt**2 # Aporte de la distancia al centro
g = 9.780327 * Afc * Afg
# Péndulo Simple
Ti = np.array(Tr) # Periodo (Medida directa)
θ_rad = θ * (m.pi/180) # Conversión del ángulo a radianes (~Medida directa)
To = Ti / (1 + (θ_rad/4)**2) # Corrección para ángulos grandes (~Medida directa)
K = 4 * Longitud * m.pi**2 # Constante de proporcionalidad (~Medida directa)
# Medida indirecta o aparente de la gravedad
ga = K / To**2
# Error Real
Et_ga = g - ga
# Media aritmética (Promedio - P)
P_Ti = sum(Ti) / len(Ti)
P_To = sum(To) / len(To)
P_ga = sum(ga) / len(ga)
P_Et = sum(Et_ga) / len(Et_ga)
# Desviación típica (Desviación - D)
D_Ti = ((1/len(Ti)) * sum((P_Ti - Ti)**2))**(1/2)
D_To = ((1/len(To)) * sum((P_To - To)**2))**(1/2)
D_ga = ((1/len(ga)) * sum((P_ga - ga)**2))**(1/2)
D_Et = ((1/len(Et_ga)) * sum((P_Et - Et_ga)**2))**(1/2)
# Salida estándar
print(f"--------------------------------------------------------")
print(f"Valor del modelo algebraico: {g:8.4f}")
print(f"--------------------------------------------------------")
print(f"| No | T [s] | To [s] | g [m/s²] | ERROR REAL |")
print(f"--------------------------------------------------------")
print(f"| No | V Medido | V aparente | V aparente | Individual |")
print(f"--------------------------------------------------------")
for N in range(len(Tr)):
print(f"| {N+1:2} | {Ti[N]:8.3f} | {To[N]:10.3f} | {ga[N]:10.4f} | {Et_ga[N]:10.4f} |")
print(f"--------------------------------------------------------")
print(f"| P | {P_Ti:8.3f} | {P_To:10.3f} | {P_ga:10.4f} | {P_Et:10.4f} |")
print(f"--------------------------------------------------------")
print(f"| D | {D_Ti:8.3f} | {D_To:10.3f} | {D_ga:10.4f} | {D_Et:10.4f} |")
print(f"--------------------------------------------------------")
```
--------------------------------------------------------
Valor del modelo algebraico: 9.8179
--------------------------------------------------------
| No | T [s] | To [s] | g [m/s²] | ERROR REAL |
--------------------------------------------------------
| No | V Medido | V aparente | V aparente | Individual |
--------------------------------------------------------
| 1 | 2.106 | 2.028 | 9.6006 | 0.2172 |
| 2 | 2.101 | 2.023 | 9.6464 | 0.1715 |
| 3 | 2.098 | 2.020 | 9.6740 | 0.1439 |
| 4 | 2.087 | 2.010 | 9.7762 | 0.0416 |
| 5 | 2.073 | 1.996 | 9.9087 | -0.0909 |
| 6 | 2.070 | 1.993 | 9.9375 | -0.1196 |
| 7 | 2.064 | 1.987 | 9.9953 | -0.1775 |
| 8 | 2.059 | 1.983 | 10.0439 | -0.2261 |
| 9 | 2.057 | 1.981 | 10.0635 | -0.2456 |
| 10 | 2.052 | 1.976 | 10.1126 | -0.2947 |
--------------------------------------------------------
| P | 2.077 | 2.000 | 9.8759 | -0.0580 |
--------------------------------------------------------
| D | 0.019 | 0.018 | 0.1782 | 0.1782 |
--------------------------------------------------------
## 1. Error absoluto
---
El error absoluto por definición la diferencia entre el valor que se considera real y el valor aproximado, siempre es positivo; cuando se realizan $\color{#a78a4d}{n}$ medidas de un mismo fenómeno se puede calcular como el promedio de los errores absolutos individuales o la raíz del error cuadrático medio.
\begin{align}
e_{a} & = \left| E \right| = \left| V_r-V_a \right| \\
EAM & = \frac{1}{n} \sum_{1}^n e_{a} = \frac{1}{n} \sum_{1}^n \left| V_r-V_a \right| \\
RECM & = \sqrt{ \frac{1}{n} \sum_{1}^n \left( V_r-V_a \right)^2 } \\
\end{align}
```python
# Error Absoluto
ea_ga = abs(Et_ga)
# Media aritmética (Promedio - P)
P_ea = sum(ea_ga) / len(ea_ga)
# Desviación típica (Desviación - D)
D_ea = ((1/len(ea_ga)) * sum((P_ea - ea_ga)**2))**(1/2)
# Otros Promedios
EAM = sum(ea_ga) / len(ea_ga)
RECM = ((1/len(Et_ga)) * sum((Et_ga)**2))**(1/2)
# Salida estándar
print(f"-------------------------------------------------------------------------")
print(f"| No | T [s] | To [s] | g[m/s²] | ERROR REAL | ERROR ABSOLUTO |")
print(f"-------------------------------------------------------------------------")
print(f"| No | V Medido | V aparente | V aparente | Individual | Individual |")
print(f"-------------------------------------------------------------------------")
for N in range(len(Tr)):
print(f"| {N+1:2} | {Ti[N]:8.3f} | {To[N]:10.3f} | {ga[N]:10.4f} | {Et_ga[N]:10.4f} | {ea_ga[N]:14.4f} |")
print(f"-------------------------------------------------------------------------")
print(f"| P | {P_Ti:8.3f} | {P_To:10.3f} | {P_ga:10.4f} | {P_Et:10.4f} | {P_ea:14.4f} |")
print(f"-------------------------------------------------------------------------")
print(f"| D | {D_Ti:8.3f} | {D_To:10.3f} | {D_ga:10.4f} | {D_Et:10.4f} | {D_ea:14.4f} |")
print(f"-------------------------------------------------------------------------")
print(f"Error Absoluto Medio - EAM: {EAM:8.4f}")
print(f"Raíz Error Cuadrático Medio - RECM: {RECM:8.4f}")
print(f"-------------------------------------------------------------------------")
```
-------------------------------------------------------------------------
| No | T [s] | To [s] | g[m/s²] | ERROR REAL | ERROR ABSOLUTO |
-------------------------------------------------------------------------
| No | V Medido | V aparente | V aparente | Individual | Individual |
-------------------------------------------------------------------------
| 1 | 2.106 | 2.028 | 9.6006 | 0.2172 | 0.2172 |
| 2 | 2.101 | 2.023 | 9.6464 | 0.1715 | 0.1715 |
| 3 | 2.098 | 2.020 | 9.6740 | 0.1439 | 0.1439 |
| 4 | 2.087 | 2.010 | 9.7762 | 0.0416 | 0.0416 |
| 5 | 2.073 | 1.996 | 9.9087 | -0.0909 | 0.0909 |
| 6 | 2.070 | 1.993 | 9.9375 | -0.1196 | 0.1196 |
| 7 | 2.064 | 1.987 | 9.9953 | -0.1775 | 0.1775 |
| 8 | 2.059 | 1.983 | 10.0439 | -0.2261 | 0.2261 |
| 9 | 2.057 | 1.981 | 10.0635 | -0.2456 | 0.2456 |
| 10 | 2.052 | 1.976 | 10.1126 | -0.2947 | 0.2947 |
-------------------------------------------------------------------------
| P | 2.077 | 2.000 | 9.8759 | -0.0580 | 0.1729 |
-------------------------------------------------------------------------
| D | 0.019 | 0.018 | 0.1782 | 0.1782 | 0.0725 |
-------------------------------------------------------------------------
Error Absoluto Medio - EAM: 0.1729
Raíz Error Cuadrático Medio - RECM: 0.1875
-------------------------------------------------------------------------
## 3. Error relativo
---
Es el proceso de operar cifras aproximadas o truncadas.
\begin{align}
e_{r} & = \frac{E}{V_r} = \frac{ \left| V_r-V_a \right| }{V_r} \\
\overline{e_{r}} & = \frac{EAM}{V_r} = \frac{1}{n} \sum_{1}^n \frac{ \left| V_r-V_a \right| }{V_r} \\
\end{align}
```python
# Error Absoluto
er_ga = 100 * ea_ga / g
# Media aritmética (Promedio - P)
P_er = sum(er_ga) / len(er_ga)
# Desviación típica (Desviación - D)
D_er = ((1/len(er_ga)) * sum((P_er - er_ga)**2))**(1/2)
# Otros Promedios
EPM = sum(er_ga) / len(er_ga)
# Salida estándar
print(f"------------------------------------------------------------------------------------------")
print(f"| No | T [s] | To [s] | g[m/s²] | ERROR REAL | ERROR ABSOLUTO | ERROR RELATIVO |")
print(f"------------------------------------------------------------------------------------------")
print(f"| No | V Medido | V aparente | V aparente | Individual | Individual | Individual |")
print(f"------------------------------------------------------------------------------------------")
for N in range(len(Tr)):
print(f"| {N+1:2} | {Ti[N]:8.3f} | {To[N]:10.3f} | {ga[N]:10.4f} | {Et_ga[N]:10.4f} | {ea_ga[N]:14.4f} | {er_ga[N]:12.2f} % |")
print(f"------------------------------------------------------------------------------------------")
print(f"| P | {P_Ti:8.3f} | {P_To:10.3f} | {P_ga:10.4f} | {P_Et:10.4f} | {P_ea:14.4f} | {P_er:12.2f} % |")
print(f"------------------------------------------------------------------------------------------")
print(f"| D | {D_Ti:8.3f} | {D_To:10.3f} | {D_ga:10.4f} | {D_Et:10.4f} | {D_ea:14.4f} | {D_er:12.2f} % |")
print(f"------------------------------------------------------------------------------------------")
print(f"Error Absoluto Medio - EAM: {EAM:8.4f}")
print(f"Raíz Porcentual Medio - EPM: {EPM:6.2f} %")
print(f"------------------------------------------------------------------------------------------")
```
------------------------------------------------------------------------------------------
| No | T [s] | To [s] | g[m/s²] | ERROR REAL | ERROR ABSOLUTO | ERROR RELATIVO |
------------------------------------------------------------------------------------------
| No | V Medido | V aparente | V aparente | Individual | Individual | Individual |
------------------------------------------------------------------------------------------
| 1 | 2.106 | 2.028 | 9.6006 | 0.2172 | 0.2172 | 2.21 % |
| 2 | 2.101 | 2.023 | 9.6464 | 0.1715 | 0.1715 | 1.75 % |
| 3 | 2.098 | 2.020 | 9.6740 | 0.1439 | 0.1439 | 1.47 % |
| 4 | 2.087 | 2.010 | 9.7762 | 0.0416 | 0.0416 | 0.42 % |
| 5 | 2.073 | 1.996 | 9.9087 | -0.0909 | 0.0909 | 0.93 % |
| 6 | 2.070 | 1.993 | 9.9375 | -0.1196 | 0.1196 | 1.22 % |
| 7 | 2.064 | 1.987 | 9.9953 | -0.1775 | 0.1775 | 1.81 % |
| 8 | 2.059 | 1.983 | 10.0439 | -0.2261 | 0.2261 | 2.30 % |
| 9 | 2.057 | 1.981 | 10.0635 | -0.2456 | 0.2456 | 2.50 % |
| 10 | 2.052 | 1.976 | 10.1126 | -0.2947 | 0.2947 | 3.00 % |
------------------------------------------------------------------------------------------
| P | 2.077 | 2.000 | 9.8759 | -0.0580 | 0.1729 | 1.76 % |
------------------------------------------------------------------------------------------
| D | 0.019 | 0.018 | 0.1782 | 0.1782 | 0.0725 | 0.74 % |
------------------------------------------------------------------------------------------
Error Absoluto Medio - EAM: 0.1729
Raíz Porcentual Medio - EPM: 1.76 %
------------------------------------------------------------------------------------------
# Incertidumbre
---
**Incertidumbre:** Es una medida del ajuste o cálculo que debe hacerse de una cantidad con respecto al valor que se supone real de dicha magnitud; esto como una medida que refleje las características ignoradas de la magnitud o los errores cometidos para calcularla. Finalmente, esto significa que un valor aparente se puede representar expresando una cantidad como la suma de un valor confiable sumado a un intervalo de incertidumbre.
\begin{equation*}
V_a = \overline{x} \pm \Delta x \\
\end{equation*}
El valor confiable en algunos casos se puede expresar como el _**valor promedio**_ de la cantidad a representar y la incertidumbre como la _**Desviación típica o estándar**_, que es una medida del ajuste o cálculo de una magnitud con respecto al valor que se supone real de dicha magnitud.
\begin{align}
V_a & = \mu \pm \sigma \\
\mu & = \frac{1}{n} \sum_{1}^n x_i \\
\sigma & = \sqrt{ \frac{1}{n} \sum_{1}^n \left( \mu - x_i \right)^2 } \\
\end{align}
```python
# Valor aparente del periodo
μT = sum(To) / len(To) # Valor promedio
DT = (sum((μT - To)**2) / len(To))**(1/2) # Valor promedio
# Valor aparente de la gravedad
μg = P_ga # Valor promedio
Dg = (sum((μg - ga)**2) / len(ga))**(1/2) # Incertidumbre
# Una convención aceptada es expresar la incertidumbre en una sola cifra significativa
ΔT = _significativas(DT, 1)
Δg = _significativas(Dg, 1)
# Valores medios normalizados
μT_n = _normalizar(μT, ΔT)
μg_n = _normalizar(μg, Δg)
# Salida estándar
print(f"-----------------------------------------------")
print(f"Valor promedio de T: {μT:8.4f}")
print(f"Incertidumbre del grupo T: {DT:8.4f}")
print(f"Valor aparente del grupo: {μT_n:8.4f} ± {ΔT}")
print(f"-----------------------------------------------")
print(f"Valor promedio de g: {μg:8.4f}")
print(f"Incertidumbre del grupo g: {Dg:8.4f}")
print(f"Valor aparente del grupo: {μg_n:8.4f} ± {Δg}")
print(f"-----------------------------------------------")
print(f"Valor del modelo algebraico: {g:8.4f}")
print(f"-----------------------------------------------")
```
-----------------------------------------------
Valor promedio de T: 1.9996
Incertidumbre del grupo T: 0.0181
Valor aparente del grupo: 2.0000 ± 0.02
-----------------------------------------------
Valor promedio de g: 9.8759
Incertidumbre del grupo g: 0.1782
Valor aparente del grupo: 9.9000 ± 0.2
-----------------------------------------------
Valor del modelo algebraico: 9.8179
-----------------------------------------------
---
## Mas Recursos
- [Métodos de redondeo](https://en.wikipedia.org/wiki/Rounding) (Wikipedia)
- [Error experimental](https://es.wikipedia.org/wiki/Error_experimental) (Wikipedia)
- [Error de aproximación](https://es.wikipedia.org/wiki/Error_de_aproximaci%C3%B3n) (Wikipedia)
- [Error absoluto medio](https://es.wikipedia.org/wiki/Error_absoluto_medio) (Wikipedia)
| bc069757e7b6c8e04c5e8f232974178bf77ed324 | 21,322 | ipynb | Jupyter Notebook | Jupyter/13_Error.ipynb | GiancarloBenavides/Metodos-Numericos | c35eb538d33b8dd58eacccf9e8b9b59c605d7dba | [
"MIT"
] | 1 | 2020-10-29T19:13:39.000Z | 2020-10-29T19:13:39.000Z | Jupyter/13_Error.ipynb | GiancarloBenavides/Metodos-Numericos | c35eb538d33b8dd58eacccf9e8b9b59c605d7dba | [
"MIT"
] | null | null | null | Jupyter/13_Error.ipynb | GiancarloBenavides/Metodos-Numericos | c35eb538d33b8dd58eacccf9e8b9b59c605d7dba | [
"MIT"
] | 1 | 2020-11-12T20:22:40.000Z | 2020-11-12T20:22:40.000Z | 54.392857 | 2,040 | 0.408498 | true | 5,511 | Qwen/Qwen-72B | 1. YES
2. YES | 0.651355 | 0.845942 | 0.551009 | __label__spa_Latn | 0.31264 | 0.118507 |
# Introduction to Quantum Physics
### A complex Number
$c = a + ib$
Acircle of radius 1: $e^{-i\theta}$
### Single Qubit System ($\mathcal{C}^{2}$ -space)
$|\psi \rangle = \alpha |0 \rangle + \beta | 1 \rangle $
$ \langle \psi | \psi \rangle = 1 \implies \alpha^{2} + \beta^{2} = 1 $
- Operators are 2 by 2 matrices, vectors are 2 by 1 column vectors.
#### General form of single qubit Unitary Operation
A single qubit quantum state can be written as
$$\left|\psi\right\rangle = \alpha\left|0\right\rangle + \beta \left|1\right\rangle$$
where $\alpha$ and $\beta$ are complex numbers. In a measurement the probability of the bit being in $\left|0\right\rangle$ is $|\alpha|^2$ and $\left|1\right\rangle$ is $|\beta|^2$. As a vector this is
$$
\left|\psi\right\rangle =
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix}
$$
Where
$$\left| 0 \right\rangle =
\begin{pmatrix}
1 \\
0
\end{pmatrix}; \left|1\right\rangle =
\begin{pmatrix}
0 \\
1
\end{pmatrix}.
$$
Note due to conservation probability $|\alpha|^2+ |\beta|^2 = 1$ and since global phase is undetectable $\left|\psi\right\rangle := e^{i\delta} \left|\psi\right\rangle$ we only requires two real numbers to describe a single qubit quantum state.
A convenient representation is
$$\left|\psi\right\rangle = \cos(\theta/2)\left|0\right\rangle + \sin(\theta/2)e^{i\phi}\left|1\right\rangle$$
where $0\leq \phi < 2\pi$, and $0\leq \theta \leq \pi$. From this it is clear that there is a one-to-one correspondence between qubit states ($\mathbb{C}^2$) and the points on the surface of a unit sphere ($\mathbb{R}^3$). This is called the Bloch sphere representation of a qubit state.
Quantum gates/operations are usually represented as matrices. A gate which acts on a qubit is represented by a $2\times 2$ unitary matrix $U$. The action of the quantum gate is found by multiplying the matrix representing the gate with the vector which represents the quantum state.
$$\left|\psi'\right\rangle = U\left|\psi\right\rangle$$
A general unitary must be able to take the $\left|0\right\rangle$ to the above state. That is
$$
U = \begin{pmatrix}
\cos(\theta/2) & a \\
e^{i\phi}\sin(\theta/2) & b
\end{pmatrix}
$$
where $a$ and $b$ are complex numbers constrained such that $U^\dagger U = I$ for all $0\leq\theta\leq\pi$ and $0\leq \phi<2\pi$. This gives 3 constraints and as such $a\rightarrow -e^{i\lambda}\sin(\theta/2)$ and $b\rightarrow e^{i\lambda+i\phi}\cos(\theta/2)$ where $0\leq \lambda<2\pi$ giving
$$
U = \begin{pmatrix}
\cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\
e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2)
\end{pmatrix}.
$$
This is the most general form of a single qubit unitary.
### Quantum Gates:
Quanum gates are Unitary Transformation. There exist a universal set of quantum gates. Hadamard gate, X,Y,Z, paili gates, Cnot gate are few examples.
### Multiqubit System ($\mathcal{C}^{4}$ -space)
$|\psi \rangle = \alpha |00 \rangle + \beta | 01 \rangle + \gamma |10 \rangle + \delta | 11 \rangle $
$ \langle \psi | \psi \rangle = 1 \implies \alpha^{2} + \beta^{2} + \gamma^{2} + \delta^{2} = 1 $
- Operators are 4 by 4 matrices, vectors are 4 by 1 column vectors.
#### Realization of multi-qubit through single qubit
The space of quantum computer grows exponential with the number of qubits. For $n$ qubits the complex vector space has dimensions $d=2^n$. To describe states of a multi-qubit system, the tensor product is used to "glue together" operators and basis vectors.
Let's start by considering a 2-qubit system. Given two operators $A$ and $B$ that each act on one qubit, the joint operator $A \otimes B$ acting on two qubits is
$$\begin{equation}
A\otimes B =
\begin{pmatrix}
A_{00} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} & A_{01} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} \\
A_{10} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix} & A_{11} \begin{pmatrix}
B_{00} & B_{01} \\
B_{10} & B_{11}
\end{pmatrix}
\end{pmatrix},
\end{equation}$$
where $A_{jk}$ and $B_{lm}$ are the matrix elements of $A$ and $B$, respectively.
Analogously, the basis vectors for the 2-qubit system are formed using the tensor product of basis vectors for a single qubit:
$$\begin{equation}\begin{split}
\left|{00}\right\rangle &= \begin{pmatrix}
1 \begin{pmatrix}
1 \\
0
\end{pmatrix} \\
0 \begin{pmatrix}
1 \\
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\0 \end{pmatrix}~~~\left|{01}\right\rangle = \begin{pmatrix}
1 \begin{pmatrix}
0 \\
1
\end{pmatrix} \\
0 \begin{pmatrix}
0 \\
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix}0 \\ 1 \\ 0 \\ 0 \end{pmatrix}\end{split}
\end{equation}$$
$$\begin{equation}\begin{split}\left|{10}\right\rangle = \begin{pmatrix}
0\begin{pmatrix}
1 \\
0
\end{pmatrix} \\
1\begin{pmatrix}
1 \\
0
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}~~~ \left|{11}\right\rangle = \begin{pmatrix}
0 \begin{pmatrix}
0 \\
1
\end{pmatrix} \\
1\begin{pmatrix}
0 \\
1
\end{pmatrix}
\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\1 \end{pmatrix}\end{split}
\end{equation}.$$
Note we've introduced a shorthand for the tensor product of basis vectors, wherein $\left|0\right\rangle \otimes \left|0\right\rangle$ is written as $\left|00\right\rangle$. The state of an $n$-qubit system can described using the $n$-fold tensor product of single-qubit basis vectors. Notice that the basis vectors for a 2-qubit system are 4-dimensional; in general, the basis vectors of an $n$-qubit sytsem are $2^{n}$-dimensional.
### Superposition and Entanglement
Superposition in 2 qubit system : $|\psi \rangle = \alpha |00 \rangle + \beta | 01 \rangle + \gamma |10 \rangle + \delta | 11 \rangle $
- It can be written in direct product of lower qubit states.
Entanglement in two qubit system:
An entanglement circuit:
- It can not be written in direct product of lower qubit states
$\begin{bmatrix}
p \\
q
\end{bmatrix} \otimes \begin{bmatrix}
r \\
s
\end{bmatrix} = c \begin{bmatrix}
m \\
0 \\
0 \\
n
\end{bmatrix}$
### Quantum Circuits and Quantum Algorithms
Execution of Quantum Algorithm (A unitary matrix), is challangeing task. This is because one need to find out product of single or multi-qubit gates to represent that algorithm.
$$
QFT: F_{N} = \frac{1}{\sqrt{N}} \left( \begin{array}{cccccc}
1 & 1 & 1 & 1 & \cdots & 1 \\
1 & \omega_{n} & \omega_{n}^{2} & \omega_{n}^{3} & \cdots & \omega_{n} ^{N-1}\\
1 & \omega_{n}^{2} & \omega_{n}^{4} & \omega_{n}^{6} & \cdots & \omega_{n} ^{2(N-1)}\\
1 & \omega_{n}^{3} & \omega_{n}^{6} & \omega_{n}^{9} & \cdots & \omega_{n} ^{3(N-1)}\\
\vdots & \vdots & \vdots & \vdots & \dots & \vdots \\
1 & \omega_{n}^{(N-1)} & \omega_{n}^{2(N-1)} & \omega_{n}^{3(N-1)} & \cdots & \omega_{n} ^{(N-1((N-1)}\\
\end{array}\right )
$$
Figure: Execution of QFT algorithm
### Measurement
- Only observable can be measured.
- What is being measured? E, P or X? Non of these are being measured. We measure probability through $<\psi | \psi>$.
- After measurement, system collapse to one of the eigen state. Due to time evolution, it will superposed to many states in future.
### Noise and Error
- Current time Quantum Computers are **Noisy Intermediate-Scale Quantum (NISQ)**
- Decoherence : Quantum decoherence is the loss of quantum coherence. Decoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath).
- Circuit depth: The depth of a circuit is the longest path in the circuit. The path length is always an integer number, representing the number of gates it has to execute in that path.
- Fidelity: fidelity is the measure of the distance between two quantum states. Fidelity equal to 1, means that two states are equal. In the case of a density matrix, fidelity represents the overlap with a reference pure state.
```python
```
| 70ae0de7fab9092ebcbb172f36dcdc1fd3ab01e3 | 13,210 | ipynb | Jupyter Notebook | day1/3. Quantum-Physics-of-Quantum-Computing.ipynb | srh-dhu/Quantum-Computing-2021 | 5d6f99776f10224df237a2fadded25f63f5032c3 | [
"MIT"
] | 12 | 2021-07-23T13:38:20.000Z | 2021-09-07T00:40:09.000Z | day1/3. Quantum-Physics-of-Quantum-Computing.ipynb | Pratha-Me/Quantum-Computing-2021 | bd9cf9a1165a47c61f9277126f4df04ae5562d61 | [
"MIT"
] | 3 | 2021-07-31T08:43:38.000Z | 2021-07-31T08:43:38.000Z | day1/3. Quantum-Physics-of-Quantum-Computing.ipynb | Pratha-Me/Quantum-Computing-2021 | bd9cf9a1165a47c61f9277126f4df04ae5562d61 | [
"MIT"
] | 7 | 2021-07-24T06:14:36.000Z | 2021-07-29T22:02:12.000Z | 35.320856 | 449 | 0.533687 | true | 2,752 | Qwen/Qwen-72B | 1. YES
2. YES | 0.974821 | 0.853913 | 0.832412 | __label__eng_Latn | 0.95965 | 0.772306 |
# Math behind LinearExplainer with correlation feature perturbation
When we use `LinearExplainer(model, prior, feature_perturbation="correlation_dependent")` we do not use $E[f(x) \mid do(X_S = x_S)]$ to measure the impact of a set $S$ of features, but rather use $E[f(x) \mid X_S = x_s]$ under the assumption that the random variable $X$ (representing the input features) follows a multivariate guassian distribution. To compute SHAP values this way we need to compute conditional expectations under the multivariate guassian distribution for all subset of features. This would be a lot of matrix match for an exponential number of terms, and it hence intractable for models with more than just a few features.
This document briefly outlines the math we have used to precompute all of the required linear algebra using a sampling procedure that can be done just once, and then applied to as many samples as we like. This drastically speed up the computation compared to a brute force approach. Note that all these calculations depend on the fact that we are explaining a linear model $f(x) = \beta x$.
The permutation definition of SHAP values in the interventional form used by most explainers is
$$
\phi_i = \frac{1}{M!} \sum_R E[f(X) \mid do(X_{S_i^R \cup i} = x_{S_i^R \cup i})] - E[f(X) \mid do(X_{S_i^R} = x_{S_i^R})]
$$
but here we will use the non-interventional conditional expectation form (where we have simplified the notation by dropping the explicit reference to the random variable $X$).
$$
\phi_i = \frac{1}{M!} \sum_R E[f(x) \mid x_{S_i^R \cup i}] - E[f(x) \mid x_{S_i^R}]
$$
where $f(x) = \beta x + b$ with $\beta$ a row vector and $b$ a scalar.
If we replace f(x) with the linear function definition we get:
\begin{align}
\phi_i = \frac{1}{M!} \sum_R E[\beta x + b \mid x_{S_i^R \cup i}] - E[\beta x + b \mid x_{S_i^R}] \\
= \beta \frac{1}{M!} \sum_R E[x \mid x_{S_i^R \cup i}] - E[x \mid x_{S_i^R}]
\end{align}
Assume the inputs $x$ follow a multivariate normal distribution with mean $\mu$ and covariance $\Sigma$. Denote the projection matrix that selects a set $S$ as $P_S$, then we get:
\begin{align}
E[x \mid x_S] = [P_{\bar S} \mu + P_{\bar S} \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} ( P_S x - P_S \mu)] P_{\bar S} + x P_S^T P_S \\
= [P_{\bar S} \mu + P_{\bar S} \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S (x - \mu)] P_{\bar S} + x P_S^T P_S \\
= [\mu + \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S (x - \mu)] P_{\bar S}^T P_{\bar S} + x P_S^T P_S \\
= P_{\bar S}^T P_{\bar S} [\mu + \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S (x - \mu)] + P_S^T P_S x \\
= P_{\bar S}^T P_{\bar S} \mu + P_{\bar S}^T P_{\bar S} \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S x - P_{\bar S}^T P_{\bar S} \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S \mu + P_S^T P_S x \\
= [P_{\bar S}^T P_{\bar S} - P_{\bar S}^T P_{\bar S} \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S] \mu + [P_S^T P_S + P_{\bar S}^T P_{\bar S} \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S] x
\end{align}
if we let $R_S = P_{\bar S}^T P_{\bar S} \Sigma P_S^T (P_S \Sigma P_S^T)^{-1} P_S$ and $Q_S = P_S^T P_S$ then we can write
\begin{align}
E[x \mid x_S] = [Q_{\bar S} - R_S] \mu + [Q_S + R_S] x
\end{align}
or
\begin{align}
E[x \mid x_{S_i^R \cup i}] = [Q_{\bar{S_i^R \cup i}} - R_{S_i^R \cup i}] \mu + [Q_{S_i^R \cup i} + R_{S_i^R \cup i}] x
\end{align}
leading to the Shapley equation of
\begin{align}
\phi_i = \beta \frac{1}{M!} \sum_R [Q_{\bar{S_i^R \cup i}} - R_{S_i^R \cup i}] \mu + [Q_{S_i^R \cup i} + R_{S_i^R \cup i}] x - [Q_{\bar{S_i^R}} - R_{S_i^R}] \mu - [Q_{S_i^R} + R_{S_i^R}] x \\
= \beta \frac{1}{M!} \sum_R ([Q_{\bar{S_i^R \cup i}} - R_{S_i^R \cup i}] - [Q_{\bar{S_i^R}} - R_{S_i^R}]) \mu + ([Q_{S_i^R \cup i} + R_{S_i^R \cup i}] - [Q_{S_i^R} + R_{S_i^R}]) x \\
= \beta \left [ \frac{1}{M!} \sum_R ([Q_{\bar{S_i^R \cup i}} - R_{S_i^R \cup i}] - [Q_{\bar{S_i^R}} - R_{S_i^R}]) \right ] \mu + \beta \left [ \frac{1}{M!} \sum_R ([Q_{S_i^R \cup i} + R_{S_i^R \cup i}] - [Q_{S_i^R} + R_{S_i^R}]) \right ] x
\end{align}
$$
\phi = \beta T x
$$
This means that we can precompute the transform matrix $T$ by drawing random permutations $R$ many times and averaging our results. Once we have computed $T$ we can explain any number of samples (or models for that matter) by just using matrix multiplication.
| b08d525728e387ab3ea393ea7bd2c82a80097ddb | 5,862 | ipynb | Jupyter Notebook | notebooks/linear_explainer/Math behind LinearExplainer with correlation feature perturbation.ipynb | santanaangel/shap | 1c1c4a45440f3475b8544251f9d9e5b43977cd0e | [
"MIT"
] | 16,097 | 2016-12-01T20:01:26.000Z | 2022-03-31T20:27:40.000Z | notebooks/linear_explainer/Math behind LinearExplainer with correlation feature perturbation.ipynb | santanaangel/shap | 1c1c4a45440f3475b8544251f9d9e5b43977cd0e | [
"MIT"
] | 2,217 | 2017-09-18T20:06:45.000Z | 2022-03-31T21:00:25.000Z | notebooks/linear_explainer/Math behind LinearExplainer with correlation feature perturbation.ipynb | santanaangel/shap | 1c1c4a45440f3475b8544251f9d9e5b43977cd0e | [
"MIT"
] | 2,634 | 2017-06-29T21:30:46.000Z | 2022-03-30T07:30:36.000Z | 48.446281 | 656 | 0.552712 | true | 1,662 | Qwen/Qwen-72B | 1. YES
2. YES | 0.909907 | 0.805632 | 0.73305 | __label__eng_Latn | 0.878589 | 0.541453 |
```python
!pip install pandas
import sympy as sym
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
sym.init_printing()
```
Requirement already satisfied: pandas in c:\users\usuario\.conda\envs\sys\lib\site-packages (1.1.2)
Requirement already satisfied: pytz>=2017.2 in c:\users\usuario\.conda\envs\sys\lib\site-packages (from pandas) (2020.1)
Requirement already satisfied: numpy>=1.15.4 in c:\users\usuario\.conda\envs\sys\lib\site-packages (from pandas) (1.19.1)
Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\usuario\.conda\envs\sys\lib\site-packages (from pandas) (2.8.1)
Requirement already satisfied: six>=1.5 in c:\users\usuario\.conda\envs\sys\lib\site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)
## Correlación
La correlación entre las señales $f(t)$ y $g(t)$ es una operación que indica cuán parecidas son las dos señales entre sí.
\begin{equation}
(f \; \circ \; g)(\tau) = h(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
Observe que la correlación y la convolución tienen estructura similares.
\begin{equation}
f(t) * g(t) = \int_{-\infty}^{\infty} f(\tau) \cdot g(t - \tau) \; d\tau
\end{equation}
## Señales periódicas
La señal $y(t)$ es periódica si cumple con la condición $y(t+nT)=y(t)$ para todo $n$ entero. En este caso, $T$ es el periodo de la señal.
La señal seno es la oscilación más pura que se puede expresar matemáticamente. Esta señal surge al considerar la proyección de un movimiento circular uniforme.
## Serie de Fourier
Si se combinan apropiadamente un conjunto de oscilaciones puras, como combinaciones lineales de señales desplazadas y escaladas en tiempo y amplitud, podría recrearse cualquiér señal periódica. Esta idea da lugar a las series de Fourier.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} C_n \cdot cos(n \omega_0 t - \phi_n)
\end{equation}
La señal $y(t)$ es igual a una combinación de infinitas señales coseno, cada una con una amplitud $C_n$, una frecuencia $n \omega_0$ y un desfase $\phi_n$.
También puede expresarse como:
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
La serie queda definida si se encuentran los valores apropiados de $A_n$ y $B_n$ para todos los valores de $n$.
Observe que:
- $A_n$ debe ser más grande si $y(t)$ se "parece" más a un cos.
- $B_n$ debe ser más grande si $y(t)$ se "parece" más a un sin.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
\begin{equation}
(f \; \circ \; g)(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
\begin{equation}
(y \; \circ \; sin_n)(\tau) = \int_{-\infty}^{\infty} y(t) \cdot sin(n \omega_0(t + \tau)) \; dt
\end{equation}
Considerando:
- $\tau=0$ para no incluir desfases.
- la señal $y(t)$ es periódica con periodo $T$.
\begin{equation}
(y \; \circ \; sin_n)(0) = \frac{1}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
Esta expresión puede interpretarse como el parecido de una señal $y(t)$ a la señal $sin$ con crecuencia $n \omega_0$ promediado a lo largo de un periodo sin desfase del seno.
Retomando la idea inicial
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
donde
\begin{equation}
A_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot cos(n \omega_0 t) \; dt
\end{equation}
\begin{equation}
B_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
Se recomienda al estudiante que encuentre la relación entre las Series anteriores y la siguiente alternativa para representar la Series de Fourier.
\begin{equation}
y(t) = \sum_{n=-\infty}^{\infty} C_n \cdot e^{j n \omega_0 t}
\end{equation}
donde
\begin{equation}
C_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot e^{j n \omega_0 t} \; dt
\end{equation}
Los valores $C_n$ son el espectro de la señal periódica $y(t)$ y son una representación en el dominio de la frecuencia.
**Ejemplo # 1**
La señal $y(t) = sin(2 \pi t)$ es en sí misma una oscilación pura de periodo $T=1$.
```python
# Se define y como el seno de t
t = sym.symbols('t', real=True)
#T = sym.symbols('T', real=True)
T = 1
nw = sym.symbols('n', real=True)
delta = sym.DiracDelta(nw)
w0 = 2 * sym.pi / T
y = 1*sym.sin(w0*t) + 0.5
# y = sym.sin(w0*t)
# y = (t-0.5)*(t-0.5)
y
```
Aunque la sumatoria de las series de Fourier incluye infinitos términos, solamente se tomaran las primeras 3 componentes.
```python
n_max = 5
y_ser = 0
C = 0
ns = range(-n_max,n_max+1)
espectro = pd.DataFrame(index = ns,
columns= ['C','C_np','C_real','C_imag','C_mag','C_ang'])
for n in espectro.index:
C_n = (1/T)*sym.integrate(y*sym.exp(-1j*n*w0*t), (t,0,T)).evalf()
C = C + C_n*delta.subs(nw,nw-n)
y_ser = y_ser + C_n*sym.exp(1j*n*w0*t)
espectro['C'][n]=C_n
C_r = float(sym.re(C_n))
C_i = float(sym.im(C_n))
espectro['C_real'][n] = C_r
espectro['C_imag'][n] = C_i
espectro['C_np'][n] = complex(C_r + 1j*C_i)
espectro['C_mag'][n] = np.absolute(espectro['C_np'][n])
espectro['C_ang'][n] = np.angle(espectro['C_np'][n])
espectro
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>C</th>
<th>C_np</th>
<th>C_real</th>
<th>C_imag</th>
<th>C_mag</th>
<th>C_ang</th>
</tr>
</thead>
<tbody>
<tr>
<th>-5</th>
<td>-1.44265290902902e-129 + 3.60663227257255e-130*I</td>
<td>(-1.4426529090290212e-129+3.606632272572553e-1...</td>
<td>-1.44265e-129</td>
<td>3.60663e-130</td>
<td>1.48705e-129</td>
<td>2.89661</td>
</tr>
<tr>
<th>-4</th>
<td>5.77061163611608e-129 + 1.15412232722322e-128*I</td>
<td>(5.770611636116085e-129+1.154122327223217e-128j)</td>
<td>5.77061e-129</td>
<td>1.15412e-128</td>
<td>1.29035e-128</td>
<td>1.10715</td>
</tr>
<tr>
<th>-3</th>
<td>-4.61648930889287e-128 - 4.61648930889287e-128*I</td>
<td>(-4.616489308892868e-128-4.616489308892868e-128j)</td>
<td>-4.61649e-128</td>
<td>-4.61649e-128</td>
<td>6.5287e-128</td>
<td>-2.35619</td>
</tr>
<tr>
<th>-2</th>
<td>9.23297861778574e-128 + 9.23297861778574e-128*I</td>
<td>(9.232978617785736e-128+9.232978617785736e-128j)</td>
<td>9.23298e-128</td>
<td>9.23298e-128</td>
<td>1.30574e-127</td>
<td>0.785398</td>
</tr>
<tr>
<th>-1</th>
<td>1.05879118406788e-22 + 0.5*I</td>
<td>(1.0587911840678754e-22+0.5j)</td>
<td>1.05879e-22</td>
<td>0.5</td>
<td>0.5</td>
<td>1.5708</td>
</tr>
<tr>
<th>0</th>
<td>0.500000000000000</td>
<td>(0.5+0j)</td>
<td>0.5</td>
<td>0</td>
<td>0.5</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>3.58856254064475e-26 - 0.5*I</td>
<td>(3.5885625406447527e-26-0.5j)</td>
<td>3.58856e-26</td>
<td>-0.5</td>
<td>0.5</td>
<td>-1.5708</td>
</tr>
<tr>
<th>2</th>
<td>9.23297861778574e-128 - 9.23297861778574e-128*I</td>
<td>(9.232978617785736e-128-9.232978617785736e-128j)</td>
<td>9.23298e-128</td>
<td>-9.23298e-128</td>
<td>1.30574e-127</td>
<td>-0.785398</td>
</tr>
<tr>
<th>3</th>
<td>-4.61648930889287e-128 + 4.61648930889287e-128*I</td>
<td>(-4.616489308892868e-128+4.616489308892868e-128j)</td>
<td>-4.61649e-128</td>
<td>4.61649e-128</td>
<td>6.5287e-128</td>
<td>2.35619</td>
</tr>
<tr>
<th>4</th>
<td>5.77061163611608e-129 - 1.15412232722322e-128*I</td>
<td>(5.770611636116085e-129-1.154122327223217e-128j)</td>
<td>5.77061e-129</td>
<td>-1.15412e-128</td>
<td>1.29035e-128</td>
<td>-1.10715</td>
</tr>
<tr>
<th>5</th>
<td>-1.44265290902902e-129 - 3.60663227257255e-130*I</td>
<td>(-1.4426529090290212e-129-3.606632272572553e-1...</td>
<td>-1.44265e-129</td>
<td>-3.60663e-130</td>
<td>1.48705e-129</td>
<td>-2.89661</td>
</tr>
</tbody>
</table>
</div>
La señal reconstruida con un **n_max** componentes
```python
y_ser
```
```python
plt.rcParams['figure.figsize'] = 7, 2
g1 = sym.plot(y, (t,0,1), ylabel=r'Amp',show=False,line_color='blue',legend=True, label = 'y(t) original')
g2 = sym.plot(sym.re(y_ser), (t,-1,2), ylabel=r'Amp',show=False,line_color='red',legend=True, label = 'y(t) reconstruida')
g1.extend(g2)
g1.show()
```
```python
C
```
```python
plt.rcParams['figure.figsize'] = 7, 2
plt.stem(espectro.index,espectro['C_mag'])
```
<StemContainer object of 3 artists>
**Ejercicio para entregar 02-Octubre-2020**
Use las siguientes funciones para definir un periodo de una señal periódica con periodo $T=1$:
\begin{equation}
y_1(t) = \begin{cases}
-1 & 0 \leq t < 0.5 \\
1 & 0.5 \leq t < 1
\end{cases}
\end{equation}
\begin{equation}
y_2(t) = t
\end{equation}
\begin{equation}
y_3(t) = 3 sin(2 \pi t)
\end{equation}
Varíe la cantidad de componentes que reconstruyen cada función y analice la reconstrucción obtenida y los valores de $C_n$
```python
```
| d128ac18a21059fd9a2a7284a255d06e59cab6d8 | 232,579 | ipynb | Jupyter Notebook | .ipynb_checkpoints/4_Series_de_Fourier-checkpoint.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/4_Series_de_Fourier-checkpoint.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/4_Series_de_Fourier-checkpoint.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
] | null | null | null | 74.30639 | 23,058 | 0.652587 | true | 3,748 | Qwen/Qwen-72B | 1. YES
2. YES | 0.824462 | 0.861538 | 0.710305 | __label__spa_Latn | 0.328317 | 0.488609 |
# Lecture 22: Transformations, Log-Normal, Convolutions, Proving Existence
## Stat 110, Prof. Joe Blitzstein, Harvard University
----
## Variance of Hypergeometric, con't
Returning to where we left off in Lecture 21, recall that we are considering $X \sim \operatorname{HGeom}(w, b, n)$ where $p = \frac{w}{w+b}$ and $w + b = N$.
\begin{align}
Var\left( \sum_{j=1}^{n} X_j \right) &= \operatorname{Var}(X_1) + \dots + \operatorname{Var}(X_n) + 2 \, \sum_{i<j} \operatorname{Cov}(X_i, X_j) \\
&= n \, Var(X_1) + 2 \, \binom{n}{2} \operatorname{Cov} (X_1, X_2) & \quad \text{symmetry, amirite?} \\
&= n \, p \, (1-p) + 2 \, \binom{n}{2} \left( \frac{w}{w+b} \, \frac{w-1}{w+b-1} - p^2 \right) \\
&= \frac{N-n}{N-1} \, n \, p \, (1-p) \\
\\
\text{where } \frac{N-n}{N-1} &\text{ is known as the finite population correction}
\end{align}
Note how this closely resembles the variance for a binomial distribution, except for scaling by that finite population correction.
Let's idiot-check this:
\begin{align}
\text{let } n &= 1 \\
\\
\operatorname{Var}(X) &= \frac{N-1}{N-1} 1 \, p \, (1-p) \\
&= p \, (1-p) & \quad \text{ ... just a Bernoulli, since we only sample once!} \\
\\
\text{let } N &\gg n \\
\Rightarrow \frac{N-n}{N-1} &= 1
\\
\operatorname{Var}(X) &= \frac{N-n}{N-1} n \, p \, (1-p) \\
&= n \, p \, (1-p) & \quad \text{ ... Binomial, we probably never sample same element twice!} \\
\end{align}
## Transformations
### Or a change of variables
A function of an r.v. is itself an r.v., and we can use LOTUS to find mean and variance. But what if we want more than just the mean and variance? What if we want to know the entire distribution (PDF)?
### Theorem
> Let $X$ be a continuous r.v. with PDF $f_X, Y = g(X)$.
> Given that $g$ is differentiable, and strictly increasing
> (at least on the region in which we are interested),
> then the PDF of $Y$ is given by
>
> \begin\{align\}
> f_Y(y) &= f_X(x) \, \frac{dx}{dy} & \quad \text{ where } y = g(x) \text{ , } x = g^{-1}(y)
> \end\{align\}
>
And since we know from the [Chain Rule](https://en.wikipedia.org/wiki/Chain_rule) that $\frac{dx}{dy} = \left( \frac{dy}{dx} \right)^{-1}$, you can substitute $\left( \frac{dy}{dx} \right)^{-1}$ for $\frac{dx}{dy}$ if that makes things easier.
#### Proof
\begin{align}
&\text{starting from the CDF...} \\
\\
F_Y(y) &= P(Y \le y) \\
&= P \left(g(x) \le y \right) \\
&= P \left(X \le g^{-1}(y) \right) \\
&= F_X \left( g^{-1}(y) \right) \\
&= F_X(x) \\
\\
&\text{and now differentiating to get the PDF...} \\
\\
\Rightarrow f_{Y}(y) &= f_{X}(x) \frac{dx}{dy}
\end{align}
### Log-Normal
Now let's try applying what we now know about transformations to get the PDF of a [Log-Normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution).
Given the log-normal distribution $Y = e^{z}$, where $Z \sim (0,1)$, find the PDF.
Note that $\frac{dy}{dz} = e^z = y$.
\begin{align}
f_Y(y) &= f_Z{z} \, \frac{dz}{dy} \\
&= \frac{1}{\sqrt{2\pi}} \, e^{-\frac{lny^2}{2}} \, \frac{1}{y} & \quad \text{where }y \gt 0
\end{align}
## Transformations in $\mathbb{R}^n$
### Multi-dimensional Example
Here's a multi-dimensional example.
Given the distribution $\vec{Y} = g(\vec{X})$, where $g \colon \mathbb{R}^n \rightarrow \mathbb{R}^n$, with continuous joint PDF $\vec{X} = \{ X_1, \dots , X_n \}$.
What is the joint PDF of $Y$ in terms of the joint PDF $X$?
\begin{align}
f_Y(\vec{y}) &= f_X(\vec{x}) \, | \frac{d\vec{x}}{d\vec{y}} | \\
\\
\text{where } \frac{d\vec{x}}{d\vec{y}} &=
\begin{bmatrix}
\frac{\partial x_1}{\partial y_1} & \cdots & \frac{\partial x_1}{\partial y_n} \\
\vdots&\ddots&\vdots \\
\frac{\partial x_n}{\partial y_1}& \cdots &\frac{\partial x_n}{\partial y_n}
\end{bmatrix} & \text{... is the Jacobian} \\
\\
\text{and } | \frac{d\vec{x}}{d\vec{y}} | &= \left| \, det \, \frac{d\vec{x}}{d\vec{y}} \, \right| & \quad \text{... absolute value of determinant of Jacobian}
\end{align}
Similar to the previous explanation on transformations, you can substitute $\left( | \, \frac{d\vec{y}}{d\vec{x}} \, | \right)^{-1}$ for $\frac{d\vec{x}}{d\vec{y}}$ if that makes things easier.
## Convolutions
### Distribution for a Sum of Random Variables
Let $T = X + Y$, where $X,Y$ are independent.
\begin{align}
P(T=t) &= \sum_{x} P(X=x) \, P(Y=t-x) & \quad \text{discrete case}\\
\\
f_T(t) &= \int_{-\infty}^{\infty} f_X(x) \, f_Y(t-x) \, dx & \quad \text{continuous case} \\
\end{align}
#### Proof of continuous case
\begin{align}
&\text{starting from the CDF...} \\
\\
F_T(t) &= P(T \le t) \\
&= \int_{-\infty}^{\infty} P(X + Y \le t \, | \, X=x) \, f_X(x) \, dx & \quad \text{ law of total probability} \\
&= \int_{-\infty}^{\infty} P(Y \le t - x) \, f_X(x) \, dx \\
&= \int_{-\infty}^{\infty} F_Y(t-x) \, f_X(x) \, dx \\
\\
&\text{and now differentiating w.r.t. } T \text{ ...} \\
\\
\Rightarrow f_{T}(t) &= \int_{-\infty}^{\infty} f_Y(t-x) \, f_X(x) \, dx
\end{align}
## Proving Existence
### Using Probability to Prove the Existence of Object with Desired Properties
Let us say that $A$ is our desired property.
Can we show that $P(A) \gt 0$ for a _random object_? For if $P(A) \gt 0$, it follows that there should be _at least one object with property $A$_.
Suppose each object has some associated "score". We can pick a random object, and use that to compute the average score. From there, we can reason that there must be an object where this score is $\ge \mathbb{E}(X)$
Suppose we have:
* 100 people
* 15 committees
* each committee has 20 people
* assume that each person is on 3 committees
Show that there exists 2 committees where a group of 3 people are on both committees (overlap $\ge 3$).
Rather than try to enumerate all possible committee permutations, find the average overlap of 2 _random_ committees using indicator random variables.
#### Proof
\begin{align}
\text{let } \, I_1 &= \text{person 1 on both the randomly chosen committees} \\
\\
\text{then } \, P(I_1) &= \frac{\binom{3}{2}}{\binom{15}{2}} \\
\\
\mathbb{E}(overlap) &= 100 \, \frac{\binom{3}{2}}{\binom{15}{2}} & \quad \text{... by symmetry} \\
&= 100 \, \frac{3}{105} \\
&= \frac{20}{7} \\
&= 2.857142857142857 \\
\end{align}
But if the average overlap is $\frac{20}{7}$, since overlap must be an integer, we can safely round up and assume that average overlap is 3. And so we conclude that there must be at least one pair of committees where the overlap $\ge 3$.
This is similar to how Shannon proved his theory on channel capacity.
----
View [Lecture 22: Transformations and Convolutions | Statistics 110](http://bit.ly/2wRz77T) on YouTube.
| 992176428ba56397d6e1d82bacbeee1516152548 | 9,723 | ipynb | Jupyter Notebook | Lecture_22.ipynb | abhra-nilIITKgp/stats-110 | 258461cdfbdcf99de5b96bcf5b4af0dd98d48f85 | [
"BSD-3-Clause"
] | 113 | 2016-04-29T07:27:33.000Z | 2022-02-27T18:32:47.000Z | Lecture_22.ipynb | snoop2head/stats-110 | 88d0cc56ede406a584f6ba46368e548010f2b14a | [
"BSD-3-Clause"
] | null | null | null | Lecture_22.ipynb | snoop2head/stats-110 | 88d0cc56ede406a584f6ba46368e548010f2b14a | [
"BSD-3-Clause"
] | 65 | 2016-12-24T02:02:25.000Z | 2022-02-13T13:20:02.000Z | 38.583333 | 260 | 0.497275 | true | 2,366 | Qwen/Qwen-72B | 1. YES
2. YES | 0.808067 | 0.887205 | 0.716921 | __label__eng_Latn | 0.850217 | 0.503979 |
```python
import numpy as np
import sympy as sp
import pandas as pd
import math
import midterm as p1
import matplotlib.pyplot as plt
# Needed only in Jupyter to render properly in-notebook
%matplotlib inline
```
# Midterm
## Chinmai Raman
### 3/22/2016
$x_{n+1} = rx_n(1-x_n)$ for $x_0$ in $[0,1]$ and $r$ in $[2.9,4]$
### Problem 1
### $x_0$ = 0.5, $r$ = 2.5
```python
p1.sequence(0.5, 2.5, 100)
```
This function is defined only where r is a real value in [2.9, 4]. Please supply an r value in the range.
The function is not graphable at r = 2.5
### $x_0$ = 0.5, $r$ = 3.2
```python
p1.sequence(0.5, 3.2, 10)
```
[0.5,
0.8,
0.512,
0.7995392,
0.512884056522752,
0.7994688034800593,
0.5130189943751092,
0.7994576185134749,
0.5130404310855622,
0.7994558309027286,
0.5130438570827405]
```python
p1.graph(0.5, 3.2, 100)
```
The sequence approaches an oscillation between 0.799 (at odd indices of x) and 0.513 (at even indices of x)
### $x_0$ = 0.5, $r$ = 3.5
```python
p1.sequence(0.5, 3.5, 10)
```
[0.5,
0.875,
0.3828125,
0.826934814453125,
0.5008976948447526,
0.87499717950388,
0.3828199037744718,
0.826940887670016,
0.500883795893397,
0.8749972661668659,
0.38281967628581853]
```python
p1.graph(0.5, 3.5, 100)
```
The sequence approaches an oscillation between two sets of two numbers: 0.501 (at indices 4n), 0.875 (at indices 4n+1), 0.383 (at indices 4n+2), and 0.827 (at indices 4n+3) where n is a whole number
### Varying initial condition $x_0$ in the range $[0,1]$ with r = 3.2
```python
p1.sequence(0.25, 3.2, 10)
```
[0.25,
0.6000000000000001,
0.768,
0.5701632,
0.7842468011704321,
0.5414520192780059,
0.7945015363128827,
0.522460304349926,
0.7983857111312278,
0.5150910956566768,
0.7992712282620194]
```python
p1.graph(0.25, 3.2, 100)
```
```python
p1.sequence(0.75, 3.2, 10)
```
[0.75,
0.6000000000000001,
0.768,
0.5701632,
0.7842468011704321,
0.5414520192780059,
0.7945015363128827,
0.522460304349926,
0.7983857111312278,
0.5150910956566768,
0.7992712282620194]
```python
p1.graph(0.75, 3.2, 100)
```
The sequences both converge to the same pair of values (0.513 and 0.799) at the same speed. At a fixed r, the value of $x_0$ does not have an impact on the values that the sequence converges to.
### Varying initial condition $x_0$ in the range $[0,1]$ with r = 3.5
```python
p1.sequence(0.25, 3.5, 10)
```
[0.25,
0.65625,
0.78955078125,
0.5815612077713013,
0.8517171928541032,
0.44203255687790355,
0.8632392143826028,
0.41320045597148347,
0.8486304370475457,
0.4495988642741304,
0.8661090393113987]
```python
p1.graph(0.25, 3.5, 100)
```
```python
p1.sequence(0.75, 3.5, 10)
```
[0.75,
0.65625,
0.78955078125,
0.5815612077713013,
0.8517171928541032,
0.44203255687790355,
0.8632392143826028,
0.41320045597148347,
0.8486304370475457,
0.4495988642741304,
0.8661090393113987]
```python
p1.graph(0.75, 3.5, 100)
```
The sequences both converge to the same 4 values (0.501, 0.875, 0.383, and 0.827) at the same speed. At a fixed r, the value of $x_0$ does not have an impact on the values that the sequence converges to.
### Problem 2
### $r$ = 3.5441
```python
p1.sequence(0.5, 3.5441, 1000000)[-9:-1]
```
[0.5228976900798744,
0.8841668134458163,
0.3629720474657136,
0.8194786400888046,
0.5242907577195693,
0.883933836008775,
0.36360626458848544,
0.8200932179200039]
```python
p1.graph(0.5, 3.5441, 100)
```
The sequence seems to converge to an oscillation between 8 values, approximately: (0.5229, 0.8842, 0.3630, 0.8195, 0.5243, 0.8839, 0.3636, 0.8201). It could be argued that the sequence oscillates between four points, but there is a slight variation in the 3rd or 4th decimal place in the values of the sequence at the indices 4n, 4n+1, 4n+2, 4n+3. There seems to be an variation in every oscillation at index 4n+k ($0<=k<=3$ and n is a whole number) between two values.
```python
p1.sequence(0.75, 3.5441, 1000000)[-9:-1]
```
[0.8200932179200039,
0.5228976900798744,
0.8841668134458163,
0.3629720474657136,
0.8194786400888046,
0.5242907577195693,
0.883933836008775,
0.36360626458848544]
Varying $x_0$ with a constant $r$ does not change the values that the sequence converges to and oscillates between for a large N
### $r$ = 3.5699
```python
p1.sequence(0.5, 3.5699, 1000000)[-33:-1]
```
[0.49636520717822474,
0.8924278354848516,
0.34271180631453335,
0.8041571880915597,
0.5622178567675878,
0.8786556968344851,
0.3806222498332767,
0.8416001864762425,
0.4759009150485851,
0.8904017238296705,
0.34837402504063025,
0.8104014415155337,
0.5485185089306204,
0.884071292223974,
0.36587634676293335,
0.8282555178586006,
0.5078122597020139,
0.8922571239992436,
0.3431900113236088,
0.8046933989384318,
0.5610523833434796,
0.8791685779017997,
0.3792347235100257,
0.8404106787648518,
0.4787970220678066,
0.890870093361328,
0.34706771325606134,
0.8089811637748658,
0.5516589332793732,
0.8829482028309398,
0.3689515709289408,
0.8311666413487633]
```python
p1.graph(0.5, 3.5699, 100)
```
The sequence seems to converge to an oscillation between 16 values, approximately: (0.5078122597020139, 0.8922571239992436, 0.3431900113236088, 0.8046933989384318, 0.5610523833434796, 0.8791685779017997, 0.3792347235100257, 0.8404106787648518, 0.4787970220678066, 0.890870093361328, 0.34706771325606134, 0.8089811637748658, 0.5516589332793732, 0.8829482028309398, 0.3689515709289408, 0.8311666413487633). It could be argued that the sequence oscillates between eight points, but there is variation in the 2nd decimal place in the values of the sequence at the indices 8n, 8n+1, 8n+2, 8n+3, 8n+4, 8n+5, 8n+6, 8n+7. There seems to be an variation in every oscillation at index 4n+k ($0<=k<=7$ and n is a whole number) between four values. These oscillations of oscillations are larger at r = 3.5699 than at r = 3.5441. The periodicity of the oscillations increases as r increases.
```python
p1.sequence(0.75, 3.5699, 1000000)[-9:-1]
```
[0.8046933989384318,
0.5610523833434796,
0.8791685779017997,
0.3792347235100257,
0.8404106787648518,
0.4787970220678066,
0.890870093361328,
0.34706771325606134]
Varying $x_0$ with a constant $r$ does not change the values that the sequence converges to and oscillates between for a large N
### $r$ = 3.57
```python
p1.sequence(0.5, 3.57, 1000000)[-33:-1]
```
[0.4960958395524152,
0.8924455843863822,
0.3426716739654025,
0.8041346382429303,
0.5622825749004056,
0.878651544683678,
0.3806441375199326,
0.8416424157871522,
0.4758112412543736,
0.8904112071027348,
0.34835734904585025,
0.8104060878894045,
0.5485235763462671,
0.8840943012626876,
0.3658234968229741,
0.8282280976028125,
0.5078907479202178,
0.8922777178672167,
0.34314194567165146,
0.8046621163285997,
0.5611361517008182,
0.8791566643101159,
0.37927759935593724,
0.8404711840783617,
0.4786633609398336,
0.8908747497660097,
0.3470644400834328,
0.8090002508114317,
0.5516322766445462,
0.8829827655903487,
0.3688673985009418,
0.8311111397419982]
```python
p1.graph(0.5, 3.57, 100)
```
The sequence seems to converge to an oscillation between 16 values, approximately: (0.5078907479202178, 0.8922777178672167, 0.34314194567165146, 0.8046621163285997, 0.5611361517008182, 0.8791566643101159, 0.37927759935593724, 0.8404711840783617, 0.4786633609398336, 0.8908747497660097, 0.3470644400834328, 0.8090002508114317, 0.5516322766445462, 0.8829827655903487, 0.3688673985009418, 0.8311111397419982). It could be argued that the sequence oscillates between thirty-two points, but the values were two fine to compare adequately by eye. There seems to be an variation in every oscillation at index 4n+k ($0<=k<=7$ and n is a whole number) between eight values. These oscillations of oscillations are larger at r = 3.57 than at r = 3.5699 and r = 3.5441. Again, the periodicity of the oscillations is greater as r increases.
```python
p1.sequence(0.2, 3.57, 100000)[-8:-1]
```
[0.5098314419789786,
0.8921549336125519,
0.34348579371470583,
0.8050467925342573,
0.5602988420814857,
0.8795196572277663,
0.37829444230645615]
Varying $x_0$ with a constant $r$ does not change the values that the sequence converges to and oscillates between for a large N
### Problem 3
### Bifurcation Diagram
```python
p1.asymptote_graph(0.5, 2.9, 4, 0.0001, 200)
```
### Problem 4
The following are graphs zoomed into specific regions of the graph above:
### Zoom graph 1
```python
p1.zoom_graph(0.5, 2.9, 4, 0.0001, 1000, [3.54, 3.55, 0.78, 0.9])
```
### Zoom graph 2
```python
p1.zoom_graph(0.5, 2.9, 4, 0.0001, 1000, [3.568, 3.572, 0.33, 0.39])
```
The original bifurcation diagram in problem 3 shows the repeated bifurcation in $x_n$ as r approaches 4 from 2.9 with an $x_0$ of 0.5 . This means that the oscillations go from converging to one value to oscillating between two values, four values, eight values, and so on. The Bifurcations occur at and around certain values of r, for example: the one to two bifurcation occurs around r = 3.0 . The second is around 3.54, as illustrated in Zoom graph 1. As r increases, the periodicity of the oscillations increases, as can be seen in Zoom graph 2. The number of values that the discrete update map converges to double with each bifurcation, as r increases. I would not have guessed this complicated a behavior from the simple equation above. I would plot $f(x)=rx_n(1-x_n)$ and $x_{n+1}=x_n$ with $x_n$ and $x_{n+1}$ the recommend using cobwebbing (recursive graphical method). I would like to explore further the behavior of this update map at values of r below 2.9 and above 4.
```python
```
| 22fd6027671b5907b5a2ab82de098dc92883e97e | 322,677 | ipynb | Jupyter Notebook | Midterm.ipynb | ChinmaiRaman/phys227-midterm | c65a22052044799e34ec31d78827266629137636 | [
"MIT"
] | null | null | null | Midterm.ipynb | ChinmaiRaman/phys227-midterm | c65a22052044799e34ec31d78827266629137636 | [
"MIT"
] | null | null | null | Midterm.ipynb | ChinmaiRaman/phys227-midterm | c65a22052044799e34ec31d78827266629137636 | [
"MIT"
] | null | null | null | 340.017914 | 50,854 | 0.927373 | true | 4,051 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.851953 | 0.717196 | __label__eng_Latn | 0.818317 | 0.504618 |
<table>
<tr align=left><td>
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
</table>
Note: This material largely follows the text "Numerical Linear Algebra" by Trefethen and Bau (SIAM, 1997) and is meant as a guide and supplement to the material presented there.
```python
%matplotlib inline
%precision 3
import numpy
import matplotlib.pyplot as plt
```
# Numerical Linear Algebra
Numerical methods for linear algebra problems lies at the heart of many numerical approaches and is something we will spend some time on. Roughly we can break down problems that we would like to solve into two general problems, solving a system of equations
$$A \mathbf{x} = \mathbf{b}$$
and solving the eigenvalue problem
$$A \mathbf{v} = \lambda \mathbf{v}.$$
We examine each of these problems separately and will evaluate some of the fundamental properties and methods for solving these problems. We will be careful in deciding how to evaluate the results of our calculations and try to gain some understanding of when and how they fail.
## General Problem Specification
The number and power of the different tools made available from the study of linear algebra makes it an invaluable field of study. Before we dive in to numerical approximations we first consider some of the pivotal problems that numerical methods for linear algebra are used to address.
For this discussion we will be using the common notation $m \times n$ to denote the dimensions of a matrix $A$. The $m$ refers to the number of rows and $n$ the number of columns. If a matrix is square, i.e. $m = n$, then we will use the notation that $A$ is $m \times m$.
### Systems of Equations
The first type of problem is to find the solution to a linear system of equations. If we have $m$ equations for $m$ unknowns it can be written in matrix/vector form,
$$A \mathbf{x} = \mathbf{b}.$$
For this example $A$ is an $m \times m$ matrix, denoted as being in $\mathbb{R}^{m\times m}$, and $\mathbf{x}$ and $\mathbf{b}$ are column vectors with $m$ entries, denoted as $\mathbb{R}^m$.
#### Example: Vandermonde Matrix
We have data $(x_i, y_i), ~~ i = 1, 2, \ldots, m$ that we want to fit a polynomial of order $m-1$. Solving the linear system $A \mathbf{p} = \mathbf{y}$ does this for us where
$$A = \begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{m-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{m-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^{m-1}
\end{bmatrix} \quad \quad \mathbf{y} = \begin{bmatrix}
y_1 \\ y_2 \\ \vdots \\ y_m
\end{bmatrix}$$
and $\mathbf{p}$ are the coefficients of the interpolating polynomial $\mathcal{P}_N(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_m x^{m-1}$. The solution to this system satisfies $\mathcal{P}_N(x_i)=y_i$ for $i=1, 2, \ldots, m$.
#### Example: Linear least squares 1
In a similar case as above, say we want to fit a particular function (could be a polynomial) to a given number of data points except in this case we have more data points than free parameters. In the case of polynomials this could be the same as saying we have $m$ data points but only want to fit a $n - 1$ order polynomial through the data where $n - 1 \leq m$. One of the common approaches to this problem is to minimize the "least-squares" error between the data and the resulting function:
$$
E = \left( \sum^m_{i=1} |y_i - f(x_i)|^2 \right )^{1/2}.
$$
But how do we do this if our matrix $A$ is now $m \times n$ and looks like
$$
A = \begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^{n-1}
\end{bmatrix}?
$$
Turns out if we solve the system
$$A^T A \mathbf{x} = A^T \mathbf{b}$$
we can guarantee that the error is minimized in the least-squares sense[<sup>1</sup>](#footnoteRegression). (Although we will also show that this is not the most numerically stable way to solve this problem)
#### Example: Linear least squares 2
Fitting a line through data that has random noise added to it.
```python
# Linear Least Squares Problem
# First define the independent and dependent variables.
N = 20
x = numpy.linspace(-1.0, 1.0, N)
y = x + numpy.random.random((N))
# Define the Vandermonde matrix based on our x-values
A = numpy.array([ numpy.ones(x.shape), x]).T
A = numpy.array([ numpy.ones(x.shape), x, x**2]).T
A
```
```python
# Determine the coefficients of the polynomial that will
# result in the smallest sum of the squares of the residual.
p = numpy.linalg.solve(numpy.dot(A.T, A), numpy.dot(A.T, y))
print("Error in slope = %s, y-intercept = %s" % (numpy.abs(p[1] - 1.0), numpy.abs(p[0] - 0.5)))
print(p)
```
```python
# Plot it out, cuz pictures are fun!
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
f = numpy.zeros(x.shape)
for i in range(len(p)):
f += p[i] * x**i
axes.plot(x, y, 'ko')
axes.plot(x, f, 'r')
axes.set_title("Least Squares Fit to Data")
axes.set_xlabel("$x$")
axes.set_ylabel("$f(x)$ and $y_i$")
axes.grid()
plt.show()
```
### Eigenproblems
Eigenproblems come up in a variety of contexts and often are integral to many problem of scientific and engineering interest. It is such a powerful idea that it is not uncommon for us to take a problem and convert it into an eigenproblem. Here we introduce the idea and give some examples.
As a review, if $A \in \mathbb{C}^{m\times m}$ (a square matrix with complex values), a non-zero vector $\mathbf{v}\in\mathbb{C}^m$ is an **eigenvector** of $A$ with a corresponding **eigenvalue** $\lambda \in \mathbb{C}$ if
$$A \mathbf{v} = \lambda \mathbf{v}.$$
One way to interpret the eigenproblem is that we are attempting to ascertain the "action" of the matrix $A$ on some subspace of $\mathbb{C}^m$ where this action acts like scalar multiplication. This subspace is called an **eigenspace**.
### General idea of EigenProblems
Rewriting the standard Eigen problem $A\mathbf{v}=\lambda\mathbf{v}$ for $A \in \mathbb{C}^{m\times m}$, $\mathbf{v}\in\mathbb{C}^m$ as
$$
(A - \lambda I)\mathbf{v} = 0
$$
it becomes clear that for $\mathbf{v}$ to be non-trivial (i.e. $\neq \mathbf{0}$), requires that the matrix $(A-\lambda I)$ be singular,
This is equivalent to finding all values of $\lambda$ such that $|A-\lambda I| = 0$ (the determinant of singular matrices is always zero). However, it can also be shown that
$$
| A-\lambda I| = P_m(\lambda)
$$
which is a $m$th order polynomial in $\lambda$. Thus $P_m(\lambda)=0$ implies the eigenvalues are the $m$ roots of $P$, and the **eigenspace** corresponding to $\lambda_i$ is just $N(A-\lambda_i I)$
### Solving EigenProblems
The temptation (and what) we usually teach in introductory linear algebra is to simply find the roots of $P_m(\lambda)$. However that would be **wrong**. The best algorithms for finding Eigenvalues are completely unrelated to rootfinding as we shall see.
#### Example
Compute the eigenspace of the matrix
$$
A = \begin{bmatrix}
1 & 2 \\
2 & 1
\end{bmatrix}
$$
Recall that we can find the eigenvalues of a matrix by computing $\det(A - \lambda I) = 0$.
In this case we have
$$\begin{aligned}
A - \lambda I &= \begin{bmatrix}
1 & 2 \\
2 & 1
\end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \lambda\\
&= \begin{bmatrix}
1 - \lambda & 2 \\
2 & 1 - \lambda
\end{bmatrix}.
\end{aligned}$$
The determinant of the matrix is
$$\begin{aligned}
\begin{vmatrix}
1 - \lambda & 2 \\
2 & 1 - \lambda
\end{vmatrix} &= (1 - \lambda) (1 - \lambda) - 2 \cdot 2 \\
&= 1 - 2 \lambda + \lambda^2 - 4 \\
&= \lambda^2 - 2 \lambda - 3.
\end{aligned}$$
This result is sometimes referred to as the characteristic equation of the matrix, $A$.
Setting the determinant equal to zero we can find the eigenvalues as
$$\begin{aligned}
& \\
\lambda &= \frac{2 \pm \sqrt{4 - 4 \cdot 1 \cdot (-3)}}{2} \\
&= 1 \pm 2 \\
&= -1 \mathrm{~and~} 3
\end{aligned}$$
The eigenvalues are used to determine the eigenvectors. The eigenvectors are found by going back to the equation $(A - \lambda I) \mathbf{v}_i = 0$ and solving for each vector. A trick that works some of the time is to normalize each vector such that the first entry is 1 ($\mathbf{v}_1 = 1$):
$$
\begin{bmatrix}
1 - \lambda & 2 \\
2 & 1 - \lambda
\end{bmatrix} \begin{bmatrix} 1 \\ v_2 \end{bmatrix} = 0
$$
$$\begin{aligned}
1 - \lambda + 2 v_2 &= 0 \\
v_2 &= \frac{\lambda - 1}{2}
\end{aligned}$$
We can check this by
$$\begin{aligned}
2 + \left(1- \lambda \frac{\lambda - 1}{2}\right) & = 0\\
(\lambda - 1)^2 - 4 &=0
\end{aligned}$$
which by design is satisfied by our eigenvalues. Another sometimes easier approach is to plug-in the eigenvalues to find the Null space of $A-\lambda I$ where the eigenvectors will be a basis for $N(A-\lambda I)$. The eigenvectors are therefore
$$\mathbf{v} = \begin{bmatrix}1 \\ -1 \end{bmatrix}, \begin{bmatrix}1 \\ 1 \end{bmatrix}.$$
Note that these are linearly independent (and because $A^T = A$, also orthogonal)
## Fundamentals
### Matrix-Vector Multiplication
One of the most basic operations we can perform with matrices is to multiply them be a vector. This matrix-vector product $A \mathbf{x} = \mathbf{b}$ is defined as
$$
b_i = \sum^n_{j=1} a_{ij} x_j \quad \text{where}\quad i = 1, \ldots, m
$$
### row picture
In addition to index form, we can consider matrix-vector as a sequence of inner products (dot-products between the rows of $A$ and the vector $\mathbf{x}$.
\begin{align}
\mathbf{b} &= A \mathbf{x}, \\
&=
\begin{bmatrix} \mathbf{a}_1^T \mathbf{x} \\ \mathbf{a}_2^T \mathbf{x} \\ \vdots \\ \mathbf{a}_m^T \mathbf{x}\end{bmatrix}
\end{align}
where $\mathbf{a}_i^T$ is the $i$th **row** of $A$
#### Operation Counts
This view is convenient for calculating the **Operation counts** required for to compute $A\mathbf{x}$. If $A\in\mathbb{C}^{m\times n}$ and $\mathbf{x}\in\mathbb{C}^n$. Then just counting the number of multiplications involved to compute $A\mathbf{x}$ is $O(??)$
### Column picture
An alternative (and entirely equivalent way) to write the matrix-vector product is as a linear combination of the columns of $A$ where each column's weighting is $x_j$.
$$
\begin{align}
\mathbf{b} &= A \mathbf{x}, \\
&=
\begin{bmatrix} & & & \\ & & & \\ \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \\ & & & \\ & & & \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \\
&= x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n.
\end{align}
$$
This view will be useful later when we are trying to interpret various types of matrices.
One important property of the matrix-vector product is that is a **linear** operation, also known as a **linear operator**. This means that the for any $\mathbf{x}, \mathbf{y} \in \mathbb{C}^n$ and any $c \in \mathbb{C}$ we know that
1. $A (\mathbf{x} + \mathbf{y}) = A\mathbf{x} + A\mathbf{y}$
1. $A\cdot (c\mathbf{x}) = c A \mathbf{x}$
#### Example: Vandermonde Matrix
In the case where we have $m$ data points and want $m - 1$ order polynomial interpolant the matrix $A$ is a square, $m \times m$, matrix as before. Using the above interpretation the polynomial coefficients $p$ are the weights for each of the monomials that give exactly the $y$ values of the data.
#### Example: Numerical matrix-vector multiply
Write a matrix-vector multiply function and check it with the appropriate `numpy` routine. Also verify the linearity of the matrix-vector multiply.
```python
#A x = b
#(m x n) (n x 1) = (m x 1)
def matrix_vector_product(A, x):
m, n = A.shape
b = numpy.zeros(m)
for i in range(m):
for j in range(n):
b[i] += A[i, j] * x[j]
return b
m = 4
n = 3
A = numpy.random.uniform(size=(m,n))
x = numpy.random.uniform(size=(n))
y = numpy.random.uniform(size=(n))
c = numpy.random.uniform()
b = matrix_vector_product(A, x)
print(numpy.allclose(b, numpy.dot(A, x)))
print(numpy.allclose(matrix_vector_product(A, (x + y)), matrix_vector_product(A, x) + matrix_vector_product(A, y)))
print(numpy.allclose(matrix_vector_product(A, c * x), c*matrix_vector_product(A, x)))
```
### Matrix-Matrix Multiplication
The matrix product with another matrix $A B = C$ is defined as
$$
c_{ij} = \sum^m_{k=1} a_{ik} b_{kj} = \mathbf{a}_i^T\mathbf{b}_j
$$
i.e. each component of $C$ is a dot-product between the $i$th row of $A$ and the $j$th column of $B$
As with matrix-vector multiplication, Matrix-matrix multiplication can be thought of multiple ways
* $m\times p$ dot products
* $A$ multiplying the columns of $B$
$$
C = AB = \begin{bmatrix}
A\mathbf{b}_1 & A\mathbf{b}_2 & \ldots & A\mathbf{b}_p\\
\end{bmatrix}
$$
* Linear combinations of the rows of $B$
$$
C = AB = \begin{bmatrix}
\mathbf{a}_1^T B \\ \mathbf{a}_2^T B \\ \vdots \\ \mathbf{a}_m^T B\\
\end{bmatrix}
$$
### Questions
* What are the dimensions of $A$ and $B$ so that the multiplication works?
* What are the Operations Counts for Matrix-Matrix Multiplication?
* Comment on the product $\mathbf{c}=(AB)\mathbf{x}$ vs. $\mathbf{d} = A(B\mathbf{x})$
#### Example: Outer Product
The product of two vectors $\mathbf{u} \in \mathbb{C}^m$ and $\mathbf{v} \in \mathbb{C}^n$ is a $m \times n$ matrix where the columns are the vector $u$ multiplied by the corresponding value of $v$:
$$
\begin{align}
\mathbf{u} \mathbf{v}^T &=
\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}, \\
& = \begin{bmatrix} v_1u_1 & \cdots & v_n u_1 \\ \vdots & & \vdots \\ v_1 u_m & \cdots & v_n u_m \end{bmatrix}.
\end{align}
$$
It is useful to think of these as operations on the column vectors, and an equivalent way to express this relationship is
$$
\begin{align}
\mathbf{u} \mathbf{v}^T &=
\begin{bmatrix} \\ \mathbf{u} \\ \\ \end{bmatrix}
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}, \\
&=
\begin{bmatrix} & & & \\ & & & \\ \mathbf{u}v_1 & \mathbf{u} v_2 & \cdots & \mathbf{u} v_n \\ & & & \\ & & & \end{bmatrix}, \\
& = \begin{bmatrix} v_1u_1 & \cdots & v_n u_1 \\ \vdots & & \vdots \\ v_1 u_m & \cdots & v_n u_m \end{bmatrix}.
\end{align}
$$
### rank 1 updates
We call any matrix of the form $\mathbf{u}\mathbf{v}^T$ a "rank one matrix" ( because its rank r=?). These sort of matrix operations are very common in numerical algorithms for orthogonalization, eigenvalues and the original page-rank algorithm for google. Again, the order of operations is critical.
Comment on the difference in values and operation counts between
$$
\mathbf{y} = (\mathbf{u}\mathbf{v}^T)\mathbf{x}
$$
and
$$
\mathbf{y}' = \mathbf{u}(\mathbf{v}^T\mathbf{x})
$$
for $\mathbf{u}$, $\mathbf{v}$, $\mathbf{x}$, $\mathbf{y}$, $\mathbf{y}'\in\mathbb{R}^n$,
#### Example: Upper Triangular Multiplication
Consider the multiplication of a matrix $A \in \mathbb{C}^{m\times n}$ and the **upper-triangular** matrix $R$ defined as the $n \times n$ matrix with entries $r_{ij} = 1$ for $i \leq j$ and $r_{ij} = 0$ for $i > j$. The product can be written as
$$
\begin{bmatrix} \\ \\ \mathbf{b}_1 & \cdots & \mathbf{b}_n \\ \\ \\ \end{bmatrix} = \begin{bmatrix} \\ \\ \mathbf{a}_1 & \cdots & \mathbf{a}_n \\ \\ \\ \end{bmatrix} \begin{bmatrix} 1 & \cdots & 1 \\ & \ddots & \vdots \\ & & 1 \end{bmatrix}.
$$
The columns of $B$ are then
$$
\mathbf{b}_j = A \mathbf{r}_j = \sum^j_{k=1} \mathbf{a}_k
$$
so that $\mathbf{b}_j$ is the sum of the first $j$ columns of $A$.
#### Example: Write Matrix-Matrix Multiplication
Write a function that computes matrix-matrix multiplication and demonstrate the following properties:
1. $A (B + C) = AB + AC$ (for square matrices))
1. $A (cB) = c AB$ where $c \in \mathbb{C}$
1. $AB \neq BA$ in general
```python
def matrix_matrix_product(A, B):
C = numpy.zeros((A.shape[0], B.shape[1]))
for i in range(A.shape[0]):
for j in range(B.shape[1]):
for k in range(A.shape[1]):
C[i, j] += A[i, k] * B[k, j]
return C
m = 4
n = 4
p = 4
A = numpy.random.uniform(size=(m, n))
B = numpy.random.uniform(size=(n, p))
C = numpy.random.uniform(size=(m, p))
c = numpy.random.uniform()
print(numpy.allclose(matrix_matrix_product(A, B), numpy.dot(A, B)))
print(numpy.allclose(matrix_matrix_product(A, (B + C)), matrix_matrix_product(A, B) + matrix_matrix_product(A, C)))
print(numpy.allclose(matrix_matrix_product(A, c * B), c*matrix_matrix_product(A, B)))
print(numpy.allclose(matrix_matrix_product(A, B), matrix_matrix_product(B, A)))
```
#### NumPy Products
NumPy and SciPy contain routines that are optimized to perform matrix-vector and matrix-matrix multiplication. Given two `ndarray`s you can take their product by using the `dot` function.
```python
n = 10
m = 5
# Matrix vector with identity
A = numpy.identity(n)
x = numpy.random.random(n)
print(numpy.allclose(x, numpy.dot(A, x)))
print(x-A.dot(x))
print(A*x)
# Matrix vector product
A = numpy.random.random((m, n))
print(numpy.dot(A, x))
# Matrix matrix product
B = numpy.random.random((n, m))
print(numpy.dot(A, B))
```
### Range and Null-Space
#### Range
- The **range** of a matrix $A \in \mathbb R^{m \times n}$ (similar to any function), denoted as $\text{range}(A)$, is the set of vectors that can be expressed as $A x$ for $x \in \mathbb R^n$.
- We can also then say that that $\text{range}(A)$ is the space **spanned** by the columns of $A$. In other words the columns of $A$ provide a basis for $\text{range}(A)$, also called the **column space** of the matrix $A$.
- $C(A)$ controls the **existence** of solutions to $A\mathbf{x}=\mathbf{b}$
#### Null-Space
- Similarly the **null-space** of a matrix $A$, denoted $\text{null}(A)$ is the set of vectors $x$ that satisfy $A x = 0$.
- $N(A)$ controls the **uniqueness** of solutions to $A\mathbf{x}=\mathbf{b}$
- A similar concept is the **rank** of the matrix $A$, denoted as $\text{rank}(A)$, is the dimension of the column space. A matrix $A$ is said to have **full-rank** if $\text{rank}(A) = \min(m, n)$. This property also implies that the matrix mapping is **one-to-one**.
### Inverse
A **non-singular** or **invertible** matrix is characterized as a matrix with full-rank. This is related to why we know that the matrix is one-to-one, we can use it to transform a vector $x$ and using the inverse, denoted $A^{-1}$, we can map it back to the original matrix. The familiar definition of this is
\begin{align*}
A \mathbf{x} &= \mathbf{b}, \\
A^{-1} A \mathbf{x} & = A^{-1} \mathbf{b}, \\
x &=A^{-1} \mathbf{b}.
\end{align*}
Since $A$ has full rank, its columns form a basis for $\mathbb{R}^m$ and the vector $\mathbf{b}$ must be in the column space of $A$.
There are a number of important properties of a non-singular matrix A. Here we list them as the following equivalent statements
1. $A$ has an inverse $A^{-1}$
1. $\text{rank}(A) = m$
1. $\text{range}(A) = \mathbb{C}^m$
1. $\text{null}(A) = {0}$
1. 0 is not an eigenvalue of $A$
1. $\text{det}(A) \neq 0$
#### Example: Properties of invertible matrices
Show that given an invertible matrix that the rest of the properties hold. Make sure to search the `numpy` packages for relevant functions.
```python
m = 3
for n in range(100):
A = numpy.random.uniform(size=(m, m))
if numpy.linalg.det(A) != 0:
break
print(numpy.dot(numpy.linalg.inv(A), A))
print(numpy.linalg.matrix_rank(A))
print("N(A)= {}".format(numpy.linalg.solve(A, numpy.zeros(m))))
print("Eigenvalues = {}".format(numpy.linalg.eigvals(A)))
```
### Orthogonal Vectors and Matrices
Orthogonality is a very important concept in linear algebra that forms the basis of many of the modern methods used in numerical computations.
Two vectors are said to be *orthogonal* if their **inner-product** or **dot-product** defined as
$$
< \mathbf{x}, \mathbf{y} > \equiv (\mathbf{x}, \mathbf{y}) \equiv \mathbf{x}^T\mathbf{y} \equiv \mathbf{x} \cdot \mathbf{y} = \sum^m_{i=1} x_i y_i = 0
$$
Here we have shown the various notations you may run into (the inner-product is in-fact a general term for a similar operation for mathematical objects such as functions).
If $\langle \mathbf{x},\mathbf{y} \rangle = 0$ then we say $\mathbf{x}$ and $\mathbf{y}$ are orthogonal. The reason we use this terminology is that the inner-product of two vectors can also be written in terms of the angle between them where
$$
\cos \theta = \frac{\langle \mathbf{x}, \mathbf{y} \rangle}{||\mathbf{x}||_2~||\mathbf{y}||_2}
$$
and $||\mathbf{x}||_2$ is the Euclidean ($\ell^2$) norm of the vector $\mathbf{x}$.
We can write this in terms of the inner-product as well as
$$
||\mathbf{x}||_2^2 = \langle \mathbf{x}, \mathbf{x} \rangle = \mathbf{x}^T\mathbf{x} = \sum^m_{i=1} |x_i|^2.
$$
$$
||\mathbf{x}||_2 = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}
$$
The generalization of the inner-product to complex spaces is defined as
$$
\langle x, y \rangle = \sum^m_{i=1} x_i^* y_i
$$
where $x_i^*$ is the complex-conjugate of the value $x_i$.
#### Orthonormality
Taking this idea one step further we can say a set of vectors $\mathbf{x} \in X$ are orthogonal to $\mathbf{y} \in Y$ if $\forall \mathbf{x},\mathbf{y}$ $< \mathbf{x}, \mathbf{y} > = 0$. If $\forall \mathbf{x},\mathbf{y}$ $||\mathbf{x}|| = 1$ and $||\mathbf{y}|| = 1$ then they are also called orthonormal. Note that we dropped the 2 as a subscript to the notation for the norm of a vector. Later we will explore other ways to define a norm of a vector other than the Euclidean norm defined above.
Another concept that is related to orthogonality is linear-independence. A set of vectors $\mathbf{x} \in X$ are **linearly independent** if $\forall \mathbf{x} \in X$ that each $\mathbf{x}$ cannot be written as a linear combination of the other vectors in the set $X$.
An equivalent statement is that given a set of $n$ vectors $\mathbf{x}_i$, the only set of scalars $c_i$ that satisfies
$$
\sum_{i=1}^n c_i\mathbf{x}_i = \mathbf{0}
$$
is if $c_i=0$ for all $i\in[1,n]$
This can be related directly through the idea of projection. If we have a set of vectors $\mathbf{x} \in X$ we can project another vector $\mathbf{v}$ onto the vectors in $X$ by using the inner-product. This is especially powerful if we have a set of **orthogonal** vectors $X$, which are said to **span** a space (or provide a **basis** for a space), s.t. any vector in the space spanned by $X$ can be expressed as a linear combination of the basis vectors $X$
$$
\mathbf{v} = \sum^n_{i=1} \, \langle \mathbf{v}, \mathbf{x}_i \rangle \, \mathbf{x}_i.
$$
Note if $\mathbf{v} \in X$ that
$$
\langle \mathbf{v}, \mathbf{x}_i \rangle = 0 \quad \forall \mathbf{x}_i \in X \setminus \mathbf{v}.
$$
Looping back to matrices, the column space of a matrix is spanned by its linearly independent columns. Any vector $v$ in the column space can therefore be expressed via the equation above. A special class of matrices are called **unitary** matrices when complex-valued and **orthogonal** when purely real-valued if the columns of the matrix are orthonormal to each other. Importantly this implies that for a unitary matrix $Q$ we know the following
1. $Q^* = Q^{-1}$
1. $Q^*Q = I$
where $Q^*$ is called the **adjoint** of $Q$. The adjoint is defined as the transpose of the original matrix with the entries being the complex conjugate of each entry as the notation implies.
As an example if we have the matrix
$$
\begin{aligned}
Q &= \begin{bmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \\ q_{31} & q_{32} \end{bmatrix} \quad \text{then} \\
Q^* &= \begin{bmatrix} q^*_{11} & q^*_{21} & q^*_{31} \\ q^*_{12} & q^*_{22} & q^*_{32} \end{bmatrix}
\end{aligned}
$$
The important part of being an unitary matrix is that the projection onto the column space of the matrix $Q$ preserves geometry in an Euclidean sense, i.e. preserves the Cartesian distance.
### Vector Norms
Norms (and also measures) provide a means for measure the "size" or distance in a space. In general a norm is a function, denoted by $||\cdot||$, that maps $\mathbb{C}^m \rightarrow \mathbb{R}$. In other words we stick in a multi-valued object and get a single, real-valued number out the other end. All norms satisfy the properties:
1. $~~~~||\mathbf{x}|| \geq 0$
1. $~~~~||\mathbf{x}|| = 0$ only if $\mathbf{x} = \mathbf{0}$
1. $$||\mathbf{x} + \mathbf{y}||\leq ||\mathbf{x}|| + ||\mathbf{y}||$$ (triangle inequality)
1. $~~~||c \mathbf{x}|| = |c| ~ ||\mathbf{x}||$ where $c \in \mathbb{C}$
There are a number of relevant norms that we can define beyond the Euclidean norm, also know as the 2-norm or $\ell_2$ norm:
1. $\ell_1$ norm:
$$
||\mathbf{x}||_1 = \sum^m_{i=1} |x_i|,
$$
1. $\ell_2$ norm:
$$
||\mathbf{x}||_2 = \left( \sum^m_{i=1} |x_i|^2 \right)^{1/2},
$$
3. $\ell_p$ norm:
$$
||\mathbf{x}||_p = \left( \sum^m_{i=1} |x_i|^p \right)^{1/p}, \quad \quad 1 \leq p < \infty,
$$
1. $\ell_\infty$ norm:
$$
||\mathbf{x}||_\infty = \max_{1\leq i \leq m} |x_i|,
$$
1. weighted $\ell_p$ norm:
$$
||\mathbf{x}||_{W_p} = \left( \sum^m_{i=1} |w_i x_i|^p \right)^{1/p}, \quad \quad 1 \leq p < \infty,
$$
These are also related to other norms denoted by capital letters ($L_2$ for instance). In this case we use the lower-case notation to denote finite or discrete versions of the infinite dimensional counterparts.
#### Example: Comparisons Between Norms
Compute the norms given some vector $\mathbf{x}$ and compare their values. Verify the properties of the norm for one of the norms.
```python
def pnorm(x, p):
""" return the vector p norm of a vector
parameters:
-----------
x: numpy array
vector
p: float or numpy.inf
value of p norm such that ||x||_p = (sum(|x_i|^p))^{1/p} for p< inf
for infinity norm return max(abs(x))
returns:
--------
pnorm: float
pnorm of x
"""
if p == numpy.inf:
norm = numpy.max(numpy.abs(x))
else:
norm = numpy.sum(numpy.abs(x)**p)**(1./p)
return norm
```
```python
m = 10
p = 4
x = numpy.random.uniform(size=m)
ell_1 = pnorm(x, 1)
ell_2 = pnorm(x, 2)
ell_p = pnorm(x, p)
ell_infty = pnorm(x, numpy.inf)
print('x = {}'.format(x))
print()
print("L_1 = {}\nL_2 = {}\nL_{} = {}\nL_inf = {}".format(ell_1, ell_2, p, ell_p, ell_infty))
y = numpy.random.uniform(size=m)
print()
print("Properties of norms:")
print('y = {}\n'.format(y))
p = 2
print('||x+y||_{p} = {nxy}\n||x||_{p} + ||y||_{p} = {nxny}'.format(
p=p,nxy=pnorm(x+y, p), nxny=pnorm(x, p) + pnorm(y, p)))
c = 0.1
print('||c x||_{} = {}'.format(p,pnorm(c * x, p)))
print(' c||x||_{} = {}'.format(p,c * pnorm(x, p)))
```
### Matrix Norms
The most direct way to consider a matrix norm is those induced by a vector-norm. Given a vector norm, we can define a matrix norm as the smallest number $C$ that satisfies the inequality
$$
||A \mathbf{x}||_{m} \leq C ||\mathbf{x}||_{n}.
$$
or as the supremum of the ratios so that
$$
C = \sup_{\mathbf{x}\in\mathbb{C}^n ~ \mathbf{x}\neq\mathbf{0}} \frac{||A \mathbf{x}||_{m}}{||\mathbf{x}||_n}.
$$
Noting that $||A \mathbf{x}||$ lives in the column space and $||\mathbf{x}||$ on the domain we can think of the matrix norm as the "size" of the matrix that maps the domain to the range. Also noting that if $||\mathbf{x}||_n = 1$ we also satisfy the condition we can write the induced matrix norm as
$$
||A||_{(m,n)} = \sup_{\mathbf{x} \in \mathbb{C}^n ~ ||\mathbf{x}||_{n} = 1} ||A \mathbf{x}||_{m}.
$$
This definition has a **geometric interpretation**. The set of all $\mathbf{x}$ such that $||\mathbf{x}||_n = 1$ is the "unit sphere" in $\mathbb{C}^n$. So the induced matrix norm is the largest vector in the deformed "sphere" and measures how much the matrix distorts the unit sphere.
#### Example: Induced Matrix Norms
Consider the matrix
$$
A = \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix}.
$$
Compute the induced-matrix norm of $A$ for the vector norms $\ell_2$ and $\ell_\infty$.
$\ell^2$: For both of the requested norms the unit-length vectors $[1, 0]$ and $[0, 1]$ can be used to give an idea of what the norm might be and provide a lower bound.
$$
||A||_2 = \sup_{x \in \mathbb{R}^n} \left( ||A \cdot [1, 0]^T||_2, ||A \cdot [0, 1]^T||_2 \right )
$$
computing each of the norms we have
$$\begin{aligned}
\begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
\begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix} &= \begin{bmatrix} 2 \\ 2 \end{bmatrix}
\end{aligned}$$
which translates into the norms $||A \cdot [1, 0]^T||_2 = 1$ and $||A \cdot [0, 1]^T||_2 = 2 \sqrt{2}$. This implies that the $\ell_2$ induced matrix norm of $A$ is at least $||A||_{2} = 2 \sqrt{2} \approx 2.828427125$.
The exact value of $||A||_2$ can be computed using the spectral radius defined as
$$
\rho(A) = \max_{i} |\lambda_i|,
$$
where $\lambda_i$ are the eigenvalues of $A$. With this we can compute the $\ell_2$ norm of $A$ as
$$
||A||_2 = \sqrt{\rho(A^\ast A)}
$$
Computing the norm again here we find
$$
A^\ast A = \begin{bmatrix} 1 & 0 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2 & 8 \end{bmatrix}
$$
which has eigenvalues
$$
\lambda = \frac{1}{2}\left(9 \pm \sqrt{65}\right )
$$
so $||A||_2 \approx 2.9208096$.
The actual induced 2-norm of a matrix can be derived using the Singular Value Decomposition (SVD) and is simply the largest singular value $\sigma_1$.
**Proof**:
Given that every Matrix $A\in\mathbb{C}^{m\times n}$ can be factored into its SVD (see notebook 10.1):
$$
A = U\Sigma V^*
$$
where $U\in\mathbb{C}^{m\times m}$ and $V\in\mathbb{C}^{n\times n}$ are unitary matrices with the property $U^*U=I$ and $V^*V=I$ (of their respective sizes) and $\Sigma$ is a real diagonal matrix of singular values $\sigma_1 \geq\sigma_2\geq...\sigma_n\geq 0$.
Then the 2-norm squared of a square matrix is
$$
||A||^2_2 = \sup_{\mathbf{x} \in \mathbb{C}^n ~ ||\mathbf{x}||_2 = 1} ||A \mathbf{x}||_2^2 = \mathbf{x}^TA^*A\mathbf{x}
$$
but $A^*A = V\Sigma^2V^*$ so
\begin{align}
||A \mathbf{x}||_2^2 &= \mathbf{x}^*V\Sigma^2V^*\mathbf{x} \\
&= \mathbf{y}^*\Sigma^2\mathbf{y} \quad\mathrm{where}\quad \mathbf{y}=V^*\mathbf{x}\\
&= \sum_{i=1}^n \sigma_i^2|y_i|^2\\
&\leq \sigma_1^2\sum_{i=1}^n |y_i|^2 = \sigma_i^2||\mathbf{y}||_2\\
\end{align}
but if $||\mathbf{x}||_2 = 1$ (i.e. is a unit vector), then so is $\mathbf{y}$ because unitary matrices don't change the length of vectors. So it follows that
$$
||A||_2 = \sigma_1
$$
```python
A = numpy.array([[1, 2], [0, 2]])
#calculate the SVD(A)
U, S, Vt = numpy.linalg.svd(A)
print('Singular_values = {}'.format(S))
print('||A||_2 = {}'.format(S.max()))
print('||A||_2 = {}'.format(numpy.linalg.norm(A, ord=2)))
# more fun facts about the SVD
#print(U.T.dot(U))
#print(Vt.T.dot(Vt))
#print(A - numpy.dot(U,numpy.dot(numpy.diag(S),Vt)))
```
#### Other useful norms of a Matrix
The 2-norm of a matrix can be expensive to compute, however there are other norms that are equivalent that can be directly computed from the components of $A$. For example
* The induced 1-norm is simply max of the 1-norm of the **columns** of $A$
$$
||A \mathbf{x}||_1 = || \sum^n_{j=1} x_j \mathbf{a}_j ||_1 \leq \sum^n_{j=1} |x_j| ||\mathbf{a}_j||_1 \leq \max_{1\leq j\leq n} ||\mathbf{a}_j||_1 ||\mathbf{x}||_1 = \max_{1\leq j\leq n} ||\mathbf{a}_j||_1
$$
* The induce $\infty$-norm is simply the max of the 1-norm of **rows** of $A$
$$
||A \mathbf{x}||_\infty = \max_{1 \leq i \leq m} | \mathbf{a}^*_i \mathbf{x} | \leq \max_{1 \leq i \leq m} ||\mathbf{a}^*_i||_1
$$
because the largest unit vector on the unit sphere in the $\infty$ norm is a vector of 1's.
#### Example:
$$
A = \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix}.
$$
$$ ||A||_1 = 4, \quad ||A||_\infty = 3$$
```python
# Calculate the 1-norm of A
normA_1 = numpy.max(numpy.sum(numpy.abs(A), axis=0))
print('||A||_1 = {}'.format(normA_1))
print('||A||_1 = {}'.format(numpy.linalg.norm(A, ord=1)))
# calculate the infinity norm of A
normA_inf = numpy.max(numpy.sum(numpy.abs(A), axis=1))
print('||A||_inf = {}'.format(normA_inf))
print('||A||_inf = {}'.format(numpy.linalg.norm(A, ord=numpy.inf)))
```
One of the most useful ways to think about matrix norms is as a transformation of a unit-ball to an ellipse. Depending on the norm in question, the norm will be some combination of the resulting ellipse.
```python
A = numpy.array([[1, 2], [0, 2]])
```
#### 2-Norm
```python
# ============
# 2-norm
# Unit-ball
fig = plt.figure()
fig.suptitle("2-Norm: $||A||_2 = ${:3.4f}".format(numpy.linalg.norm(A,ord=2)),fontsize=16)
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1, aspect='equal')
axes.add_artist(plt.Circle((0.0, 0.0), 1.0, edgecolor='r', facecolor='none'))
draw_unit_vectors(axes, numpy.eye(2))
axes.set_title("Unit Ball")
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
axes.grid(True)
# Image
# Compute some geometry
u, s, v = numpy.linalg.svd(A)
theta = numpy.empty(A.shape[0])
ellipse_axes = numpy.empty(A.shape)
theta[0] = numpy.arccos(u[0][0]) / numpy.linalg.norm(u[0], ord=2)
theta[1] = theta[0] - numpy.pi / 2.0
for i in range(theta.shape[0]):
ellipse_axes[0, i] = s[i] * numpy.cos(theta[i])
ellipse_axes[1, i] = s[i] * numpy.sin(theta[i])
axes = fig.add_subplot(1, 2, 2, aspect='equal')
axes.add_artist(patches.Ellipse((0.0, 0.0), 2 * s[0], 2 * s[1], theta[0] * 180.0 / numpy.pi,
edgecolor='r', facecolor='none'))
for i in range(A.shape[0]):
axes.arrow(0.0, 0.0, ellipse_axes[0, i] - head_length * numpy.cos(theta[i]),
ellipse_axes[1, i] - head_length * numpy.sin(theta[i]),
head_width=head_width, color='k')
draw_unit_vectors(axes, A, head_width=0.2)
axes.set_title("Images Under A")
axes.set_xlim((-s[0] + 0.1, s[0] + 0.1))
axes.set_ylim((-s[0] + 0.1, s[0] + 0.1))
axes.grid(True)
plt.show()
```
#### 1-Norm
```python
# Note: that this code is a bit fragile to angles that go beyond pi
# due to the use of arccos.
import matplotlib.patches as patches
def draw_unit_vectors(axes, A, head_width=0.1):
head_length = 1.5 * head_width
image_e = numpy.empty(A.shape)
angle = numpy.empty(A.shape[0])
image_e[:, 0] = numpy.dot(A, numpy.array((1.0, 0.0)))
image_e[:, 1] = numpy.dot(A, numpy.array((0.0, 1.0)))
for i in range(A.shape[0]):
angle[i] = numpy.arccos(image_e[0, i] / numpy.linalg.norm(image_e[:, i], ord=2))
axes.arrow(0.0, 0.0, image_e[0, i] - head_length * numpy.cos(angle[i]),
image_e[1, i] - head_length * numpy.sin(angle[i]),
head_width=head_width, color='b', alpha=0.5)
head_width = 0.2
head_length = 1.5 * head_width
# ============
# 1-norm
# Unit-ball
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.suptitle("1-Norm: $||A||_1 = {}$".format(numpy.linalg.norm(A,ord=1)), fontsize=16)
axes = fig.add_subplot(1, 2, 1, aspect='equal')
axes.plot((1.0, 0.0, -1.0, 0.0, 1.0), (0.0, 1.0, 0.0, -1.0, 0.0), 'r')
draw_unit_vectors(axes, numpy.eye(2))
axes.set_title("Unit Ball")
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
axes.grid(True)
# Image
axes = fig.add_subplot(1, 2, 2, aspect='equal')
axes.plot((1.0, 2.0, -1.0, -2.0, 1.0), (0.0, 2.0, 0.0, -2.0, 0.0), 'r')
draw_unit_vectors(axes, A, head_width=0.2)
axes.set_title("Images Under A")
axes.grid(True)
plt.show()
```
#### $\infty$-Norm
```python
# ============
# infty-norm
# Unit-ball
fig = plt.figure()
fig.suptitle("$\infty$-Norm: $||A||_\infty = {}$".format(numpy.linalg.norm(A,ord=numpy.inf)),fontsize=16)
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1, aspect='equal')
axes.plot((1.0, -1.0, -1.0, 1.0, 1.0), (1.0, 1.0, -1.0, -1.0, 1.0), 'r')
draw_unit_vectors(axes, numpy.eye(2))
axes.set_title("Unit Ball")
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
axes.grid(True)
# Image
# Geometry - Corners are A * ((1, 1), (1, -1), (-1, 1), (-1, -1))
# Symmetry implies we only need two. Here we just plot two
u = numpy.empty(A.shape)
u[:, 0] = numpy.dot(A, numpy.array((1.0, 1.0)))
u[:, 1] = numpy.dot(A, numpy.array((-1.0, 1.0)))
theta[0] = numpy.arccos(u[0, 0] / numpy.linalg.norm(u[:, 0], ord=2))
theta[1] = numpy.arccos(u[0, 1] / numpy.linalg.norm(u[:, 1], ord=2))
axes = fig.add_subplot(1, 2, 2, aspect='equal')
axes.plot((3, 1, -3, -1, 3), (2, 2, -2, -2, 2), 'r')
for i in range(A.shape[0]):
axes.arrow(0.0, 0.0, u[0, i] - head_length * numpy.cos(theta[i]),
u[1, i] - head_length * numpy.sin(theta[i]),
head_width=head_width, color='k')
draw_unit_vectors(axes, A, head_width=0.2)
axes.set_title("Images Under A")
axes.set_xlim((-4.1, 4.1))
axes.set_ylim((-3.1, 3.1))
axes.grid(True)
plt.show()
```
#### Cauchy-Schwarz and Hölder Inequalities
Computing matrix norms where $p \neq 1$ or $\infty$ is more difficult unfortunately. We have a couple of tools that can be useful however.
- **Cauchy-Schwarz Inequality**: For the special case where $p=q=2$, for any vectors $\mathbf{x}$ and $\mathbf{y}$
$$
|\mathbf{x}^*\mathbf{y}| \leq ||\mathbf{x}||_2 ||\mathbf{y}||_2
$$
- **Hölder's Inequality**: Turns out this holds in general if given a $p$ and $q$ that satisfy $1/p + 1/q = 1$ with $1 \leq p, q \leq \infty$
$$
|\mathbf{x}^*\mathbf{y}| \leq ||\mathbf{x}||_p ||\mathbf{y}||_q.
$$
**Note**: this is essentially what we used in the proof of the $\infty-$norm with $p=1$ and $q=\infty$
#### General Matrix Norms (induced and non-induced)
In general matrix-norms have the following properties whether they are induced from a vector-norm or not:
1. $||A|| \geq 0$ and $||A|| = 0$ only if $A = 0$
1. $||A + B|| \leq ||A|| + ||B||$ (Triangle Inequality)
1. $||c A|| = |c| ||A||$
The most widely used matrix norm not induced by a vector norm is the **Frobenius norm** defined by
$$
||A||_F = \left( \sum^m_{i=1} \sum^n_{j=1} |A_{ij}|^2 \right)^{1/2}.
$$
#### Invariance under unitary multiplication
One important property of the matrix 2-norm (and Frobenius norm) is that multiplication by a unitary matrix does not change the product (kind of like multiplication by 1). In general for any $A \in \mathbb{C}^{m\times n}$ and unitary matrix $Q \in \mathbb{C}^{m \times m}$ we have
\begin{align*}
||Q A||_2 &= ||A||_2 \\ ||Q A||_F &= ||A||_F.
\end{align*}
<sup>1</sup><span id="footnoteRegression"> http://www.utstat.toronto.edu/~brunner/books/LinearModelsInStatistics.pdf</span>
| 5398022bd9084116100bbca233ac4b37d4f1ad7f | 60,211 | ipynb | Jupyter Notebook | 10_LA_intro.ipynb | arkwave/intro-numerical-methods | e50f313b613d9f4aeb1ec6dd29a191bb771d092b | [
"CC-BY-4.0"
] | null | null | null | 10_LA_intro.ipynb | arkwave/intro-numerical-methods | e50f313b613d9f4aeb1ec6dd29a191bb771d092b | [
"CC-BY-4.0"
] | null | null | null | 10_LA_intro.ipynb | arkwave/intro-numerical-methods | e50f313b613d9f4aeb1ec6dd29a191bb771d092b | [
"CC-BY-4.0"
] | null | null | null | 33.92169 | 518 | 0.525336 | true | 13,197 | Qwen/Qwen-72B | 1. YES
2. YES | 0.715424 | 0.841826 | 0.602262 | __label__eng_Latn | 0.967854 | 0.237587 |
### Instructions
When running the notebook the first time, make sure to run all cells before making changes in the notebook. Hit Shift + Enter to run the selected cell or, in the top menu, click on: `Kernel` > `Restart Kernel and Run All Cells...` to rerun the whole notebook. If you make any changes in a cell, rerun that cell.
If you make any changes in a coding cell, rerun the notebook by `Run` > `Run Selected Cell and All Below`
```python
# Import dependencies
import sys
sys.path.append('python/')
import time
startTime = time.time() # Calculate time for running this notebook
import numpy as np
import matplotlib.pyplot as plt
import load_galaxies as lg # Load load_galaxies.py library
```
# Measured Data Plotting
Plotting radial velocity measurements is the first step of producing a rotation curve for a galaxy and an indication of how much mass the galaxy contains. Setting Newton's law of gravitation and the circular motion equation equal, the equation for circular velocity in terms of enclosed mass and radius can be derived:
\begin{equation}
v(r) = \sqrt{\frac{G M_{enc}(r)}{r}}
\end{equation}
>where:<br>
$G$ = gravitational constant<br>
$M_{enc}(r)$ = enclosed mass as a function of radius<br>
$r$ = radius or distance from the center of the galaxy
<br>
In the following activity, you will load radial velocity measurements of multiple galaxies from our Python library. The rotation curves are plotted on a single graph for comparison. <br>
Knowing the radial velocity of stars at different radii allows you to estimate the mass enclosed for a given radius. By measuring the brightness (luminosity) of stars and the amount of gas, you can approximate the mass of "visible" matter. Compare it with the actual mass calculated from the radial velocities to get an idea of how much mass is "missing". The result is a ratio (mass-to-light ratio or M/L) that has been useful to describe the amount of dark matter in galaxies. <br>
### Vocabulary
__Radial velocity__: the speed stars and gas are moving at different distances from the center of the galaxy<br>
__Rotation curve__: a plot of a galaxy's radial velocity vs the radius<br>
__NGC__: New General Catalogue of galaxies<br>
__UGC__: Uppsala General Catalogue of galaxies<br>
__kpc__: kiloparsec: 1 kpc = 3262 light years = 3.086e+19 meters = 1.917e+16 mile
### Load data of multiple galaxies
Load the radii, velocities, and errors in velocities of multiple galaxies from our Python library.
```python
# NGC 5533
r_NGC5533, v_NGC5533, v_err_NGC5533 = lg.NGC5533['m_radii'],lg.NGC5533['m_velocities'],lg.NGC5533['m_v_errors']
# NGC 891
r_NGC0891, v_NGC0891, v_err_NGC0891 = lg.NGC0891['m_radii'],lg.NGC0891['m_velocities'],lg.NGC0891['m_v_errors']
# NGC 7814
r_NGC7814, v_NGC7814, v_err_NGC7814 = lg.NGC7814['m_radii'],lg.NGC7814['m_velocities'],lg.NGC7814['m_v_errors']
# NGC 5005
r_NGC5005, v_NGC5005, v_err_NGC5005 = lg.NGC5005['m_radii'],lg.NGC5005['m_velocities'],lg.NGC5005['m_v_errors']
# NGC 3198
r_NGC3198, v_NGC3198, v_err_NGC3198 = lg.NGC3198['m_radii'],lg.NGC3198['m_velocities'],lg.NGC3198['m_v_errors']
# UGC 477
r_UGC477, v_UGC477, v_err_UGC477 = lg.UGC477['m_radii'],lg.UGC477['m_velocities'],lg.UGC477['m_v_errors']
# UGC 1281
r_UGC1281, v_UGC1281, v_err_UGC1281 = lg.UGC1281['m_radii'],lg.UGC1281['m_velocities'],lg.UGC1281['m_v_errors']
# UGC 1437
r_UGC1437, v_UGC1437, v_err_UGC1437 = lg.UGC1437['m_radii'],lg.UGC1437['m_velocities'],lg.UGC1437['m_v_errors']
# UGC 2953
r_UGC2953, v_UGC2953, v_err_UGC2953 = lg.UGC2953['m_radii'],lg.UGC2953['m_velocities'],lg.UGC2953['m_v_errors']
# UGC 4325
r_UGC4325, v_UGC4325, v_err_UGC4325 = lg.UGC4325['m_radii'],lg.UGC4325['m_velocities'],lg.UGC4325['m_v_errors']
# UGC 5253
r_UGC5253, v_UGC5253, v_err_UGC5253 = lg.UGC5253['m_radii'],lg.UGC5253['m_velocities'],lg.UGC5253['m_v_errors']
# UGC 6787
r_UGC6787, v_UGC6787, v_err_UGC6787 = lg.UGC6787['m_radii'],lg.UGC6787['m_velocities'],lg.UGC6787['m_v_errors']
# UGC 10075
r_UGC10075, v_UGC10075, v_err_UGC10075 = lg.UGC10075['m_radii'],lg.UGC10075['m_velocities'],lg.UGC10075['m_v_errors']
```
### Plot measured data with errorbars
Measured data points of 13 galaxies are plotted below.<br><br>
<div class="alert-info">Activity 1)</div>
>In the coding cell below, change the limits of the x-axis to zoom in and out of the graph. <br>
_Python help: change the limits of the x-axis by modifying the two numbers (left and right) of the line: `plt.xlim` then rerun the notebook or the cell._ <br><br>
<div class="alert-info">Activity 2)</div>
>Finding supermassive black holes: A high velocity at a radius close to zero (close to the center of the galaxy) indicates that there is a supermassive black hole present at the center of that galaxy, changing the velocities of the close-by stars. The reason the black hole does not have that much effect on the motion of stars at larger distances is because it acts as a point mass, which has negligible effect at large radii as the velocity drops off as $1 / \sqrt r$. <br>
Can you find the galaxies with a possible central supermassive black hole and hide the curves of the rest of the galaxies? <br>
_Python help: Turn off the display of all lines and go through them one by one. You can "turn off" the display of each galaxy by typing a `#` sign in front of the line `plt.errorbar`. This turns the line into a comment so that Python will ignore it._<br>
_Insight: In the `04_Plotting_Rotation_Curves.ipynb` notebook, you will be able to calculate the rotation curve of the central supermassive black hole._ <br><br>
<div class="alert-info">Activity 3)</div>
>What do you notice about the size of the error bars at radii close and far from the center? What might be the reason for this?
```python
# Define radius for plotting
r = np.linspace(0,100,100)
# Plot
plt.figure(figsize=(14,8)) # size of the plot
plt.title('Measured radial velocity of multiple galaxies', fontsize=14) # giving the plot a title
plt.xlabel('Radius (kpc)', fontsize=12) # labeling the x-axis
plt.ylabel('Velocity (km/s)', fontsize=12) # labeling the y-axis
plt.xlim(0,20) # limits of the x-axis (default from 0 to 20 kpc)
plt.ylim(0,420) # limits of the y-axis (default from 0 to 420 km/s)
# Plotting the measured data
plt.errorbar(r_NGC5533,v_NGC5533,yerr=v_err_NGC5533, label='NGC 5533', marker='o', markersize=6, linestyle='none', color='royalblue')
plt.errorbar(r_NGC0891,v_NGC0891,yerr=v_err_NGC0891, label='NGC 891', marker='o', markersize=6, linestyle='none', color='seagreen')
plt.errorbar(r_NGC7814,v_NGC7814,yerr=v_err_NGC7814, label='NGC 7814', marker='o', markersize=6, linestyle='none', color='m')
plt.errorbar(r_NGC5005,v_NGC5005,yerr=v_err_NGC5005, label='NGC 5005', marker='o', markersize=6, linestyle='none', color='red')
plt.errorbar(r_NGC3198,v_NGC3198,yerr=v_err_NGC3198, label='NGC 3198', marker='o', markersize=6, linestyle='none', color='gold')
plt.errorbar(r_UGC477,v_UGC477,yerr=v_err_UGC477, label='UGC 477', marker='o', markersize=6, linestyle='none', color='lightpink')
plt.errorbar(r_UGC1281,v_UGC1281,yerr=v_err_UGC1281, label='UGC 1281', marker='o', markersize=6, linestyle='none', color='aquamarine')
plt.errorbar(r_UGC1437,v_UGC1437,yerr=v_err_UGC1437, label='UGC 1437', marker='o', markersize=6, linestyle='none', color='peru')
plt.errorbar(r_UGC2953,v_UGC2953,yerr=v_err_UGC2953, label='UGC 2953', marker='o', markersize=6, linestyle='none', color='lightslategrey')
plt.errorbar(r_UGC4325,v_UGC4325,yerr=v_err_UGC4325, label='UGC 4325', marker='o', markersize=6, linestyle='none', color='darkorange')
plt.errorbar(r_UGC5253,v_UGC5253,yerr=v_err_UGC5253, label='UGC 5253', marker='o', markersize=6, linestyle='none', color='maroon')
plt.errorbar(r_UGC6787,v_UGC6787,yerr=v_err_UGC6787, label='UGC 6787', marker='o', markersize=6, linestyle='none', color='midnightblue')
plt.errorbar(r_UGC10075,v_UGC10075,yerr=v_err_UGC10075, label='UGC 10075', marker='o', markersize=6, linestyle='none', color='y')
plt.legend(bbox_to_anchor=(1,1), loc="upper left")
plt.show()
```
```python
#NBVAL_IGNORE_OUTPUT
#Because the timing won't be exactly the same each time.
# Time
executionTime = (time.time() - startTime)
ttt=executionTime/60
print(f'Execution time: {ttt:.2f} minutes')
```
Execution time: 0.06 minutes
#### References
>De Naray, Rachel Kuzio, Stacy S. McGaugh, W. J. G. De Blok, and A. Bosma. __"High-resolution optical velocity fields of 11 low surface brightness galaxies."__ The Astrophysical Journal Supplement Series 165, no. 2 (2006): 461. https://doi.org/10.1086/505345.<br><br>
>De Naray, Rachel Kuzio, Stacy S. McGaugh, and W. J. G. De Blok. __"Mass models for low surface brightness galaxies with high-resolution optical velocity fields."__ The Astrophysical Journal 676, no. 2 (2008): 920. https://doi.org/10.1086/527543.<br><br>
>Epinat, B., P. Amram, M. Marcelin, C. Balkowski, O. Daigle, O. Hernandez, L. Chemin, C. Carignan, J.-L. Gach, and P. Balard. __“GHASP: An Hα Kinematic Survey of Spiral and Irregular GALAXIES – Vi. New HΑ Data Cubes for 108 Galaxies.”__ Monthly Notices of the Royal Astronomical Society 388, no. 2 (July 19, 2008): 500–550. https://doi.org/10.1111/j.1365-2966.2008.13422.x. <br><br>
>Fraternali, F., R. Sancisi, and P. Kamphuis. __“A Tale of Two Galaxies: Light and Mass in NGC 891 and NGC 7814.”__ Astronomy & Astrophysics 531 (June 13, 2011). https://doi.org/10.1051/0004-6361/201116634.<br><br>
>Karukes, E. V., P. Salucci, and Gianfranco Gentile. __"The dark matter distribution in the spiral NGC 3198 out to 0.22 $R_{vir}$."__ _Astronomy & Astrophysics_ 578 (2015): A13. https://doi.org/10.1051/0004-6361/201425339. <br><br>
>Lelli, F., McGaugh, S. S., & Schombert, J. M. (2016). **SPARC: Mass models for 175 disk galaxies with Spitzer photometry and accurate rotation curves.** _The Astronomical Journal_, 152(6), 157. https://doi.org/10.3847/0004-6256/152/6/157 <br><br>
>Noordermeer, Edo. __"The rotation curves of flattened Sérsic bulges."__ _Monthly Notices of the Royal Astronomical Society_ 385, no. 3 (2007): 1359-1364. https://doi.org/10.1111/j.1365-2966.2008.12837.x<br><br>
>Richards, Emily E., L. van Zee, K. L. Barnes, S. Staudaher, D. A. Dale, T. T. Braun, D. C. Wavle, et al. __“Baryonic Distributions in the Dark Matter Halo of NGC 5005.”__ Monthly Notices of the Royal Astronomical Society 449, no. 4 (June 1, 2015): 3981–96. https://doi.org/10.1093/mnras/stv568.
***
| c73d3923c3a4c08ffd7d89c9ac0daf624fdf4447 | 87,609 | ipynb | Jupyter Notebook | binder/03_Measured_Data_Plotting.ipynb | villano-lab/galactic-spin-W1 | d95c706ccbc347f9bc61bb7c96b1314460bc2d0f | [
"CC-BY-4.0"
] | 1 | 2022-03-22T04:00:17.000Z | 2022-03-22T04:00:17.000Z | binder/03_Measured_Data_Plotting.ipynb | villano-lab/galactic-spin-W1 | d95c706ccbc347f9bc61bb7c96b1314460bc2d0f | [
"CC-BY-4.0"
] | 14 | 2021-11-05T18:17:19.000Z | 2022-02-19T20:35:05.000Z | binder/03_Measured_Data_Plotting.ipynb | villano-lab/galactic-spin-W1 | d95c706ccbc347f9bc61bb7c96b1314460bc2d0f | [
"CC-BY-4.0"
] | null | null | null | 286.303922 | 73,456 | 0.912406 | true | 3,378 | Qwen/Qwen-72B | 1. YES
2. YES | 0.930458 | 0.79053 | 0.735555 | __label__eng_Latn | 0.742484 | 0.547274 |
# Tarea Nro. 3 - LinAlg + Sympy
- Nombre y apellido: Ivo Andrés Astudillo
- Fecha: 26 de Noviembre de 2020
### Producto punto
```python
#from IPython.display import Image
#Image(filename='img/Tabla9.4.png')
```
5. La capacidad calorífica C<sub>p</sub> de un gas se puede modelar con la ecuación empírica
\begin{equation}
C_p = a + bT + cT^2+dT^3
\end{equation}
donde a, b, c y d son constantes empíricas y T es la temperatura en grados Kelvin. El cambio en entalpía (una medida de energía) conforme el gas se caliente de T 2 es la integral de esta ecuación con respecto a T:
\begin{equation}
\bigtriangleup h = \int_{T_{1}}^{T_{2}} C_{p}dT \,dx
\end{equation}
| 4c3fb3a98502f98e20946a2df0c456877d99b4d1 | 1,905 | ipynb | Jupyter Notebook | Tareas/PrimerBimestre/TareaNro3 LinalgSympy/prueba.ipynb | iaastudillo/PythonIntermedio | 2b1393761445e7ee521d82c1dbb33826a9dd475e | [
"MIT"
] | null | null | null | Tareas/PrimerBimestre/TareaNro3 LinalgSympy/prueba.ipynb | iaastudillo/PythonIntermedio | 2b1393761445e7ee521d82c1dbb33826a9dd475e | [
"MIT"
] | null | null | null | Tareas/PrimerBimestre/TareaNro3 LinalgSympy/prueba.ipynb | iaastudillo/PythonIntermedio | 2b1393761445e7ee521d82c1dbb33826a9dd475e | [
"MIT"
] | null | null | null | 22.411765 | 223 | 0.545407 | true | 232 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.795658 | 0.689864 | __label__spa_Latn | 0.931601 | 0.441117 |
## подготовка:
```python
import numpy as np
from numpy.linalg import *
rg = matrix_rank
from IPython.display import display, Math, Latex, Markdown
from sympy import *
pr = lambda s: display(Markdown('$'+str(latex(s))+'$'))
def pmatrix(a, intro='',ending='',row=False):
if len(a.shape) > 2:
raise ValueError('pmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{pmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{pmatrix}']
if row:
return(intro+'\n'.join(rv)+ending)
else:
display(Latex('$$'+intro+'\n'.join(rv)+ending+'$$'))
```
# Задача 7
## 1) доно:
```python
C = np.array([[1,2],
[0,1]])
pmatrix(C, intro=r'C_{2\times 2}=')
D = np.array([[3,1],
[1,0]])
pmatrix(D, intro=r'D_{2\times 2}=')
B = np.array([[5,1],
[5,2]])
pmatrix(B, intro=r'B_{2\times 2}=')
```
$$C_{2\times 2}=\begin{pmatrix}
1 & 2\\
0 & 1\\
\end{pmatrix}$$
$$D_{2\times 2}=\begin{pmatrix}
3 & 1\\
1 & 0\\
\end{pmatrix}$$
$$B_{2\times 2}=\begin{pmatrix}
5 & 1\\
5 & 2\\
\end{pmatrix}$$
```python
A = np.array([[5,6],
[3,4]])
pmatrix(rg(A), intro=r'rg(A)=')
```
$$rg(A)=\begin{pmatrix}
2\\
\end{pmatrix}$$
```python
pmatrix(inv(C), intro=r'C^{-1}=')
pmatrix(B.T, intro=r'C^{T}=')
pmatrix(B.dot(C), intro=r'BC=')
9
pmatrix(rg(B), intro=r'rg(C)=')
pmatrix(det(B), intro=r'det(C)=')
```
$$C^{-1}=\begin{pmatrix}
1. & -2.\\
0. & 1.\\
\end{pmatrix}$$
$$C^{T}=\begin{pmatrix}
5 & 5\\
1 & 2\\
\end{pmatrix}$$
$$BC=\begin{pmatrix}
5 & 11\\
5 & 12\\
\end{pmatrix}$$
$$rg(C)=\begin{pmatrix}
2\\
\end{pmatrix}$$
$$det(C)=\begin{pmatrix}
5.0\\
\end{pmatrix}$$
```python
A = np.array([[2,6],
[1,3]])
#pmatrix(rg(B), intro=r'rg(B)=')
pmatrix(rg(A), intro=r'rg(A)=')
#pmatrix(rg(A.dot(B)), intro=r'rg(AB)=')
```
$$rg(A)=\begin{pmatrix}
1\\
\end{pmatrix}$$
# 3 пункт
## примерчик
```python
a1 = Symbol('a_{12}')
b1 = Symbol('b_{11}')
c1 = Symbol('c_{22}')
d1 = Symbol('d_{21}')
X =np.array([[a1,b1],
[c1,d1]])
B = np.array([[5,1],
[5,2]])
C1 = np.array([[1,1],
[1,2]])
D1 = np.array([[2,1],
[1,0]])
C2 = np.array([[1,-1],
[0,1]])
D2 = np.array([[1,1],
[0,1]])
pmatrix(B.reshape((4, 1)), intro="X=")
```
$$X=\begin{pmatrix}
5\\
1\\
5\\
2\\
\end{pmatrix}$$
```python
pmatrix( (C1.dot(X)).dot(D1))
A = (C1.dot(X)).dot(D1) + (C2.dot(X)).dot(D2)
pmatrix(A)
F = np.array([[3,1,1,1],
[2,1,0,-1],
[2,1,5,2],
[1,0,3,1]])
pmatrix(F, ending=pmatrix(X.reshape((4, 1)),row=True)+"="+pmatrix(B.reshape((4, 1)),row=True))
pmatrix(rg(F), intro=r'rg(F)=')
print("Зничит есть нормальное решение!)")
```
$$\begin{pmatrix}
2*a1 & + & b1 & + & 2*c1 & + & d1 & a1 & + & c1\\
2*a1 & + & b1 & + & 4*c1 & + & 2*d1 & a1 & + & 2*c1\\
\end{pmatrix}$$
$$\begin{pmatrix}
3*a1 & + & b1 & + & c1 & + & d1 & 2*a1 & + & b1 & - & d1\\
2*a1 & + & b1 & + & 5*c1 & + & 2*d1 & a1 & + & 3*c1 & + & d1\\
\end{pmatrix}$$
$$\begin{pmatrix}
3 & 1 & 1 & 1\\
2 & 1 & 0 & -1\\
2 & 1 & 5 & 2\\
1 & 0 & 3 & 1\\
\end{pmatrix}\begin{pmatrix}
a1\\
b1\\
c1\\
d1\\
\end{pmatrix}=\begin{pmatrix}
5\\
1\\
5\\
2\\
\end{pmatrix}$$
$$rg(F)=\begin{pmatrix}
4\\
\end{pmatrix}$$
Зничит есть нормальное решение!)
# Решаем этоЁ!!!
```python
from sympy import Matrix, solve_linear_system
from sympy.abc import a,b,c,d
```
Examlpe:
x + 4 y == 2
-2 x + y == 14
>from sympy import Matrix, solve_linear_system
>from sympy.abc import x, y
>system = Matrix(( (1, 4, 2), (-2, 1, 14)))
>solve_linear_system(system, x, y)
```python
system = Matrix(( (3,1,1,1,5), (2,1,0,-1,1), (2,1,5,2,5),(1,0,3,1,2) ))
x = solve_linear_system(system, a,b,c,d)
X =np.array([[x[a],x[b]],[x[c],x[d]] ])
```
```python
pmatrix(X,intro="X=")
```
$$X=\begin{pmatrix}
10/11 & 9/11\\
-2/11 & 18/11\\
\end{pmatrix}$$
```python
```
```python
x = Symbol('x')
y = Symbol('y')
pr(integrate(sqrt(4*x-x**2), x))
```
$\int \sqrt{- x^{2} + 4 x}\, dx$
```python
```
| 129c5dc75e964d79c58737ff058df64fe79e03b4 | 13,031 | ipynb | Jupyter Notebook | 7ex/math.ipynb | TeamProgramming/ITYM | 1169db0238a74a2b2720b9539cb3bbcd16d0b380 | [
"MIT"
] | null | null | null | 7ex/math.ipynb | TeamProgramming/ITYM | 1169db0238a74a2b2720b9539cb3bbcd16d0b380 | [
"MIT"
] | null | null | null | 7ex/math.ipynb | TeamProgramming/ITYM | 1169db0238a74a2b2720b9539cb3bbcd16d0b380 | [
"MIT"
] | null | null | null | 21.017742 | 105 | 0.411097 | true | 1,816 | Qwen/Qwen-72B | 1. YES
2. YES | 0.849971 | 0.712232 | 0.605377 | __label__ast_Latn | 0.069041 | 0.244824 |
# Multiple Regression Analysis: OLS Asymptotics
So other than the finite sample properties in the previous chapters, we also need to know the ***asymptotic properties*** or ***large sample properties*** of estimators and test statistics. And fortunately, under the assumptions we have made, OLS has satisfactory large sample properties.
$Review\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\plim}{plim}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\void}{\left.\right.}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\EE}{\mathbb{E}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\AcA}{\mathscr{A}}
\newcommand{\FcF}{\mathscr{F}}
\newcommand{\Exp}{\mathrm{E}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathrm{N} \left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}$
1. Expected values unbiasedness: $\text{MLR}.1 \sim \text{MLR}.4$
2. Variance formulas: $\text{MLR}.1 \sim \text{MLR}.5$
3. Gauss-Markov Theorem: $\text{MLR}.1 \sim \text{MLR}.5$
4. Exact sampling distributions/tests: $\text{MLR}.1 \sim \text{MLR}.6$
## Consistency
In practise, time series data regressions will fail the unbiasedness, only **consistency** remains.
$Def$
>Let $W_n$ be an estimator of $\theta$ based on a sample $Y_1,Y_2,\dots,Y_n$ of size $n$. Then, $W_n$ is a consistent estimator of $u$ if for every $\varepsilon > 0$,
>
>$$P\CB{\abs{W_n - \theta} > \varepsilon} \to 0 \text{ as } n \to \infty $$
>
>Or alternatively, for arbitrary $\epsilon > 0$ and $n \to \infty$, we have $P\CB{\abs{W_n - \theta}< \epsilon} \to 1$
We can also write this as $\text{plim}\P{W_n} = \theta$
$Remark$
>In our real life we don't have infinite samples thus this property involves a thought experiment about what would happen as the sample size gets *large*.
$Theorem.1$
Under assumptions $MLR.1$ through $MLR.4$, the OLS estimator $\hat\beta_j$ is consistent for $\beta_j$, for all $j = 0,1,\dots,k$ meaning that $\plim\P{\hat\beta_j} = \beta_j$.
$Proof$
>$$
\hat\beta_1 = \ffrac{\d{\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1} y_i}} {\d{\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1}^2}}
\\[0.6em]$$
>
><center>since $y_i = \beta_0 + \beta_1 x_1 + u_i$</center>
>$$\begin{align}
\hat\beta_1&= \beta_1 + \ffrac{\d{\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1} u_i}} {\d{\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1}^2}}\\
&= \beta_1 + \ffrac{\d{\ffrac{1} {n}\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1} u_i}} {\d{\ffrac{1} {n}\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1}^2}} \\[0.5em]
\end{align}\\[0.6em]$$
>
><center> by **law of large number**</center>
>
>$$\begin{align}
\plim\P{\hat\beta_1}&= \beta_1 + \ffrac{\Cov{x_1,u}} {\Var{x_1}}\\
&= \beta_1 + \ffrac{0} {\Var{x_1}} = \beta_1
\end{align}$$
***
$Assumption.4'$ $MLR.4'$ **Zero Mean and Zero Correlation**
$\Exp\SB{u} = 0$ and $\Cov{x_j, u} = 0$, for $j = 1,2,\dots,k$
$Remark$
>The original one is the assumption of **Zero conditional mean** that is $\Exp\SB{u \mid x_1,x_2,\dots,x_k} = 0$. $MLR.4$ is stronger than $MLR.4'$.
>
>Also $MLR.4'$ cannot guarantee the unbiasdness but consistency only.
### Deriving the inconsistency in OLS
> If the error $u$ is correlated with any of the independent variables $x_j$, then OLS is biased and inconsistent.
In the simple regression case, the ***inconsistency*** in $\hat\beta_1$ (or loosely called the ***asymptotic bias***) is $\plim\P{\hat\beta_1} - \hat\beta_1 = \ffrac{\Cov{x_1,u}} {\Var{x_1}}$. And it's positive if $x_1$ and $u$ are positively correlated, negative otherwise.
And this formula will help us find the asymptotic analog of the omitted variable bias (ref **Chap_03.3**). Suppose the true model is $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + v$ and the OLS estimators with **first four Gauss-Markov assumptions** is $\hat\beta_0$, $\hat\beta_1$, and $\hat\beta_2$, and obviously these three are **consistent**. Then omit $x_2$ and do the simple regression of $y$ on $x_1$ with model $y = \beta_0 + \beta_1 x_1 + u$, then we have $u = \beta_2 x_2 + v$. Let $\tilde\beta_1$ denote the simple regression slope estimator. Then
$$\plim \tilde\beta_1 = \beta_1 + \beta_2 \ffrac{\Cov{x_1,x_2}} {\Var{x_1}} = \beta_1 + \beta_2 \delta_1$$
If $x_1$ and $x_2$ are *uncorrelated* (in the population), then $\delta_1 = 0$, and $\tilde\beta_1$ is a consistent estimator of $\beta_1$ (although not necessarily unbiased). However, if $x_2$ has a positive partial effect on $y$, so that $\beta_2 > 0$ and $\Cov{x_1,x_2}>0$, $\delta_1> 0$. Then the inconsistency in $\tilde\beta_1$ is positive.
## Asymptotic Normality and Large Sample Inference
>$\text{MLR}.6 \iff $ the distribution of $y$ given $x_1,x_2,\dots,x_k$, which is just $u$ then, is normal. Normality has nothing to do with the unbiasedness however to do statistical inference, we need that. And fortunately, by **central limit theorem**, even though the $y_i$ are not from a normal distribution, the OLS estimator still satisfy ***asymptotic normality***, which means they are approximately normally distributed in
large enough sample sizes.
$Theorem.2$ Asymptotic Normality of OLS
Under the Gauss-Markov Assumptions, $\text{MLR}.1$ through $\text{MLR}.5$,
- $\sqrt{n}\P{\hat\beta_j - \beta_j} \newcommand{\asim}{\overset{\text{a}}{\sim}}\asim \N{0, \ffrac{\sigma^2} {a_j^2}}$, where $\ffrac{\sigma^2} {a_j^2}$ is the ***asymptotic variance*** of $\sqrt{n}\P{\hat\beta_j - \beta_j} $; and for the slope coefficients, $a_j^2 = \plim \P{\ffrac{1} {n} \sum_{i=1}^{n} \hat r_{ij}^{2}}$ where the $r_{ij}$ are the residuals from regressing $x_j$ on the other independent variables. We say that $\hat\beta_j$ is *asymptotically normally distributed*;
- $\hat\sigma^2$ is a consistent estimator of $\sigma^2 = \Var{u}$;
- For each $j$, $\ffrac{\hat\beta_j - \beta_j} {\text{sd}\P{\hat\beta_j}}\asim \N{0,1}$; $\ffrac{\hat\beta_j - \beta_j} {\text{se}\P{\hat\beta_j}}\asim \N{0,1}$ where $\text{se}\P{\hat\beta_j} = \sqrt{\widehat{\Var{\hat\beta_j}}} = \sqrt{\ffrac{\hat\sigma^2} {\text{SST}_j \P{1-R^2_j}}}$ is the usual OLS standard error.
$Remark$
>Here we dropped the assumption $\text{MLR}.6$ and the only one restriction remained is that the error has finite variance.
>
>Also note that the population distribution of the error term, $u$, is immutable and has nothing to do with the sample size. Thie theorem only says that regardless of the population distribution of $u$, the OLS estimators, when properly standardized, have approximate standard normal distributions.
>
>$\text{sd}\P{\hat\beta_j}$ depends on $\sigma$ and is not observable, while $\text{se}\P{\hat\beta_j}$ depends on $\hat\sigma$ and can be computed. In the previous chapter we've already seen that: under **CLM**, $\text{MLR}.1$ through $\text{MLR}.6$, we have $\ffrac{\hat\beta_j - \beta_j} {\text{sd}\P{\hat\beta_j}}\sim \N{0,1}$ and $\ffrac{\hat\beta_j - \beta_j} {\text{se}\P{\hat\beta_j}}\sim t_{n-k-1} = t_{df}$.
>
>In large samples, the $t$-distribution is close to the $\N{0,1}$ distribution and thus $t$ test are valid in large samples *without* $\text{MLR}.6$. But still we need $\text{MLR}.1$ to $\text{MLR}.5$.
Now from $\hat\sigma^2$ is a consistent estimator of $\sigma^2$, let's have a closer look of ${\widehat{\Var{\hat\beta_j}}} = {\ffrac{\hat\sigma^2} {\text{SST}_j \P{1-R^2_j}}}$, where $\text{SST}_j$ is the total sum of squares of $x_j$ in the sample, $R^2_j$ is the $R$-squared from regressing $x_j$ on all of the other independent variables. As the **sample size** *grows*, $\hat\sigma^2$ converges in probability to the constant $\sigma^2$. Further, $R^2_j$ approaches a number strictly between $0$ and $1$. Then about the rate, the sample variance of $x_j$ is $\ffrac{\text{SST}_j} {n}$, so that it converges to $\Var{x_j}$ as the sample size grows, meaning that we have: $\text{SST}_j \approx n\sigma_j^2$, where $\sigma_j^2$ is the population variance of $x_j$. Combining all these facts:
$\bspace \widehat{\Var{\hat\beta_j}}$ shrinks to zero at the rate of $1/n$, $\text{se}\P{\hat\beta_j}$ also shrinks to zero at the rate of $\sqrt{1/n}$ . And the larger sample, the better.
When $u$ is not normally distributed, $\sqrt{\widehat{\Var{\hat\beta_j}}} = \sqrt{\ffrac{\hat\sigma^2} {\text{SST}_j \P{1-R^2_j}}}$ is called the **asymptotic standard error** and $t$ statistics are called **asymptotic *$\textbf{t}$* statistics**. We also have **asymptotic confidence interval**.
### Other Large Sample Tests: The Lagrange Multiplier Statistic
The ***Lagrange multiplier (LM) statistic***. We first consider the model: $y = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + u$. The null hypothesis is $H_0:\beta_{k-q+1} = \beta_{k-q+2} = \cdots = \beta_k = 0$, the last $q$ parameters, putting $q$ exclusion restrictions on the model. The $LM$ statistic requires estimation of the restricted model only. Thus, assume that we have run the regression: $y = \tilde\beta_0 + \tilde\beta_1 x_1 + \cdots + \tilde\beta_{k-q} x_{k-q} + \tilde u$, where $\tilde\void$ indicates that the estimates are from the restricted model.
However it turns out that to get a usable test statistic, we must include *all* of the independent variables in the regression so that we run the regression of $\tilde u$ on $x_1, x_2,\dots,x_k$, that we call an ***auxiliary regression***, a regression that is used to compute a test statistic but whose coefficients are not of direct interest.
Then under the null hypothesis, the sample size $n$, multiplied by the usual $R$-squared from the auxiliary regression is distributed asymptotically as a $\chi^2$ $r.v.$ with $q$ degrees of freedom. Here's the overall procedure for testing the joint significance of a set of $q$ independent variables using this method.
***
<center>Lagrange Multiplier Statistic for $q$ exclusion restrictions</center>
1. Regress $y$ on the *restricted set* of independent variables, $x_1,\dots, x_{k-q}$, and save the residuals, $\tilde u$
2. Regress $\tilde u$ on *all* of the independent variables and obtain the $R$-squared, $R^2_u$. Just to distinguich from regress $y$ on them
3. Compute **Lagrange multiplier statistic**: $LM = nR_u^2$
4. Compare $LM$ to the appropriate critical value $c$, in a $\chi_q^2$ distribution; if $LM > c$ then the null hypothesis is *rejected*. Even better, obtain the $p$-value as the probability that a $\chi_q^2$ $r.v.$ exceeds the value of the test statistic. If the $p$-value is less than the desired significance level, then $H_0$ is rejected. If not, we fail to reject $H_0$. The rejection rule is essentially the same as for $F$ testing.
## Asymptotic Efficiency of OLS
In general, the $k$ regressor case, the class of consistent estimators is obtained by generalizing the OLS first order conditions:
$$\sum_{i=1}^n g_j\P{\mathbf{x}_i} \P{y_i - \tilde\beta_0 - \tilde\beta_1 x_{i1} - \cdots - \tilde\beta_k x_{ik}} = 0, \bspace j = 0,1,\dots,k$$
where $g_j\P{\mathbf{x}_i}$ denotes any function of all explanatory variables for observation $i$. And obviously $g_0\P{\mathbf{x}_i} = 1$ and $g_j\P{\mathbf{x}_j} = x_{ij}$ for $j=1,2,\dots,k$ are the conditions to obtain the OLS estimators.
Here's the theorem:
$Theorem.3$ Asymptotic Efficiency of OLS
Under the Gauss-Markov assumptions, let $\tilde\beta_j$ denote estimators that solve equations of the equation:
$$\sum_{i=1}^n g_j\P{\mathbf{x}_i} \P{y_i - \tilde\beta_0 - \tilde\beta_1 x_{i1} - \cdots - \tilde\beta_k x_{ik}} = 0, \bspace j = 0,1,\dots,k$$
and let $\hat\beta_j\newcommand{\Avar}[2][\,\!]{\mathrm{Avar}_{#1}\left[#2\right]}$ denote the OLS estimators. Then for $j=0,1,\dots,k$, the OLS estimators have the smallest asymptotic variances: $\Avar{\sqrt{n} \P{\hat\beta_j - \beta_j}} \leq \Avar{\sqrt{n} \P{\tilde\beta_j - \beta_j}}$
***
| 5728216be98958b6177646f60fec3e0c476338ab | 16,062 | ipynb | Jupyter Notebook | FinMath/Econometrics/Chap_05.ipynb | XavierOwen/Notes | d262a9103b29ee043aa198b475654aabd7a2818d | [
"MIT"
] | 2 | 2018-11-27T10:31:08.000Z | 2019-01-20T03:11:58.000Z | FinMath/Econometrics/Chap_05.ipynb | XavierOwen/Notes | d262a9103b29ee043aa198b475654aabd7a2818d | [
"MIT"
] | null | null | null | FinMath/Econometrics/Chap_05.ipynb | XavierOwen/Notes | d262a9103b29ee043aa198b475654aabd7a2818d | [
"MIT"
] | 1 | 2020-07-14T19:57:23.000Z | 2020-07-14T19:57:23.000Z | 59.932836 | 825 | 0.593201 | true | 4,245 | Qwen/Qwen-72B | 1. YES
2. YES | 0.63341 | 0.808067 | 0.511838 | __label__eng_Latn | 0.949282 | 0.0275 |
# Calculus
Contains an overview of calculus.
## Common derivatives
The following derivatives must simply be memorised:
$$
\begin{align}
{\Large \text{Very common}}\\
\\\
\frac{d}{dx} [x^n] =\ & n \cdot x^{n-1}\\
\\\
\frac{d}{dx} [e^x] =\ & e^x\\
\\\
\frac{d}{dx} [sin\ x] =\ & cos\ x\\
\\\
\frac{d}{dx} [cos\ x] =\ & \text{-} sin\ x\\
\\\
\frac{d}{dx} [ln\ x] =\ & \frac{1}{x}\\
\\\
\\\
\\\
{\Large \text{Hyperbolic}}\\
\\\
\frac{d}{dx} [sinh\ x] =\ & cosh\ x\\
\\\
\frac{d}{dx} [cosh\ x] =\ & sinh\ x\\
\\\
\frac{d}{dx} [tanh\ x] =\ & \frac{1}{cosh^2\ x}\\
\\\
\end{align}
$$
## Differentiation rules
### Product rule
$$
\frac{d}{dx}(f \cdot g) = \frac{d}{dx}(f) \cdot g + f \cdot \frac{d}{dx}(g) \equiv f' g + f g'
$$
### Quotient rule
$$
\frac{d}{dx} \left ( \frac{f}{g} \right ) = \frac{f' g - f g'}{g^2}
$$
### Chain rule
$$
\frac{dy}{dx} = \frac{df}{dg} \frac{dg}{dx} = f'(g(x))\ g'(x)
$$
### Reciprocal rule
$$
\frac{dx}{dy} = \frac{1}{dy/dx}
$$
## Implicit differentiation
Sometimes not possible to solve for $y$ and then differentiate.
We can still do differentiation in this case by simply differentiating w.r.t. $x$ on all terms and then solve for $\frac{dy}{dx}$.
## Leibniz's formula
Gives _n_th derivative of a product of functions.
Looks like binomial expansion $(p+q)^n$:
$$
\begin{align}
\frac{d^n(f \cdot g)}{dx^n} &= \sum_{m=0}^{n} \begin{pmatrix} n \\ m \end{pmatrix}\ f^{(n - m)}\ g^{(m)}\\
\\\
&= f^{(n)} \cdot g^{(0)} + n f^{(n - 1)} \cdot g^{(1)} + \frac{n(n - 1)}{2!} f^{(n - 2)} \cdot g^{(2)} + \ldots + f^{(0)} \cdot g^{(n)}
\end{align}
$$
| 4fa26930190798de64f9c2ee84b92ca6fe8ebdab | 2,984 | ipynb | Jupyter Notebook | docs/mathematics/calculus.ipynb | JeppeKlitgaard/jepedia | c9af119a78b916bd01eb347a585ff6a6a0ca1782 | [
"MIT"
] | null | null | null | docs/mathematics/calculus.ipynb | JeppeKlitgaard/jepedia | c9af119a78b916bd01eb347a585ff6a6a0ca1782 | [
"MIT"
] | null | null | null | docs/mathematics/calculus.ipynb | JeppeKlitgaard/jepedia | c9af119a78b916bd01eb347a585ff6a6a0ca1782 | [
"MIT"
] | null | null | null | 26.642857 | 178 | 0.40315 | true | 670 | Qwen/Qwen-72B | 1. YES
2. YES | 0.953966 | 0.853913 | 0.814604 | __label__eng_Latn | 0.43476 | 0.73093 |
Lea cuidadosamente las siguientes **indicaciones** antes de comenzar el examen de prueba:
- Para resolver el examen edite este mismo archivo y renómbrelo de la siguiente manera: *Examen1_ApellidoNombre*, donde *ApellidoNombre* corresponde a su apellido paterno con la inicial en mayúscula, seguido de su primer nombre con la inicial en mayúscula **sin acentos**.
- Resuelva los puntos en el espacio provisto. Si requiere agregar más celdas para código o escritura, hágalo.
- Recuerde que también se está evaluando su capacidad de interpretar los resultados. Escriba sus interpretaciones/conclusiones en celdas utilizando *Markdown*.
- El formato de presentación de todo el examen en general debe ser adecuado. Use tamaños de letra, colores, etiquetas, etcétera.
## Optimización de funciones escalares usando `sympy`
**Enunciado.** Se desea fabricar un caja sin tapa en forma de paralelepípedo con base cuadrada y área lateral de $432$ unidades cuadradas. ¿Qué dimensiones debe tener la caja de máximo volumen?
*Note que las dimensiones son el lado de la base cuadrada $l$ y la altura de la caja $h$*.
- Encuentre el volumen de la caja (paralelepípedo) en función del lado de la base cuadrada $l$, $V(l)$. También provea el dominio de la función para que tenga sentido.
Realice este punto usando fórmulas en LaTeX, en celdas Markdown.
- Usando `sympy` maximice la función $V(l)$ en su dominio.
- ¿Cuáles son las dimensiones $l$ y $h$ de la caja de volumen máximo?
## Programación lineal
**Enunciado.** Una fábrica de carrocerías de automóviles y camiones tiene dos sedes. En la sede 1, para hacer la carrocería de un camión se invierten siete (7) días-operario, mientras que para hacer la carrocería de un coche se precisan dos (2) días-operario. En la sede 2 se invierten tres (3) días operario tanto en carrocerías de camión como de coche. Por limitaciones de mano de obra y maquinaria, la sede 1 dispone de 300 días-operario, y la sede 2 de 270 días-operario. Si las ganancias que se obtienen por cada carrocería de camión son de 6 millones de pesos y por cada carrocería de coche 2 millones de pesos, ¿Cuántas unidades de carrocerías de coche y camión se deben producir en cada sede para maximizar las ganancias?
La siguiente tabla resume toda la información.
```python
import pandas as pd
```
```python
df = pd.DataFrame(columns=['Camion', 'Coche', 'Disponible'], index = ['Sede1_dias-operario', 'Sede2_dias-operario', 'Ganancia por unidad'])
df.loc['Sede1_dias-operario', :] = [7, 2, 300]
df.loc['Sede2_dias-operario', :] = [3, 3, 270]
df.loc['Ganancia por unidad', :] = [6, 2, None]
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Camion</th>
<th>Coche</th>
<th>Disponible</th>
</tr>
</thead>
<tbody>
<tr>
<th>Sede1_dias-operario</th>
<td>7</td>
<td>2</td>
<td>300</td>
</tr>
<tr>
<th>Sede2_dias-operario</th>
<td>3</td>
<td>3</td>
<td>270</td>
</tr>
<tr>
<th>Ganancia por unidad</th>
<td>6</td>
<td>2</td>
<td>None</td>
</tr>
</tbody>
</table>
</div>
- Defina las variables, escriba la función a minimizar junto con las restricciones, explicando detalladamente cada paso (usando fórmulas en LaTeX, en celdas Markdown).
- Escriba el problema en la forma
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{c}^T\boldsymbol{x} \\
\text{s. a. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
proporcionando $\boldsymbol{c}$, $\boldsymbol{A}$ y $\boldsymbol{b}$ ($\boldsymbol{A}_{eq}$ y $\boldsymbol{b}_{eq}$, de ser necesario) como arreglos de `NumPy` (no es necesario escribir en LaTeX el problema en la forma indicada, solo proporcionar las matrices como arreglos de `NumPy`).
Resuelva el problema utilizando la función `linprog` del módulo `optimize` de la librería `scipy`. ¿Cuántas unidades de carrocerías de coche y camión se deben producir en cada sede para maximizar las ganancias?
## Ajuste de curvas
**Enunciado**. El archivo `forest_land_data.csv` contiene información histórica anual de porcentajes de área forestal de México y Colombia desde el año 1990 hasta el año 2014. La primer columna corresponde a los años, la segunda y tercer columna corresponded al porcentaje de área forestal de México y Colombia, respectivamente.
Tomado de: https://data.worldbank.org/indicator/AG.LND.FRST.ZS?locations=MX&view=chart.
- Importar los datos en un DataFrame de pandas.
- Usando los años como variable independiente $x$ y el porcentaje de área forestal de México como variable dependiente $y$, ajustar polinomios de grado 1 hasta grado 3.
Mostrar en un solo gráfico los datos de población contra los años, y los polinomios ajustados.
Graficar el error cuadrático acumulado contra el número de términos. ¿Cuál es el polinomio que mejor se ajusta?
- Con el polinomio que eligió en el punto anterior, estime en qué año el porcentaje de área forestal disminuirá al 30%.
Concluya.
## Clasificador binario
**Enunciado.** Se tienen datos de temperatura (grados Celsius) y presion (kPa) de cien muestras de agua. Además, para cada una de ellas se tiene una clasificación si su estado es líquido (1) o vapor (0). Los datos son los siguientes:
```python
import numpy as np
x1 = 40 + 60*np.random.random((100,))
x2 = 0.6 + 100*np.random.random((100,))
X = np.array([x1, x2]).T
Y = (X[:,1] > 0.6*np.exp(17.625*X[:,0]/(X[:,0]+243.04)))*1
```
```python
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize = (8,6))
plt.scatter(X[:,0], X[:,1], c=Y)
plt.show()
```
- Diseñe un clasificador binario por regresión logística lineal utilizando únicamente los primeros 80 datos. Muestre, además, el gráfico de la división del clasificador con los puntos de entrenamiento.
- Utilice el clasificador diseñado para clasificar a los 20 datos restantes. ¿Cuántas moléculas de agua se clasifican bien? ¿Cuántas mal?
| f0eb08396ebf5cc5193bea3d46db8c8018509860 | 38,525 | ipynb | Jupyter Notebook | Modulo1/ProblemasAdicionales.ipynb | douglasparism/SimulacionM2018 | 85953efb86c7ebf2f398474608dfda18cb4cf5b8 | [
"MIT"
] | null | null | null | Modulo1/ProblemasAdicionales.ipynb | douglasparism/SimulacionM2018 | 85953efb86c7ebf2f398474608dfda18cb4cf5b8 | [
"MIT"
] | null | null | null | Modulo1/ProblemasAdicionales.ipynb | douglasparism/SimulacionM2018 | 85953efb86c7ebf2f398474608dfda18cb4cf5b8 | [
"MIT"
] | null | null | null | 133.767361 | 28,540 | 0.864581 | true | 1,884 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.92079 | 0.734962 | __label__spa_Latn | 0.984883 | 0.545895 |
# Tutorial
We will solve the following problem using a computer to assist with the technical aspects:
```{admonition} Problem
The matrix $A$ is given by $A=\begin{pmatrix}a & 1 & 1\\ 1 & a & 1\\ 1 & 1 & 2\end{pmatrix}$.
1. Find the determinant of $A$
2. Hence find the values of $a$ for which $A$ is singular.
3. For the following values of $a$, when possible obtain $A ^ {- 1}$ and confirm
the result by computing $AA^{-1}$:
1. $a = 0$;
2. $a = 1$;
3. $a = 2$;
4. $a = 3$.
```
`sympy` is once again the library we will use for this.
We will start by our matrix $A$:
```python
import sympy as sym
a = sym.Symbol("a")
A = sym.Matrix([[a, 1, 1], [1, a, 1], [1, 1, 2]])
```
We can now create a variable `determinant` and assign it the value of the
determinant of $A$:
```python
determinant = A.det()
determinant
```
$\displaystyle 2 a^{2} - 2 a$
A matrix is singular if it has determinant 0. We can find the values of $a$ for
which this occurs:
```python
sym.solveset(determinant, a)
```
$\displaystyle \left\{0, 1\right\}$
Thus it is not possible to find the inverse of $A$ for $a\in\{0, 1\}$.
However for $a = 2$:
```python
A.subs({a: 2})
```
$\displaystyle \left[\begin{matrix}2 & 1 & 1\\1 & 2 & 1\\1 & 1 & 2\end{matrix}\right]$
```python
A.subs({a: 2}).inv()
```
$\displaystyle \left[\begin{matrix}\frac{3}{4} & - \frac{1}{4} & - \frac{1}{4}\\- \frac{1}{4} & \frac{3}{4} & - \frac{1}{4}\\- \frac{1}{4} & - \frac{1}{4} & \frac{3}{4}\end{matrix}\right]$
To carry out matrix multiplication we use the `@` symbol:
```python
A.subs({a: 2}).inv() @ A.subs({a: 2})
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$
and for $a = 3$:
```python
A.subs({a: 3}).inv()
```
$\displaystyle \left[\begin{matrix}\frac{5}{12} & - \frac{1}{12} & - \frac{1}{6}\\- \frac{1}{12} & \frac{5}{12} & - \frac{1}{6}\\- \frac{1}{6} & - \frac{1}{6} & \frac{2}{3}\end{matrix}\right]$
```python
A.subs({a: 3}).inv() @ A.subs({a: 3})
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$
```{important}
In this tutorial we have
- Created a matrix.
- Calculated the determinant of the matrix.
- Substituted values in the matrix.
- Inverted the matrix.
```
| 55379c4199ba86e74f21c2ffed5a7996f9980b7e | 6,461 | ipynb | Jupyter Notebook | book/tools-for-mathematics/04-matrices/tutorial/.main.md.bcp.ipynb | daffidwilde/pfm | dcf38faccee3c212c8394c36f4c093a2916d283e | [
"MIT"
] | 8 | 2020-09-24T21:02:41.000Z | 2020-10-14T08:37:21.000Z | book/tools-for-mathematics/04-matrices/tutorial/.main.md.bcp.ipynb | daffidwilde/pfm | dcf38faccee3c212c8394c36f4c093a2916d283e | [
"MIT"
] | 87 | 2020-09-21T15:54:23.000Z | 2021-12-19T23:26:15.000Z | book/tools-for-mathematics/04-matrices/tutorial/.main.md.bcp.ipynb | daffidwilde/pfm | dcf38faccee3c212c8394c36f4c093a2916d283e | [
"MIT"
] | 3 | 2020-10-02T09:21:27.000Z | 2021-07-08T14:46:27.000Z | 21.608696 | 219 | 0.438322 | true | 872 | Qwen/Qwen-72B | 1. YES
2. YES | 0.974435 | 0.857768 | 0.835839 | __label__eng_Latn | 0.873963 | 0.780267 |
# Steinberg, Dave. Vibration Analysis for Electronic Equipment, 2nd ed., 1988
Steve Embleton | 20161116 | Notes
```python
%matplotlib inline
```
## Chapter 1, Introduction
Modes and vibrations basics. Designs for one input may fail when used in other areas with different forcing frequencies closer to the devices natural frequency. Air, water, and land have different frequency ranges of interest and shock considerations. Optimization needs to consider shock and vibration. Not one or the other.
* Fasteners
* Our group has no official specification, so I need to make my own.
* Steinberg recommends slotted hex head screws
* In through holes, locknuts should be used instead of lock washers.
* pg 13. "A good vibration isolator is often a poor shock isolator, and a good shock isolator is often a poor vibration isolator. The proper design must be incorporated into the isolator to satisfy both the vibration and the shock requirements."
* Lee [5] recommends a maximum resonant frequency of 100 Hz and a max acceleration of 200G on electronic components.
* pg 15. "On tall narrow cabinets the load-carrying isolators should be at the base and stabilizing isolators should be at the top." Excessive deflection at the top can significantly affect the system modes.
## Chapter 2, Vibrations of Simple Electronic Equipment
Walks through:
* Solving simple systems for naturl frequency given a displacement
* Effects of relationships between frequency/acceleration to displacements
* Damping at resonance as a function of spring rate and transmissibility
* Transmissibility of a system undergoing forced periodic motion
* Calculations for multiple mass-spring systems
* Need to be careful of the asusmptions within each section. Most assume no or negligible damping.
```python
## Equation (2.10), solving for the natural frequency of a mass-spring system given the static deflection.
def fn_dst(d_st):
"""Returns the natural frquency of a mass-spring system when given a static deflection in mm."""
from math import pi
g = 9.8*1000 #gravity [mm/s^2]
fn = (1/(2*pi)) * (g / d_st)**0.5
return(fn)
```
```python
fn_dst(1.19)
```
14.443075742745915
### Section 2.7, Relation of Frequency and Acceleration to Displacement
* Mass-Spring system, not a function of damping
* Assumes displacement can be represented by $Y = Y_0\sin(\Omega t)$
* ${\dot Y} = \Omega Y_0 \cos(\Omega t)$ = Velocity
* ${\ddot Y} = -\Omega^2 Y_0 \sin(\Omega t)$ = Acceleration
* Max acceleration at $\sin(\Omega t)=1$
```python
## Similar to equation 2.30 except with gravity as an input so user can define units. Solving for Y_0
def Y_G_fn(G, f_n, g=9800):
"""Assumes a Mass-Spring System approximated as a single degree of freedom system, solution can be
taken in the form Y = Y_0 sin(Omega t).
Input acceleration in units of Gravity (G) and the frequency (f_n). Standard gravity (g) is equal to 9800 mm/s^2 and the
result yields an answer in mm. The input g can be changed to yield a result in the desired units."""
from math import pi
Y = g*G / (f_n*2*pi)**2
return(Y)
```
```python
print(Y_G_fn(7, 14.4))
```
8.379910780604229
### 2.9, Forced Vibrations with Viscous Damping
* Solving the response of a MSD system given a harmonic force acting on the mass.
* Can I find a similar solution to this as a function of a harmonic deflection at the base of the spring and damper?
* $m{\ddot Y} + c{\dot Y} + KY = P_0cos(\Omega t)$
* This derivation is not detailed, Eqn 2.43
```python
# Plotting Figure 2.17 using Eqn. 2.48.
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt
from sympy import zeros
R_c = np.arange(0,1,.1) # R_c = c/c_c. This is the ratio of the damping to the critical damping
R_Omega = np.arange(0,10,.01) # R_Omega = Omega / Omega_n. This is the ratio of the input frequency to the natural frequency.
# Initialize the transmissibility Q
Q = zeros(len(R_c), len(R_Omega))
# Equation 2.48
def Q_Rc_RO(R_Omega, R_c):
Q = sqrt((1 + (2 * R_Omega * R_c)**2)/((1 - R_Omega**2)**2 + (2 * R_Omega * R_c)**2))
return(Q)
for i in range(len(R_c)):
for j in range(len(R_Omega)):
Q[i,j] = Q_Rc_RO(R_Omega[j], R_c[i])
# Plot Results
plt.semilogy(R_Omega,Q.T)
plt.xlabel('$R_\Omega = \\frac{Forcing Frequency}{Natural Frequency} = \\frac{f}{f_n}$')
plt.ylabel('$Q = \\frac{{Maximum Output Force}}{Maximum Input Force}$')
plt.axis([0,10,0.1,10])
plt.plot([0,10],[1,1],'k')
plt.show()
```
Damping occurs to the right of the amplification peak at $R_{\Omega} = \sqrt(2)$.
## Chapter 3, Lumped Masses for Electronic Assemblies
* Correctly selecting boundary conditions has a greater efect then mass placement, up to 50%, on finding the correct natural frequency.
* Simplifying a uniform mass to a point can reduce the natural frequency by 30%
* Includes an analysis of a two mass system using general equations of motion.
* Does not detail the method for solving the equations that yeild Fig. 3.17.
## Chapter 4, Beam Structures for Electronic Subassemblies
* Calculates natural frequency, $f_n$, using representative equations for the deflection and boundary conditions of a beam.
* Beam natural frequency depends ont he ratio of $\frac{E}{\rho}$, Young's modulus over density.
* Estimating the natural frequency of non-uniform cross sections yeilds a $f_n$ lower then actual.
* Solves composites by translating the stiffness factor of the laminate into an equivalent base material thickness. Only works when bending is parallel to the lamination.
## Chapter 6, Printed-Circuit Boards and Flat Plates
* Calculate thedeflection and stress in a circuit board as a function of boundary conditions and component location.
* Effect of ribs and rib orientation
* For ribs to increase the stiffness of a system, they need to transfer the load to a support. Ribs are most effective when oriented to align with supports.
## Chapter 7, Preventing Sinusoidal-Vibration Failures
* Most environmental failures in military systems are due to thermal expansion.
## Chapter 8, Understanding Random Vibration
* "If two major structural resonances occur close to one another, they may produce severe dynamic coupling effects that can porduce rapid fatigue failures."
* For a cyclically alternating electric current, RMS is equal to the value of the direct current that would produce the same average power dissipation in a resistive load.
* Failure conditions
1. High acceleration levels
2. High stress levels
3. Large displacement amplitudes
4. Electrical signals out of tolerance - N.A.
* 3$\sigma$ is often the limit used because it captures 99.7% of the accelerations and most labs have a 3$\sigma$ limit on their equipment.
* In the math libray, natural log is called with (`log`) and log base 10 with (`log10`)
* Multiple degree of freedom systems (8.29). $G_{out} = \sqrt{\sum{P_i \Delta f_i Q_i^2}}$
* From: Crandall, Random Vibration, 1958. $P_{out} = Q^2 P$. Book available [online](https://babel.hathitrust.org/cgi/pt?id=mdp.39015060919126;view=1up;seq=17).
```python
## Calculating the Grms of a shaped random vibration input curve.
# Sec. 8.8, Eqns. 8.4 - 8.6.
def grms (freq, PSD):
"""Returns the Grms value for a shaped random vibration input curve.
Input the frequency and PSD values as a list in the form grms(freq, PSD).
The frequency and PSD list must have the same number of elements."""
from math import log10, log
A = 0
if len(freq)!=len(PSD):
print("Error: The number of elements in the Frequency and PSD lists do not match.")
else:
for i in range(1,len(freq)):
# Calculate the slope
dB = 10 * log10(PSD[i]/PSD[i-1]) # dB
OCT = log10(freq[i]/freq[i-1])/log10(2) # Octave
S = dB/OCT # Slope
# Calculate the area in units of [G^2]
if S == 0:
A = A + PSD[i] * (freq[i] - freq[i-1])
elif S == -3:
A = A + -freq[i] * PSD[i] * log(freq[i-1] / freq[i])
else:
A = A + (3 * PSD[i]/(3 + S)) * (freq[i] - (freq[i-1]/freq[i])**(S/3) * freq[i-1])
# Calculate the Grms [G]
grms = A**(0.5)
return(grms)
```
```python
# Find the GRMS of the ASTM common carrier profile.
Common_Carrier = [[1, 4, 100, 200], [.0001, .01, .01, .001]]
grms(Common_Carrier[0], Common_Carrier[1])
```
1.1469379669303117
```python
## Response to a random vibration, from Section 8.
# Valid for a single resonance with a Q > 10.
def resp_psd_Q (P_in, Q, f_n, g=9800):
"""At a natural frequency, calculates the Grms and Zrms (relative motion) of the point given the transmissibility,
and PSD."""
from math import pi
Grms = ((pi/2)*P_in*f_n*Q)**0.5 # RMS acceleration, Eqn. 8.61
#g = 9.8*1000 # Gravity, [mm/s^2]
Zrms = Y_G_fn(Grms, f_n, g) # Relative motion, Eqn. 8.62. Ref. Eqn. 2.30
return(Grms, Zrms)
```
```python
resp_psd_Q (0.00072, 3.146, 30.4)
```
(0.3288836909042307, 0.08834083693899289)
## Chapter 9, Designing for Shock Environments
* Eqn. 9.40 - Damping ratio ~ $R_c = \frac{C}{C_c} = \frac{1}{2 Q}$
* $Q = \frac{1}{2 R_c}$
* Figure 9.17 illustrates the difficulty in designing for both shock and vibration. Shock isolation occurs to the left of the amplification peak while vibration isoliation occurs on the right. Equation for calculating amplification missing for graphs in Chp. 9.
* Velocity shocks, used when modeling drop tests.
* Assuming single DOF M-S-D, the dork done on the spring is equal to the kinetic energy of the mass. $\frac{1}{2} K Y^2 = \frac{1}{2} M V^2$
* Max accelertaion: $G_{max} = \frac{a}{g} = \frac{V}{g} \sqrt(\frac{K}{M}) = \frac{V}{g} \Omega$, or $G = \frac{\Delta V \Omega}{g}$.
* Can not recreate Fig. 9.21 without a relationship between the amplification and the frequency ratio.
* Impact damping only occurs at $R = \frac{frequency_{response}}{frequency_{source}} <= 0.5$
* Danger area from $0.5 <= R <= 2$
* This assumes an damping ratio of $R_c = 0.1$ and $Q = 5$
* These equations do not relate to velocity shocks in which the forcing frequncy is determined by the natural frequency.
* For velocity shocks max expected G can be calculated or if the response is known, the natural freuqncy should fall out as function of the repsonse G and the height.
```python
# Calculate the velocity before impact during drop tests
def vf_H (H, g=9800):
'''Solves for the velocity before impact of an object dropped from a height H.
Assumes a standard gravity of 9800 mm/s. Input the gravity in the same unit system as the height.'''
v = (2*g*H)**0.5
return v
# For a 2" drop
vf = vf_H(2*2.54)
print(vf)
```
315.54397474837003
```python
# Find G_max due to a drop test
def gmax_drop (H, fn, g=9800):
'''Find the maximum acceleration due to a velocity shock given the drop height (H),
natural frequency (fn), and gravity (g). The gravity should be input in the same
unit system as the velocity. Defaults to mm/s^2
Returns max acceleration in Gs (unitless)'''
from math import pi
v = (2*g*H)**0.5
g_max = v*(2*pi*fn)/g
return(g_max)
def fn_drop (H, g_max, g=9800):
'''Find the natural frequency of a system based on a shock
from a given the height (H), maximum acceleration response (g_max),
and gravity (g). The gravity should be input in the same
unit system as the velocity. Defaults to mm/s^2
Returns natural freuqncy f_n [Hz]'''
from math import pi
v = (2*g*H)**0.5
fn = g_max*g/(v*2*pi)
return(fn)
H = 2 #[in]
H = H*25.4 #[mm]
g_max = gmax_drop(H, 14)
fn = fn_drop(H, 10)
print('Max G: ', g_max)
print('Natural Frequency: ', fn)
```
Max G: 8.95656991107948
Natural Frequency: 15.630983891145293
## Chapter 10, Designing Electronic Boxes
* Bolted efficiency factors for fastener performance during vibration. Ranges from 0 for no bolt to 100% for a welded joint. Easy to remove and quarter turn fasteners generally have a 10% BEF.
## Chapter 11, Vibration Fixtures and Vibration Testing
* Covers standard lab vibration equipment
## Chapter 12, Fatigue in Electronic Structures
| 07e0b759b8847d3fdcfb7807416c012f0f913bed | 58,490 | ipynb | Jupyter Notebook | public/ipy/Steinberg_1988/Steinberg_1988.ipynb | stembl/stembl.github.io | 5108fc33dccd8c321e1840b62a4a493309a6eeff | [
"MIT"
] | 1 | 2016-12-10T04:04:33.000Z | 2016-12-10T04:04:33.000Z | public/ipy/Steinberg_1988/Steinberg_1988.ipynb | stembl/stembl.github.io | 5108fc33dccd8c321e1840b62a4a493309a6eeff | [
"MIT"
] | 3 | 2021-05-18T07:27:17.000Z | 2022-02-26T02:16:11.000Z | public/ipy/Steinberg_1988/Steinberg_1988.ipynb | stembl/stembl.github.io | 5108fc33dccd8c321e1840b62a4a493309a6eeff | [
"MIT"
] | null | null | null | 103.339223 | 39,330 | 0.825543 | true | 3,490 | Qwen/Qwen-72B | 1. YES
2. YES | 0.752013 | 0.774583 | 0.582496 | __label__eng_Latn | 0.984653 | 0.191664 |
(sec:LDs)=
# The Method of Lagrangian Descriptors
## Introduction
One of the biggest challenges of dynamical systems theory or nonlinear dynamics is the development of mathematical techniques that provide us with the capability of exploring transport in phase space. Since the early 1900, the idea of pursuing a qualitative description of the solutions of differential equations, which emerged from the pioneering work carried out by Henri Poincaré on the three body problem of celestial mechanics {cite}`hp1890`, has had a profound impact on our understanding of the nonlinear character of natural phenomena. The qualitative theory of dynamical systems has now been widely embraced by the scientific community.
The goal of this section is to describe the details behind the method of Lagrangian descriptors. This simple and powerful technique unveils regions with qualitatively distinct dynamical behavior, the boundaries of which consist of invariant manifolds. In a procedure that is best characterised as *phase space tomography*, we can use low-dimensional slices we are able to completely reconstruct the intricate geometry of underlying invariant manifolds that governs phase space transport.
Consider a general time-dependent dynamical system given by the equation:
```{math}
---
label: eq:gtp_dynSys
---
\begin{equation}
\dfrac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x},t) \;,\quad \mathbf{x} \in \mathbb{R}^{n} \;,\; t \in \mathbb{R} \;,
\label{eq:gtp_dynSys}
\end{equation}
```
where the vector field $\mathbf{f}(\mathbf{x},t)$ is assumed to be sufficiently smooth both in space and time. The vector field $\mathbf{f}$ can be prescribed by an analytical model or given from numerical simulations as a discrete spatio-temporal data set. For instance, the vector field could represent the velocity field of oceanic or atmospheric currents obtained from satellite measurements or from the numerical solution of geophysical models. In the context of chemical reaction dynamics, the vector field could be the result of molecular dynamics simulations. For any initial condition $\mathbf{x}(t_0) = \mathbf{x}_0$, the system of first order nonlinear differential equations given in Eq. {eq}`eq:gtp_dynSys` has a unique solution represented by the trajectory that starts from that initial point $\mathbf{x}_0$ at time $t_0$.
Since all the information that determines the behavior and fate of the trajectories for the dynamical system is encoded in the initial conditions (ICs) from which they are generated, we are interested in the development of a mathematical technique with the capability of revealing the underlying geometrical structures that govern the transport in phase space.
Lagrangian descriptors (LDs) provide us with a simple and effective way of addressing this challenging task, because it is formulated as a scalar trajectory-diagnostic tool based on trajectories. The elegant idea behind this methodology is that it assigns to each initial condition selected in the phase space a positive number, which is calculated by accumulating the values taken by a predefined positive function along the trajectory when the system is evolved forward and backward for some time interval. The positive function of the phase space variables that is used to define different types of LD might have some geometrical or physical relevance, but this is not a necessary requirement for the implementation of the method. This approach is remarkably similar to the visualization techniques used in laboratory experiments to uncover the beautiful patterns of fluid flow structures with the help of drops of dye injected into the moving fluid {cite}`chien1986`. In fact, the development of LDs was originally inspired by the desire to explain the intricate geometrical flow patterns that are responsible for governing transport and mixing processes in Geophysical flows. The method was first introduced a decade ago based on the arclength of fluid parcel trajectories {cite}`madrid2009,mendoza2010`. Regions displaying qualitatively distinct dynamics will frequently contain trajectories with distinct arclengths and a large variation of the arclength indicate the presence of separatrices consisting of invariant manifolds {cite}`mancho2013lagrangian`.
Lagrangian descriptors have advantages in comparison with other methodologies for the exploration of phase space structures. A notable advantage is that they are straightforward to implement.
Since its proposal as a nonlinear dynamics tool to explore phase space, this technique has found a myriad of applications in different scientific areas. For instance, it has been used in oceanography to plan transoceanic autonomous underwater vehicle missions by taking advantage of the underlying dynamical structure of ocean currents {cite}`ramos2018`. Also, it has been shown to provide relevant information for the effective management of marine oil spills {cite}`gg2016`. LDs have been used to analyze the structure of the Stratospheric Polar Vortex and its relation to sudden stratospheric warmings and also to ozone hole formation {cite}`alvaro1,alvaro2,curbelo2019a,curbelo2019b`. In all these problems, the vector field defining the dynamical system is a discrete spatio-temporal dataset obtained from the numerical simulation of geophysical models. Recently, this tool has also received recognition in the field of chemistry, for instance in transition state theory {cite}`craven2015lagrangian,craven2016deconstructing,craven2017lagrangian,revuelta2019unveiling`, where the computation of chemical reaction rates relies on the know\-ledge of the phase space structures. These high-dimensional structures characterizing reaction dynamics are typically related to Normally Hyperbolic Invariant Manifolds (NHIMs) and their stable and unstable manifolds that occur in Hamiltonian systems. Other applications of LDs to chemical problems include the analysis of isomerization reactions {cite}`naik2020,GG2020b`, roaming {cite}`krajnak2019,gonzalez2020`, the study of the influence of bifurcations on the manifolds that control chemical reactions {cite}`GG2020a`, and also the explanation of the dynamical matching mechanism in terms of the existence of heteroclinic connections in a Hamiltonian system defined by Caldera-type potential energy surfaces {cite}`katsanikas2020a`.
### Lagrangian Descriptors versus Poincaré Maps
Poincaré maps have been a standard and traditional technique for understanding the global phase space structure of dynamical systems. However, Lagrangian descriptors offer substantial advantages over Poincaré maps. We will describe these advantages in the context of the most common settings in which they are applied. However, we note that Lagrangian descriptors can be applied in exactly the same way to both Hamiltonian and non-Hamiltonian vector fields. In keeping with the spirit of this book, we will frame our discussion and description in the Hamiltonian setting.
### Autonomous Hamiltonian vector fields
The consideration of the dimension of different geometric objects is crucial to understanding the advantages of Lagrangian descriptors over Poincaré maps. Therefore we will first consider the "simplest" situation in which these arise — the autonomous Hamiltonian systems with two degrees of freedom.
A two degree-of-freedom Hamiltonian system is described by a four dimensional phase space described by coordinates $(q_1, q_2, p_1, p_2)$. Moreover, we have seen in Section REF that trajectories are restricted to a three dimensional energy surface ("energy conservation in autonomous Hamiltonian systems"). We choose a two dimensional surface within the energy surface that is transverse to the Hamiltonian vector field. This means that at no point on the two dimensional surface is the Hamiltonian vector field tangent to the surface and that at every point on the surface the Hamiltonian vector field has the same directional sense (this is defined more precisely in REF). This two dimensional surface is referred to as a surface of section (SOS) or a Poincaré section, and it is the domain of the Poincaré map. The image of a point under the Poincaré map is the point on the trajectory, starting from that point, that first returns to the surface (and this leads to the fact that the Poincaré map is sometimes referred to as a "first return map").
The practical implementation of this procedure gives rise to several questions. Given a specific two degree-of-freedom Hamiltonian system can we find a two dimensional surface in the three dimensional energy surface having the property that it is transverse to the Hamiltonian vector field and "most" trajectories with initial conditions on the surface return to the surface? In general, the answer is "no" (unless we have some useful a priori knowledge of the phase space structure of the system). The advantage of the method of Lagrangian descriptors is that none of these features are required for its implementation, and it gives essentially the same information as Poincaré maps.
However, the real advantage comes in considering higher dimensions, e.g autonomous Hamiltonian systems with more than two degrees-of-freedom. For definiteness, we will consider a three degree-of-freedom autonomous Hamiltonian system. In this case the phase space is six dimensional and the energy surface is five dimensional. A cross section to the energy surface, in the sense described above, would be four dimensional (if an appropriate cross-section could be found). Solely on dimensionality considerations, we can see the difficulty. Choosing "enough" initial conditions on this four dimensional surface so that we can determine the phase space structures that are mapped out by the points that return to the cross-section is "non-trivial" (to say the least), and the situation only gets more difficult when we go to more than three degrees-of-freedom. One might imagine that you could start by considering lower dimensional subsets of the cross section. However, the probability that a trajectory would return to a lower dimensional subset is zero. Examples where Lagrangian descriptors have been used to analyse phase space structures in two and three degree-of-freedom Hamiltonian systems with this approach are given in {cite}`demian2017,naik2019a,naik2019b,GG2019`.
Lagrangian descriptors avoid all of these difficulties. In particular, they can be computed on any subset of the phase space since there is no requirement for trajectories to return to that subset. Since phase space structure is encoded in the initial conditions (not the final state) of trajectories a dense grid of initial conditions can be placed on any subset of the phase space and a "Lagrangian descriptor field" can be computed for that subset with high resolution and accuracy. Such computations are generally not possible using the Poincaré map approach.
### Nonautonomous Hamiltonian vector fields
Nonautonomous vector fields are fundamentally different than autonomous vector fields, and even more so for Hamiltonian vector fields. For example, one degree-of-freedom autonomous Hamiltonian vector fields are integrable. One degree-of-freedom autonomous Hamiltonian vector fields may exhibit chaos. Regardless of the dimension, a very significant difference is that energy is not conserved for nonautonomous Hamiltonian vector fields. Nevertheless, Lagrangian descriptors can be applied in exactly the same way as for autonomous Hamiltonian vector fields, *regardless of the nature of the time dependence. We add this last remark since the concept of Poincaré maps is not applicable unless the time dependence is periodic.*
## Formulations for Lagrangian Descriptors
### The Arclength Definition
In order to build some intuition on how the method works and understand its very simple and straightforward implementation, we start with the arclength definition mentioned in the previous section. This version of LDs is also known in the literature as function $M$. Consider any region of the phase space where one would like to reveal structures at time $t = t_0$, and create a uniformly-spaced grid of ICs $\mathbf{x}_0 = \mathbf{x}(t_0)$ on it. Select a fixed integration time $\tau$ that will be used to evolve all the trajectories generated from these ICs forward and backward in time for the time intervals $[t_0,t_0+\tau]$ and $[t_0-\tau,t_0]$ respectively. This covers a temporal range of $2\tau$ centered at $t = t_0$, marking the time at which we want to take a snapshot of the underlying structures in phase space. The arclength of a trajectory in forward time can be easily calculated by solving he integral:
```{math}
---
label: eq:M_function_fw
---
\begin{equation}
\mathcal{L}^{f}(\mathbf{x}_{0},t_0,\tau) = \int^{t_0+\tau}_{t_0} ||\dot{\mathbf{x}}|| \; dt \;,
\label{eq:M_function_fw}
\end{equation}
```
where
$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}(t;\mathbf{x}_0),t)$ and
$||\cdot||$ is the Euclidean norm applied to the vector field defining the dynamical system in Eq. {eq}`eq:gtp_dynSys` . Similarly, one can define the arclength when the trajectory evolves in backward time as:
```{math}
---
label: eq:M_function_bw
---
\begin{equation}
\mathcal{L}^{b}(\mathbf{x}_{0},t_0,\tau) = \int^{t_0}_{t_0-\tau} ||\dot{\mathbf{x}}|| \; dt \;,
\label{eq:M_function_bw}
\end{equation}
```
It is common practice to combine these two quantities into one scalar value so that:
```{math}
---
label: eq:M_function
---
\begin{equation}
\mathcal{L}(\mathbf{x}_{0},t_0,\tau) = \mathcal{L}^{b}(\mathbf{x}_{0},t_0,\tau) + \mathcal{L}^{f}(\mathbf{x}_{0},t_0,\tau) \;,
\label{eq:M_function}
\end{equation}
```
and in this way the scalar field provided by the method will simultaneously reveal the location of the stable and unstable manifolds in the same picture. However, if one only considers the output obtained from the forward or backward contributions, we can separately depict the stable and unstable manifolds respectively.
We illustrate the logic behind the capabilities of this technique to display the stable and unstable manifolds of hyperbolic points with a very simple example, the one degree-of-freedom (DoF) linear Hamiltonian saddle system given by:
```{math}
---
label: eq:1dof_saddle
---
\begin{equation}
H(q,p) = \dfrac{1}{2} \left(p^2 - q^2\right) \quad \Leftrightarrow \quad
\begin{cases}
\dot{q} = \dfrac{\partial H}{\partial p} = p \\[.4cm]
\dot{p} = -\dfrac{\partial H}{\partial q} = q
\end{cases}
\label{eq:1dof_saddle}
\end{equation}
```
```{figure} figures/1d_saddle_ld.png
---
name: fig:1d_saddle
---
Forward {eq}`eq:M_function_bw`, backward {eq}`eq:M_function_fw` and combined {eq}`eq:M_function` Lagrangian descriptors for system {eq}`eq:1dof_saddle` respectively.
```
We know that this dynamical system has a hyperbolic equilibrium point at the origin and that its stable and unstable invariant manifolds correspond to the lines $p = \pm q$ respectively (refer to hyperbolic section). Outside of these lines, the trajectories are hyperbolas. What happens when we apply LDs to this system? Why does the method pick up the manifolds? Notice first that in {numref}`fig:1d_saddle` the value attained by LDs at the origin is zero, because it is an equilibrium point and hence it is not moving. Therefore, the arclength of its trajectory is zero. Next, let's consider the forward time evolution term of LDs, that is, $\mathcal{L}^f$. Take two neighboring ICs, one lying on the line that corresponds to the stable manifold and another slightly off it. If we integrate them for a time $\tau$, the initial condition that is on the manifold converges to the origin, while the other initial condition follows the arc of a hyperbola. If $\tau$ is small, both segments of trajectory are comparable in length, so that the value obtained from LDs for both ICs is almost equal. However, if we integrate the system for a larger $\tau$, the arclengths of the two trajectories become very different, because one converges while the other one diverges. Therefore, we can clearly see in {numref}`fig:1d_saddle` that the LD values vary significantly near the stable manifold in comparison to those elsewhere. Moreover, if we consider a curve of initial conditions that crosses transversely the stable manifold, the LD value along it attains a minimum on the manifold. Notice also that by the same argument we gave above, but constructing the backward time evolution term of LDs, $\mathcal{L}^b$, we can arrive to the conclusion that backward integration of initial conditions will highlight the unstable manifold of the hyperbolic equilibrium point at the origin. It is important to remark here that, although we have used above the simple linear saddle system as an example to illustrate how the method recovers phase space structure, this argument also applies to a nonlinear system with an hyperbolic point, whose stable and unstable manifolds are convoluted curves.
The sharp transitions obtained for the LD values across the stable and unstable manifolds, which imply large values of its gradient in the vicinity of them, are known in the literature as "singular features". These features present in the LD scalar field are very easy to visualize and detect when plotting the output provided by the method. We will see shortly that there exists a rigorous mathematical connection between the "singular features" displayed by the LD output and the stable and unstable manifolds of hyperbolic points. This result was first proved in {cite}`lopesino2017` for two-dimensional flows, it was extended to 3D dynamical systems in {cite}`gg2018`, and it has also been recently established for the stable and unstable manifolds of normally hyperbolic invariant manifolds in Hamiltonian systems with two or more degrees of freedom in {cite}`demian2017,naik2019a`. In fact, the derivation of this relationship relies on an alternative definition for LDs, where the positive scalar function accumulated along the trajectories of the system is the $p$-norm of the vector field that determines the flow. Considering this approach, the LD scalar field becomes now non-differentiable at the phase space points that belong to a stable or unstable manifold, and consequently the gradient at these locations is unbounded. This property is crucial in many ways, since it allows us to easily recover the location of the stable and unstable manifolds in the LD plot as if they were the edges of objects that appear in a digital photograph.
One key aspect that needs to be accounted for when setting up LDs for revealing the invariant manifolds in phase space, is the crucial role that the integration time $\tau$ plays in the definition of the method itself. It is very important to appreciate this point, since $\tau$ is the parameter responsible for controlling the complexity and intricate geometry of the phase space structures revealed in the scalar field displayed from the LD computation. A natural consequence of increasing the value for $\tau$ is that richer details of the underlying structures are unveiled, since this implies that we are incorporating more information about the past and future dynamical history of trajectories in the computation of LDs. This means that $\tau$ in some sense is intimately related to the time scales of the dynamical phenomena that occur in the model under consideration. This connection makes the integration time a problem-dependent parameter, and hence, there is no general "golden rule" for selecting its value for exploring phase space. Consequently, it is usually selected from the dynamical information obtained by performing beforehand several numerical experiments, and one needs to bear in mind the compromise that exists between the complexity of the structures revealed by the method to explain a certain dynamical mechanism, and the interpretation of the intricate manifolds displayed in the LD scalar output.
To finish this part on the arclength definition of LDs we show that the method is also capable of revealing other invariant sets in phase space such as KAM tori, by means of studying the convergence of the time averages of LDs. We illustrate this property with the 1 DoF linear Hamiltonian with a center equilibrium at the origin:
```{math}
---
label:
---
\begin{equation}
H(q,p) = \dfrac{\omega}{2} \left(p^2 + q^2\right) \quad \Leftrightarrow \quad
\begin{cases}
\dot{q} = \dfrac{\partial H}{\partial p} = \omega \, p \\[.4cm]
\dot{p} = -\dfrac{\partial H}{\partial q} = -\omega \, q
\end{cases}
\end{equation}
```
From the definition of the Hamiltonian we can see that the solutions to this system form a family of concentric circles about the origin with radius $R = \sqrt{2H/\omega}$. Moreover, each of this circles encloses an area of $A(H) = 2\pi H / \omega$. Using the definition of the Hamiltonian and the information provided by Hamilton's equations of motion we can easily evaluate the arclength LD for this system:
```{math}
---
label:
---
\begin{equation}
\mathcal{L}(q_0,p_0,\tau) = \int^{\tau}_{-\tau} \sqrt{\left(\dot{q}\right)^2 + \left(\dot{p}\right)^2} \; dt = \omega \int^{\tau}_{-\tau} \sqrt{q^2 + p^2} \; dt = 2 \tau \sqrt{2 \omega H_0}
\end{equation}
```
where the initial condition $(q_0,p_0)$ has energy $H = H_0$ and therefore it lies on a circular trajectory with radius $\sqrt{2H_0/\omega}$. Hence, in this case all trajectories constructed from initial conditions on that circle share the same LD value. Moreover, if we consider the convergence of the time average of LD, this yields:
```{math}
---
label:
---
\begin{equation}
\lim_{\tau \to \infty} \langle \, \mathcal{L}(\tau) \, \rangle = \dfrac{1}{2\tau} \int^{\tau}_{-\tau} \sqrt{\left(\dot{q}\right)^2 + \left(\dot{p}\right)^2} \; dt = \sqrt{2 \omega H_0} = \omega \sqrt{\frac{A}{\pi}}
\end{equation}
```
### The $p$-norm Definition
Besides the arclength definition of Lagrangian descriptors introduced in the previous subsection, there are many other versions used throughout the literature. An alternative definition of LDs, which is inspired by the $p$-norm of the vector field describing the dynamical system. We remark that we use the expression for $p\in(0,1]$, while the $p$-norm is only a norm for $p\geq 1$. For the sake of consistency with literature we retain the name $p$-norm even for $p<1$. The LD is defined as:
```{math}
---
label: eq:Mp_function
---
\begin{equation}
\mathcal{L}_p(\mathbf{x}_{0},t_0,\tau) = \int^{t_0+\tau}_{t_0-\tau} \, \sum_{k=1}^{n} \vert f_{k}(\mathbf{x}(t;\mathbf{x}_0),t) \vert^p \; dt \;, \quad p \in (0,1]
\label{eq:Mp_function}
\end{equation}
```
where $f_{k}$ is the $k$-th component of the vector field in Eq. {eq}`eq:gtp_dynSys` . Typically, the value used for the parameter $p$ in this version of the method is $p = 1/2$. Recall that all the variants of LDs can be split into its forward and backward time integration components in order to detect the stable and unstable manifolds separately. Hence, we can write:
```{math}
---
label:
---
\begin{equation}
\mathcal{L}_p(\mathbf{x}_{0},t_0,\tau) = \mathcal{L}^{b}_p(\mathbf{x}_{0},t_0,\tau) + \mathcal{L}^{f}_p(\mathbf{x}_{0},t_0,\tau)
\end{equation}
```
where we have that:
```{math}
---
label:
---
\begin{equation}
\begin{split}
\mathcal{L}_p^{b}(\mathbf{x}_{0},t_0,\tau) & = \int^{t_0}_{t_0-\tau} \sum_{k=1}^{n} |f_{k}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt \\[.2cm]
\mathcal{L}_p^{f}(\mathbf{x}_{0},t_0,\tau) & = \int^{t_0+\tau}_{t_0} \sum_{k=1}^{n} |f_{k}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt
\end{split}
\end{equation}
```
Although this alternative definition of LDs does not have such an intuitive physical interpretation as that of arclength, it has been shown to provide many advantages. For example, it allows for a rigorous analysis of the notion of "singular features" and to establish the mathematical connection of this notion to stable and unstable invariant manifolds in phase space. Another important aspect of the $p$-norm of LDs is that, since in the definition all the vector field components contribute separately, one can naturally decompose the LD in a way that allows to isolate individual degrees of freedom. This was used in {cite}`demian2017,naik2019a` to show that the method can be used to successfully detect NHIMs and their stable and unstable manifolds in Hamiltonian systems. Using the $p-$norm definition, it has been shown that the points on the LD contour map with non-differentiability identifies the invariant manifolds' intersections with the section on which the LD is computed, for specific systems {cite}`lopesino2017,demian2017,naik2019a`. In this context, where a fixed integration time is used, it has also been shown that the LD scalar field attains a minimum value at the locations of the stable and unstable manifolds, and hence:
```{math}
---
label: eq:min_LD_manifolds
---
\begin{equation}
\mathcal{W}^u(\mathbf{x}_{0},t_0) = \textrm{argmin } \mathcal{L}_p^{b}(\mathbf{x}_{0},t_0,\tau) \quad,\quad \mathcal{W}^s(\mathbf{x}_{0},t_0) = \textrm{argmin } \mathcal{L}_p^{f}(\mathbf{x}_{0},t_0,\tau) \;,
\label{eq:min_LD_manifolds}
\end{equation}
```
where $\mathcal{W}^u$ and $\mathcal{W}^s$ are, respectively, the unstable and stable manifolds calculated at time $t_0$ and $\textrm{argmin}(\cdot)$ denotes the phase space coordinates $\mathbf{x}_0$ that minimize the corresponding function. In addition, NHIMs at time $t_0$ can be calculated as the intersection of the stable and unstable manifolds:
```{math}
---
label: eq:min_NHIM_LD
---
\begin{equation}
\mathcal{N}(\mathbf{x}_{0},t_0) = \mathcal{W}^u(\mathbf{x}_{0},t_0) \cap \mathcal{W}^s(\mathbf{x}_{0},t_0) = \textrm{argmin } \mathcal{L}_p(\mathbf{x}_{0},t_0,\tau)
\label{eq:min_NHIM_LD}
\end{equation}
```
As we have pointed out, the location of the stable and unstable manifolds on the slice can be obtained by extracting them from the ridges of the gradient field, $\Vert \nabla \mathcal{L}^{f}_p \Vert$ or $\Vert \nabla \mathcal{L}^{b}_p \Vert$, respectively, since manifolds are located at points where the the forward and backward components of the function $\mathcal{L}_p$ are non-differentiable. Once the manifolds are known one can compute their intersection by means of a root search algorithm. In specific examples we have been able to extract NHIMs from the intersections. An alternative method to recover the manifolds and their associated NHIM is by minimizing the functions $\mathcal{L}^{f}_p$ and $\mathcal{L}^{b}_p$ using a search optimization algorithm. This second procedure and some interesting variations are described in {cite}`feldmaier2019`.
We finish the description of the $p$-norm version of LDs by showing that this definition recovers the stable and unstable manifolds of hyperbolic equilibria at phase space points where the scalar field is non-differentiable. We demonstrate this statement for the 1 DoF linear Hamiltonian introduced in Eq. {eq}`eq:Mp_function` that has a saddle equilibrium point at the origin. The general solution to this dynamical system can be written as:
```{math}
---
label:
---
\begin{equation}
q(t) = \dfrac{1}{2} \left(A e^{t} + B e^{-t}\right) \quad,\quad p(t) = \dfrac{1}{2} \left(A e^{t} - B e^{-t}\right)
\end{equation}
```
where $\mathbf{x}_0 = (q_0,p_0)$ is the initial condition and $A = q_0 + p_0$ and $B = q_0 - p_0$. If we compute the forward plus backward contribution of the LD function, we get that for $\tau$ sufficiently large the scalar field behaves asymptotically as:
```{math}
---
label: eq:M_hyp_asymp
---
\begin{equation}
\mathcal{L}_{p}\left(\mathbf{x}_0,\tau\right) \sim \left(|A|^{p} + |B|^{p}\right) e^{p \tau}
\label{eq:M_hyp_asymp}
\end{equation}
```
(sec:LDaction)=
### Lagrangian Descriptors Based on the Classical Action
In this section we discuss a formulation of Lagrangian descriptors that has a direct connection to classical Hamiltonian mechanics, namely the principle of least action. The principle of least action is treated in most advanced books on classical mechanics; see, for example, {cite}`arnol2013mathematical,goldstein2002classical,landau2013mechanics`. An intuitive and elementary discussion of the principle of least action is given by Richard Feynman in the following lecture <https://www.feynmanlectures.caltech.edu/II_19.html>,
To begin, we note that the general form of Lagrangian descriptors are as follows:
```{math}
---
label:
---
\begin{equation}
\mathcal{L}(\text{initial condition}) = \int_{t_0 - \tau}^{t_0 + \tau} \text{PositiveFunction} \, (\text{trajectory}) \; dt
\end{equation}
```
The positivity of the integrand is often imposed via an absolute value. In our discussion below we show that this is not necessary for the action.
#### One Degree-of-Freedom Autonomous Hamiltonian Systems
We consider a Hamiltonian of the form:
```{math}
---
label:
---
\begin{equation}
H(q, p) = \frac{p^2}{2m} + V(q) \;, \quad (q,p) \in \mathbb{R}^2.
\end{equation}
```
The integrand for the action integral is the following:
```{math}
---
label:
---
\begin{equation}
p \, dq \;,
\end{equation}
```
Using the chain rule and the definition of momentum, the following calculations are straightforward:
```{math}
---
label:
---
\begin{eqnarray}
p \, dq = p \frac{dq}{dt} dt = \frac{p^2}{m} dt \;.
\end{eqnarray}
```
The quantity $\frac{p^2}{m}$ is twice the kinetic energy and is known as the *vis viva*. It is the integrand for the integral that defines Maupertuis principle, which is very closely related to the principle of least action. We can also write $p \, dq$ slightly differently using Hamilton's equations:
```{math}
---
label:
---
\begin{equation}
\dfrac{p^2}{m}= 2 (H - V(q) ),
\end{equation}
```
from which it follows that:
```{math}
---
label:
---
\begin{equation}
p \, dq = \dfrac{p^2}{m} \, dt = 2 (H-V(q)) \, dt.
\end{equation}
```
Therefore, the positive quantities that appear multiplying the $dt$ are candidates for the integrand of Lagrangian descriptors {cite}`montoya2020phase`.
We will illustrate next how the action-based LDs successfully detects the stable invariant manifold of the hyperbolic equilibrium point in system introduced in Eq. {eq}`eq:1dof_saddle`. We know that the solutions to this dynamical system are given by the expressions:
```{math}
---
label:
---
\begin{equation}
q(t) = q_0 \cosh(t) + p_0 \sinh(t) \quad,\quad p(t) = p_0 \cosh(t) + q_0 \sinh(t)
\end{equation}
```
where $(q_0,p_0)$ represents any initial condition. We know from (refer to hyperbolic section) that the stable invariant manifold is given by $q = -p$. We compute the forward LD:
```{math}
---
label:
---
\begin{equation}
\begin{split}
\mathcal{A}^{f}(q_0,p_0,\tau) & = \int_{0}^{\tau} p \, \dfrac{dq}{dt} \, dt = \int_{0}^{\tau} p^2 \, dt = \\[.2cm]
& = \dfrac{1}{2} \left(p_0^2 - q_0^2\right) \tau + \dfrac{1}{4}\left(q_0^2 + p_0^2\right) \sinh(2\tau) + \dfrac{1}{2} q_0 \, p_0 \left(\cosh(2\tau) - 1\right)
\end{split}
\end{equation}
```
It is a simple exercise to check that $\mathcal{A}^{f}$ attains a local minimum at the points:
```{math}
---
label:
---
\begin{equation}
q_0 = - \dfrac{\cosh(2\tau) - 1}{\sinh(2\tau) - 2 \tau} \, p_0
\end{equation}
```
#### $n$ Degree-of-Freedom Autonomous Hamiltonian Systems
The above calculations for one DoF are easily generalized to $n$ degrees-of-freedom. We begin with a Hamiltonian of the form:
```{math}
---
label:
---
\begin{equation}
H(q_1, \ldots, q_n, p_1, \ldots, p_n) = \sum_{i=1}^n \dfrac{p_i^2}{2m_i} + V(q_1, \ldots, q_n) \;, \quad (q_1, \ldots, q_n, p_1, \ldots, p_n) \in \mathbb{R}^{2n}.
\end{equation}
```
The integrand for the action integral is the following:
```{math}
---
label:
---
\begin{equation}
p_1dq_1 + \cdots + p_n dq_n \quad , \quad p_i\equiv m_i \frac{dq_i}{dt} \;, \quad i \in \lbrace 1,\ldots,n \rbrace
\end{equation}
```
As above, using the chain rule and the definition of the momentum, we get:
```{math}
---
label:
---
\begin{equation}
p_1dq_1 + \cdots + p_n dq_n = \sum_{i=1}^n p_i \frac{dq_i}{dt} dt = \sum_{i=1}^n \dfrac{p_i^2}{m_i} dt
\end{equation}
```
where the quantity $\sum_{i=1}^n \dfrac{p_i^2}{m_i}$ is twice the kinetic energy and is the vis-viva in the $n$ degree-of-freedom setting. We can write $p_1dq_1 + \cdots + p_n dq_n $ slightly differently using Hamilton's equations,
```{math}
---
label:
---
\begin{equation}
\sum_{i=1}^n \frac{p_i ^2}{2m_i} = 2 (H - V(q_1, \ldots, q_n) )
\end{equation}
```
from which it follows that
```{math}
---
label:
---
\begin{equation}
p_1dq_1 + \cdots + p_n dq_n = \sum_{i=1}^n \frac{p_i ^2}{2m_i} dt = 2 (H-V(q_1, \ldots, q_n)) dt
\end{equation}
```
## Variable Integration Time Lagrangian Descriptors
At this point, we would like to discuss the issues that might arise from the definitions of LDs provided in Eqs. {eq}`eq:M_function` and {eq}`eq:Mp_function` when they are applied to analyze the dynamics in open Hamiltonian systems, that is, those for which phase space dynamics occurs in unbounded energy hypersurfaces. Notice that in both definitions, all the initial conditions considered by the method are integrated forward and backward for the same time $\tau$. Recent studies have revealed {cite}`junginger2017chemical,naik2019b,GG2020a` issues with trajectories that escape to infinity in finite time or at an increasing rate. The trajectories that show this behavior will give NaN (not-a-number) values in the LD scalar field, hiding some regions of the phase space, and therefore obscuring the detection of invariant manifolds. In order to circumvent this problem we explain here the approach that has been recently adopted in the literature {cite}`junginger2017chemical,naik2019b,GG2020a` known as variable integration time Lagrangian descriptors. In this methodology, LDs at any initial condition are calculated for a fixed initial integration time $\tau_0$ or until the trajectory corresponding to that initial condition leaves a certain phase space region $\mathcal{R}$ that we call the *interaction region*, whichever happens first. Therefore the total integration time depends on the initial conditions, that is $\tau(\mathbf{x}_0)$. In this variable-time formulation, given a fixed integration time $\tau_0 > 0$, the $p$-norm definition of LDs with $p \in (0,1]$ will take the form:
```{math}
---
label: eq:Mp_vt
---
\begin{equation}
\mathcal{L}_p(\mathbf{x}_{0},t_0,\tau_0) = \int^{t_0 + \tau^{+}_{\mathbf{x}_0}}_{t_0 - \tau^{-}_{\mathbf{x}_0}} \sum_{k=1}^{n} |f_{k}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt = \mathcal{L}^{f}_p(\mathbf{x}_{0},t_0,\tau) + \mathcal{L}^{b}_p(\mathbf{x}_{0},t_0,\tau)
\label{eq:Mp_vt}
\end{equation}
```
where the total integration time used for each initial condition is defined as:
```{math}
\begin{equation*}
\tau^{\pm}_{\mathbf{x}_{0}}(\tau_0,\mathcal{R}) = \min \left\lbrace \tau_0 \, , \, |t^{\pm}| \right\rbrace \; ,
\end{equation*}
```
and $t^{+}$, $t^{-}$ represent the times for which the trajectory leaves the interaction region $\mathcal{R}$ in forward and backward time respectively.
It is important to highlight that the variable time integration LD has also the capability of capturing the locations of the stable and unstable manifolds present in the phase space slice used for the computation, and it will do so at points where the LD values vary significantly. Moreover, KAM tori will also be detected by the contour values of the time-averaged LD. Therefore, the variable integration time LDs provides us with a suitable methodology to study the phase space structures that characterize escaping dynamics in open Hamiltonians, since it avoids the issue of trajectories escaping to infinity very fast. It is important to remark here that this alternative approach for computing LDs can be adapted to other definitions of the method, where a different positive and bounded function is integrated along the trajectories of the dynamical system. For example, going back to the arclength definition of LDs, the variable integration time strategy would yield the formulation:
```{math}
---
label: eq:M_vt
---
\begin{equation}
\mathcal{L}(\mathbf{x}_{0},t_0,\tau_0) = \int^{t_0 + \tau^{+}_{\mathbf{x}_0}}_{t_0 - \tau^{-}_{\mathbf{x}_0}} \Vert \mathbf{f}(\mathbf{x}(t;\mathbf{x}_0),t) \Vert \, dt
\label{eq:M_vt}
\end{equation}
```
## Examples
### The Duffing Oscillator
In the next example, we illustrate how the arclength and the function $M$ LDs capture the stable and unstable manifolds that determine the phase portrait of the forced and undamped Duffing oscillator. The Duffing equation arises when studying the motion of a particle on a line, i.e. a one DoF system, subjected to the influence of a symmetric double well potential and an external forcing. The second order ODE that describes this oscillator is given by:
```{math}
---
label:
---
\begin{equation}
\ddot{x} + x^3 - x = \varepsilon f(t)
\end{equation}
```
where $\varepsilon$ represents the strength of the forcing term $f(t)$, and we choose for this example a sinusoidal force $f(t) = \sin(\omega t + \phi)$, where $\omega$ the angular frequency and $\phi$ the phase of the forcing. Reformulated using a Hamiltonian function $H$, this system can be written as:
```{math}
---
label:
---
\begin{equation}
H(x,y) = \dfrac{1}{2} y^2 + \dfrac{1}{4} x^4 - \dfrac{1}{2} x^2 - \varepsilon f(t) x \quad \Leftrightarrow \quad
\begin{cases}
\dot{x} = y \\
\dot{y} = x - x^3 + \varepsilon f(t) \\
\end{cases}
\end{equation}
```
In the autonomous case, i.e. $\varepsilon = 0$, the system has three equilibrium points: a saddle located at the origin and two diametrically opposed centers at the points $(\pm 1,0)$. The stable and unstable manifolds that emerge from the saddle point form two homoclininc orbits in the form of a figure eight around the two center equilibria:
```{math}
---
label: eq:duff_homocMani
---
\begin{equation}
\mathcal{W}^{s} = \mathcal{W}^{u} = \left\{(x,y) \in \mathbb{R}^2 \; \Big| \; 2y^2 + x^4 - 2x^2 = 0 \right\}
\label{eq:duff_homocMani}
\end{equation}
```
```{figure} figures/duffing_tau_2.png
---
name:
---
Phase portrait of the autonomous and undamped Duffing oscillator obtained by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` . A) LDs with $\tau = 2$
```
```{figure} figures/duffing_tau_10.png
---
name:
---
Phase portrait of the autonomous and undamped Duffing oscillator obtained by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` . B) LDs with $\tau = 10$
```
```{figure} figures/duffing_maniDetect.png
---
name: fig:duffing1_lds
---
Phase portrait of the autonomous and undamped Duffing oscillator obtained by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` . C) Value of LDs along the line $y = 0.5$ depicted in panel B) illustrating how the method detects the stable and unstable manifolds at points where the scalar field changes abruptly.
```
\label{fig:duffing1_lds}
\caption{Phase portrait of the autonomous and undamped Duffing oscillator obtained by applying the arclength definition of LDs in Eq. {eq}`eq:M_function`. A) LDs with $\tau = 2$; B) LDs with $\tau = 10$; C) Value of LDs along the line $y = 0.5$ depicted in panel B) illustrating how the method detects the stable and unstable manifolds at points where the scalar field changes abruptly.}
We move on to compute LDs for the forced Duffing oscillator. In this situation, the vector field is time-dependent and thus the dynamical system is nonautonomous. The consequence is that the homoclinic connection breaks up and the stable and unstable manifolds intersect, forming an intricate tangle that gives rise to chaos. We illustrate this phenomenon by computing LDs with $\tau = 10$ to reconstruct the phase portrait at the initial time $t_0 = 0$. For the forcing, we use a perturbation strength $\varepsilon = 0.1$, an angular frequency of $\omega = 1$ and a phase $\phi = 0$. This result is shown in {numref}`fig:duffing2_lds` C), and we also depict the forward $(\mathcal{L}^f)$ and backward $(\mathcal{L}^b)$ contributions of LDs in {numref}`fig:duffing2_lds` A) and B) respectively, demonstrating that the method can be used to recover the stable and unstable manifolds separately. Furthermore, by taking the value of LDs along the line $y = 0.5$, the location of the invariant manifolds are highlighted at points corresponding to sharp changes (and local minima) in the scalar field values of LDs.
```{figure} figures/duffing_stbl_tau_10_pert_01.png
---
name:
---
Phase portrait of the nonautonomous and undamped Duffing oscillator obtained at time $t = 0$ by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` with an integration time $\tau = 10$. A) Forward LDs detect stable manifolds
```
```{figure} figures/duffing_unstbl_tau_10_pert_01.png
---
name:
---
Phase portrait of the nonautonomous and undamped Duffing oscillator obtained at time $t = 0$ by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` with an integration time $\tau = 10$. B) Backward LDs highlight unstable manifolds of the system
```
```{figure} figures/duffing_tau_10_pert_01.png
---
name:
---
Phase portrait of the nonautonomous and undamped Duffing oscillator obtained at time $t = 0$ by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` with an integration time $\tau = 10$. C) Total LDs (forward $+$ backward) showing that all invariant manifolds are recovered simultaneously.
```
```{figure} figures/duffing_maniDetect_pert_01.png
---
name: fig:duffing2_lds
---
Phase portrait of the nonautonomous and undamped Duffing oscillator obtained at time $t = 0$ by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` with an integration time $\tau = 10$. D) Value taken by LDs along the line $y = 0.5$ in panel C) to illustrate how the method detects the stable and unstable manifolds at points where the scalar field changes abruptly.
```
\label{fig:duffing2_lds}
\caption{Phase portrait of the nonautonomous and undamped Duffing oscillator obtained at time $t = 0$ by applying the arclength definition of LDs in Eq. {eq}`eq:M_function` with an integration time $\tau = 10$. A) Forward LDs detect stable manifolds; B) Backward LDs highlight unstable manifolds of the system; C) Total LDs (forward $+$ backward) showing that all invariant manifolds are recovered simultaneously. D) Value taken by LDs along the line $y = 0.5$ in panel C) to illustrate how the method detects the stable and unstable manifolds at points where the scalar field changes abruptly.}
### The linear Hamiltonian saddle with 2 DoF
Consider the two DoF system given by the linear quadratic Hamiltonian associated to an index-1 saddle at the origin. This Hamiltonian and the equations of motion are given by the expressions:
```{math}
---
label: eq:index1_Ham
---
\begin{eqnarray}
H(x,y,p_x,p_y) = \dfrac{\lambda}{2}\left(p_x^2 - x^2\right) + \dfrac{\omega}{2} \left(p_y^2 + y^2 \right) \quad,\quad \begin{cases}
\dot{x} = \lambda \, p_x \\
\dot{p}_{x} = \lambda \, x \\
\dot{y} = \omega \, p_y \\
\dot{p}_{y} = -\omega \, y
\end{cases}
\label{eq:index1_Ham}
\end{eqnarray}
```
```{figure} figures/LD_p_05_Saddle_tau_10.png
---
name:
---
Phase portrait in the saddle space of the linear Hamiltonian given in Eq. {eq}`eq:index1_Ham`. A) Application of the $p$-norm definition of LDs in Eq. {eq}`eq:Mp_function` using $p = 1/2$ with $\tau = 10$.
```
```{figure} figures/manifolds_Saddle_tau_10.png
---
name:
---
B) Stable (blue) and unstable (red) invariant manifolds of the unstable periodic orbit at the origin extracted from the gradient of the $M_p$ function.
```
```{figure} figures/detectMani_Saddle_tau_10.png
---
name: eq:index1_lds
---
C) Value of LDs along the line $p_x = 0.5$ depicted in panel A) to illustrate how the method detects the stable and unstable manifolds at points where the scalar field is singular or non-differentiable and attains a local minimum.
```
\label{fig:index1_lds}
\caption{Phase portrait in the saddle space of the linear Hamiltonian given in Eq. {eq}`eq:index1_Ham` . A) Application of the $p$-norm definition of LDs in Eq. {eq}`eq:Mp_function` using $p = 1/2$ with $\tau = 10$; B) Stable (blue) and unstable (red) invariant manifolds of the unstable periodic orbit at the origin extracted from the gradient of the $M_p$ function; C) Value of LDs along the line $p_x = 0.5$ depicted in panel A) to illustrate how the method detects the stable and unstable manifolds at points where the scalar field is singular or non-differentiable and attains a local minimum.}
### The Cubic Potential
In order to illustrate the issues encountered by the fixed integration time LDs and how the variable integration approach resolves them, we apply the method to a basic one degree-of-freedom Hamiltonian known as the "fish potential", which is given by the formula:
```{math}
---
label: eq:fish_Ham
---
\begin{equation}
H = \dfrac{1}{2} p_x^2 + \dfrac{1}{2} x^2 + \dfrac{1}{3} x^3 \quad \Leftrightarrow \quad
\begin{cases}
\dot{x} = p_x \\
\dot{p}_{x} = - x - x^2
\end{cases} \;.
\label{eq:fish_Ham}
\end{equation}
```
```{figure} figures/LDfixTime_p_05_fishPot_tau_3.png
---
name:
---
Phase portrait of the "fish potential" Hamiltonian in Eq. {eq}`eq:fish_Ham` revealed by the $p$-norm LDs with $p = 1/2$. A) Fixed-time integration LDs in Eq. {eq}`eq:Mp_function` with $\tau = 3$
```
```{figure} figures/LD_p_05_fishPot_tau_8.png
---
name:
---
Phase portrait of the "fish potential" Hamiltonian in Eq. {eq}`eq:fish_Ham` revealed by the $p$-norm LDs with $p = 1/2$. B) Variable-time integration definition of LDs in Eq. {eq}`eq:Mp_vt` with $\tau = 8$
```
```{figure} figures/manifolds_fishPot_tau_8.png
---
name: eq:fish_lds
---
Phase portrait of the "fish potential" Hamiltonian in Eq. {eq}`eq:fish_Ham` revealed by the $p$-norm LDs with $p = 1/2$. C) Invariant stable (blue) and unstable (red) manifolds of the saddle fixed point extracted from the gradient of the variable time $M_p$ function.
```
\label{fig:fish_lds}
\caption{Phase portrait of the "fish potential" Hamiltonian in Eq. {eq}`eq:fish_Ham` revealed by the $p$-norm LDs with $p = 1/2$. A) Fixed-time integration LDs in Eq. {eq}`eq:Mp_function` with $\tau = 3$; B) Variable-time integration definition of LDs in Eq. {eq}`eq:Mp_vt` with $\tau = 8$; C) Invariant stable (blue) and unstable (red) manifolds of the saddle fixed point extracted from the gradient of the variable time $M_p$ function.}
### The Hénon-Heiles Hamiltonian System
We continue illustrating how to apply the method of Lagrangian descriptors to unveil the dynamical skeleton in systems with a high-dimensional phase space by applying this tool to a hallmark Hamiltonian of nonlinear dynamics, the Hénon-Heiles Hamiltonian. This model was introduced in 1964 to study the motion of stars in galaxies {cite}`henon1964` and is described by:
```{math}
---
label: eq:henon_system
---
\begin{equation}
H = \dfrac{1}{2} \left(p_x^2 + p_y^2\right) + \dfrac{1}{2}\left(x^2 + y^2\right) + x^2y - \dfrac{1}{3} y^3 \quad \Leftrightarrow \quad
\begin{cases}
\dot{x} = p_x \\
\dot{p}_{x} = - x - 2xy \\
\dot{y} = p_y \\
\dot{p}_{y} = - y - x^2 + y^2
\end{cases} \;.
\label{eq:henon_system}
\end{equation}
```
which has four equilibrium points: one minimum located at the origin and three saddle-center points at $(0,1)$ and $(\pm \sqrt{3}/2,-1/2)$. The potential energy surface is
$${
V(x,y) = x^2/2 + y^2/2 + x^2y - y^3/3
}$$
which has a $\pi/3$ rotational symmetry and is characterized by a central scattering region about the origin and three escape channels, see {numref}`fig:henonHeiles_pes` below for details.
In order to analyze the phase space of the Hénon-Heiles Hamiltonian by means of the variable integration time LDs, we fix an energy $H = H_0$ of the system and choose an interaction region $\mathcal{R}$ defined in configuration space by a circle of radius $15$ centered at the origin. For our analysis we consider the following phase space slices:
```{math}
---
label: eq:psos
---
\begin{eqnarray}
\mathcal{U}^{+}_{y,p_y} & = \left\{(x,y,p_x,p_y) \in \mathbb{R}^4 \;|\; H = H_0 \;,\; x = 0 \;,\; p_x > 0\right\} \\[.1cm]
\mathcal{V}^{+}_{x,p_x} &= \left\{(x,y,p_x,p_y) \in \mathbb{R}^4 \;|\; H = H_0 \;,\; y = 0 \;,\; p_y > 0\right\}
\label{eq:psos}
\end{eqnarray}
```
```{figure} figures/henonheiles_pot.png
---
name:
---
Potential energy surface for the Hénon-Heiles system.
```
```{figure} figures/hen_conts.png
---
name: fig:henonHeiles_pes
---
Potential energy surface projected onto XY plane for the Hénon-Heiles system.
```
\label{fig:henonHeiles_pes}
\caption{Potential energy surface for the Hénon-Heiles system.}
```{figure} figures/LDs_Henon_tau_50_x_0_E_1div12.png
---
name:
---
Phase space structures of the Hénon-Heiles Hamiltonian as revealed by the $p$-norm variable integration time LDs with $p = 1/2$. A) LDs computed for $\tau = 50$ in the SOS $\mathcal{U}^{+}_{y,p_y}$ with energy $H = 1/12$
```
```{figure} figures/Mani_Henon_tau_50_x_0_E_1div12.png
---
name:
---
Gradient of the LD function showing stable and unstable manifold intersections in blue and red respectively.
```
```{figure} figures/LDs_Henon_tau_10_x_0_E_1div3.png
---
name:
---
Phase space structures of the Hénon-Heiles Hamiltonian as revealed by the $p$-norm variable integration time LDs with $p = 1/2$. B) LDs for $\tau = 10$ in the SOS $\mathcal{U}^{+}_{y,p_y}$ with energy $H = 1/3$
```
```{figure} figures/Mani_Henon_tau_10_x_0_E_1div3.png
---
name:
---
Gradient of the LD function showing stable and unstable manifold intersections in blue and red respectively.
```
```{figure} figures/LDs_Henon_tau_10_y_0_E_1div3.png
---
name:
---
Phase space structures of the Hénon-Heiles Hamiltonian as revealed by the $p$-norm variable integration time LDs with $p = 1/2$. C) LDs for $\tau = 10$ in the SOS $\mathcal{V}^{+}_{x,p_x}$ with energy $H = 1/3$
```
```{figure} figures/Mani_Henon_tau_10_y_0_E_1div3.png
---
name: fig:henonHeiles_lds
---
Gradient of the LD function showing stable and unstable manifold intersections in blue and red respectively.
```
\label{fig:henonHeiles_lds}
\caption{Phase space structures of the Hénon-Heiles Hamiltonian as revealed by the $p$-norm variable integration time LDs with $p = 1/2$. A) LDs computed for $\tau = 50$ in the SOS $\mathcal{U}^{+}_{y,p_y}$ with energy $H = 1/12$; C) LDs for $\tau = 10$ in the SOS $\mathcal{U}^{+}_{y,p_y}$ with energy $H = 1/3$; E) LDs for $\tau = 10$ in the SOS $\mathcal{V}^{+}_{x,p_x}$ with energy $H = 1/3$;. In the right panels we have extracted the invariant stable (blue) and unstable (red) manifolds from the gradient of LDs.}
## Stochastic Lagrangian Descriptors
Lagrangian descriptors were extended to stochastic dynamical systems in {cite}`balibrea2016lagrangian`, and our discussion here is taken from this source, where the reader can also find more details. A basic introduction to stochastic differential equations is in the book {cite}`Oksendal2003`.
(sec:pc)=
### Preliminary concepts
Lagrangian descriptors are a trajectory based diagnostic. Therefore we first need to develop the concepts required to
describe the nature of trajectories of stochastic differential equations (SDEs). We begin by
considering a general system of SDEs expressed in differential form as follows:
```{math}
---
label: eq:SDE
---
\begin{equation}
\label{eq:SDE}
dX_{t} = b(X_{t},t)dt + \sigma (X_{t},t)dW_{t}, \quad t \in \mathbb{R},
\end{equation}
```
where $b(\cdot) \in C^{1}(\mathbb{R}^{n}\times \mathbb{R})$ is the deterministic part, $\sigma (\cdot) \in C^{1}(\mathbb{R}^{n}\times \mathbb{R})$ is the random forcing, $W_{t}$ is a Wiener process (also referred to as Brownian motion) whose definition we give later, and $X_{t}$ is the solution of the equation. All these functions take values in $\mathbb{R}^{n}$.
As the notion of solution of a SDE is closely related with the Wiener process, we state what is meant by $W(\cdot )$. This definition is given in {cite}`duan15`, and this reference serves to provide the background for all of the notions in this section. Also, throughout we will use $\Omega$ to denote the probability space where the Wiener process is defined.
(def:Wiener)=
__Definition__ _Wiener/Brownian process_
A real valued stochastic Wiener or Brownian process $W(\cdot)$ is a stochastic process defined in a probability space $(\Omega , {\cal F},{\cal P})$ which satisfies
1. $W_0 = 0$ (standard Brownian motion),
2. $W_t - W_s$ follows a Normal distribution $N(0,t-s)$ for all $t\geq s \geq 0$,
3. for all time $0 < t_1 < t_2 < ... < t_n$, the random variables $W_{t_1}, W_{t_2} - W_{t_1},... , W_{t_n} - W_{t_{n-1}}$ are independent (independent increments).
Moreover, $W(\cdot)$ is a real valued two-sided Wiener process if conditions (ii) and (iii) change into
2. $W_t - W_s$ follows a Normal distribution $N(0,|t-s|)$ for all $t, s \in \mathbb{R}$,
3. for all time $t_1 , t_2 , ... , t_{2n} \in \mathbb{R}$ such that the intervals $\lbrace (t_{2i-1},t_{2i}) \rbrace_{i=1}^{n}$ are non-intersecting between them (_Note_), the random variables $W_{t_1}-W_{t_2}, W_{t_3} - W_{t_4},... , W_{t_{2n-1}} - W_{t_{2n}}$ are independent.
````{margin}
```{note}
With the notation $(t_{2i-1},t_{2i})$ we refer to the interval of points between the values $t_{2i-1}$ and $t_{2i}$, regardless the order of the two extreme values. Also with the assertion we impose that every pair of intervals of the family $\lbrace (t_{2i-1},t_{2i}) \rbrace_{i=1}^{n}$ has an empty intersection, or alternatively that the union $\bigcup_{i=1}^{n}(t_{2i-1},t_{2i})$ is conformed by $n$ distinct intervals over $\mathbb{R}$.
```
````
This method of Lagrangian descriptors has been developed for deterministic differential equations whose temporal domain is $\mathbb{R}$. In this sense it is natural to work with two-sided solutions as well as two-sided Wiener processes. Henceforth, every Wiener process $W(\cdot )$ considered in the this article will be of this form.
Given that any Wiener process $W(\cdot )$ is a stochastic process, by definition this is a family of random real variables $\lbrace W_{t}, t\in \mathbb{R} \rbrace$ in such a way that for each $\omega \in \Omega$ there exists a mapping, $ t \longmapsto W_{t}(\omega )$, known as the trajectory of a Wiener process.
Analogously to the Wiener process, the solution $X_{t}$ of the SDE {eq}`eq:SDE` is also a stochastic process. In particular, it is a family of random variables $\lbrace X_{t}, t\in \mathbb{R} \rbrace$ such that for each $\omega \in \Omega$, the trajectory of $X_{t}$ satisfies
```{math}
---
label: eq:Xt
---
\begin{equation}
\label{Xt}
t \longmapsto X_{t}(\omega ) = X_{0}(\omega ) + \int_{0}^{t} b(X_{s}(\omega ), s)ds + \int_{0}^{t} \sigma (X_{s}(\omega ), s)dW_{s}(\omega ),
\end{equation}
```
where $X_{0}:\Omega \rightarrow \mathbb{R}^{n}$ is the initial condition. In addition, as $b(\cdot)$ and $\sigma(\cdot)$ are smooth functions, they are locally Lipschitz and this leads to existence and pathwise uniqueness of a local, continuous solution (see {cite}`duan15`). That is if any two stochastic processes $X^1$ and $X^2$ are local solutions in time of SDE {eq}`eq:SDE` , then $X^1_t(\omega) = X^2_t(\omega)$ over a time interval $t \in (t_{i},t_{f})$ and for almost every $\omega \in \Omega$.
At each instant of time $t$, the deterministic integral $\int_{0}^{t} b(X_{s}(\omega ))ds$ is defined by the usual Riemann integration scheme since $b$ is assumed to be a differentiable function. However, the stochastic integral term is chosen to be defined by the It\^{o} integral scheme:
```{math}
---
label: eq:Ito
---
\begin{equation}
\label{Ito}
\int_{0}^{t} \sigma (X_{s}(\omega ),s)dW_{s}(\omega ) = \lim_{N \rightarrow \infty} \sum_{i=0}^{N-1} \sigma (X_{i\frac{t}{N}}(\omega ), it/N) \cdot \left[ W_{(i+1)\frac{t}{N}}(\omega ) - W_{i\frac{t}{N}}(\omega ) \right].
\end{equation}
```
This scheme will also facilitate the implementation of a numerical method for computing approximations for the solution $X_{t}$ in the next section.
Once the notion of solution, $X_{t}$, of a SDE {eq}`eq:SDE` is established, it is natural to ask if the same notions and ideas familiar from the study of deterministic differential equations from the dynamical systems point of view are still valid for SDEs. In particular, we want to consider the notion of hyperbolic trajectory and its stable and unstable manifolds in the context of SDEs. We also want to consider how such notions would manifest themselves in the context of *phase space transport* for SDEs, and the stochastic Lagrangian descriptor will play a key role in considering these questions from a practical point of view.
We first discuss the notion of an invariant set for a SDE.
In the deterministic case the simplest possible invariant set is a single trajectory of the differential equation. More precisely, it is the set of points through which a solution passes. Building on this construction, an invariant set is a collection of trajectories of different solutions. This is the most basic way to characterize the invariant sets with respect to a deterministic differential equation of the form
```{math}
---
label: eq:deterministic_system
---
\begin{equation}
\label{deterministic_system}
\dot{x} = f(x,t), \quad x \in \mathbb{R}^{n}, \quad t \in \mathbb{R}.
\end{equation}
```
For verifying the invariance of such sets the solution mapping generated by the vector field is used. For deterministic autonomous systems these are referred to as *flows* (or "dynamical systems") and for deterministic nonautonomous systems they are referred to as *processes*. The formal definitions can be found in {cite}`kloe11`.
A similar notion of solution mapping for SDEs is introduced using the notion of a random dynamical system $\varphi$ (henceforth referred to as RDS) in the context of SDEs. This function $\varphi$ is also a solution mapping of a SDE that satisfies several conditions, but compared with the solution mappings in the deterministic case, this RDS depends on an extra argument which is the random variable $\omega \in \Omega$. Furthermore the random variable $\omega$ evolves with respect to $t$ by means of a dynamical system $\lbrace \theta_{t} \rbrace_{t \in \mathbb{R}}$ defined over the probability space $\Omega$.
(rds)=
__Definition__ _Random Dynamical System_
````{margin}
```{note}
Given the probability measure $\mathcal{P}$ associated with the space $(\Omega , \mathcal{F},\mathcal{P})$, this remains invariant under the dynamical system $\lbrace \theta_{t} \rbrace_{t \in \mathbb{R}}$. Formally, $\theta_{t}\mathcal{P} = \mathcal{P}$ for every $t \in \mathbb{R}$. This statement means that $\mathcal{P}(B)=\mathcal{P}(\theta_{t}(B))$ for every $t \in \mathbb{R}$ and every subset $B \in \mathcal{F}$. Indeed for any dynamical system $\lbrace \theta_{t} \rbrace_{t \in \mathbb{R}}$ defined over the same probability space $\Omega$ as a Wiener process $W(\cdot )$, we have the equality $W_{s}(\theta_{t}\omega ) = W_{t+s}(\omega )-W_{t}(\omega )$ which implies that $dW_{s}(\theta_{t}\omega ) = dW_{t+s}(\omega )$ for every $s,t \in \mathbb{R}$ (see {cite}`duan15` for a detailed explanation).
```
````
Let $\lbrace \theta_{t} \rbrace_{t \in \mathbb{R}}$ be a measure-preserving (_Note_) dynamical system defined over $\Omega$, and let $\varphi : \mathbb{R} \times \Omega \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ be a measurable mapping such that $(t,\cdot , x) \mapsto \varphi (t,\omega ,x)$ is continuous for all $\omega \in \Omega$, and the family of functions $\lbrace \varphi (t,\omega ,\cdot ): \mathbb{R}^{n} \rightarrow \mathbb{R}^{n} \rbrace$ has the cocycle property:
$$ \varphi (0,\omega ,x)=x \quad \text{and} \quad \varphi (t+s,\omega ,x) = \varphi(t,\theta_{s}\omega,\varphi (s,\omega ,x)) \quad \text{for all } t,s \in \mathbb{R}, \text{ } x \in \mathbb{R}^{n} \text{ and } \omega \in \Omega .$$
Then the mapping $\varphi$ is a random dynamical system with respect to the stochastic differential equation
\begin{equation*}
dX_{t} = b(X_{t})dt + \sigma (X_{t})dW_{t}
\end{equation*}
if $\varphi (t,\omega ,x)$ is a solution of the equation.
Analogous to the deterministic case, the definition of invariance with respect to a SDE can be characterized in terms of a RDS. This is an important topic in our consideration of stochastic Lagrangian descriptors. Now we introduce an example of a SDE for which the analytical expression of the RDS is obtained. This will be a benchmark example in our development of stochastic Lagrangian descriptors their relation to stochastic invariant manifolds.
**Noisy saddle point**
For the stochastic differential equation
```{math}
---
label: eq:noisy_saddle
---
\begin{equation}
\label{noisy_saddle}
\begin{cases} dX_{t} = X_{t}dt + dW_{t}^{1} \\ dY_{t} = -Y_{t}dt + dW_{t}^{2} \end{cases}
\end{equation}
```
where $W_{t}^{1}$ and $W_{t}^{2}$ are two different Wiener processes, the solutions take the expressions
```{math}
---
label: eq:noisy_saddle_solutions
---
\begin{equation}
\label{noisy_saddle_solutions}
X_{t} = e^{t} \left( X_{0}(\omega ) + \int_{0}^{t}e^{-s}dW_{s}^{1}(\omega ) \right) \quad , \quad Y_{t} = e^{-t} \left( Y_{0}(\omega ) + \int_{0}^{t}e^{s}dW_{s}^{2}(\omega ) \right)
\end{equation}
```
and therefore the random dynamical system $\varphi$ takes the form
```{math}
---
label: eq:noisy_saddle_RDS
---
\begin{equation}
\label{noisy_saddle_RDS}
\begin{array}{ccccccccc}
\varphi : & & \mathbb{R} \times \Omega \times \mathbb{R}^{2} & & \longrightarrow & & \mathbb{R}^{2} & & \\ & & (t,\omega ,(x,y)) & & \longmapsto & & \left( \varphi_{1}(t,\omega ,x),\varphi_{2}(t,\omega ,y) \right) & = & \left( e^{t} \left( x + \int_{0}^{t}e^{-s}dW_{s}^{1}(\omega ) \right) , e^{-t} \left( y + \int_{0}^{t}e^{s}dW_{s}^{2}(\omega ) \right) \right) . \end{array}
\end{equation}
```
Notice that this last definition {ref}`Random Dynamical System<rds>` is expressed in terms of SDEs with time-independent coefficients $b,\sigma$. For more general SDEs a definition of nonautonomous RDS is developed in {cite}`duan15`. However, for the remaining examples considered in this article we make use of the already given definition of RDS.
Once we have the notion of RDS, it can be used to describe and detect geometrical structures and determine their influence on the dynamics of trajectories. Specifically, in clear analogy with the deterministic case, we focus on those trajectories whose expressions do not depend explicitly on time $t$, which are referred as *random fixed points*. Moreover, their stable and unstable manifolds, which may also depend on the random variable $\omega$, are also objects of interest due to their influence on the dynamical behavior of nearby trajectories. Both types of objects are invariant. Therefore we describe a characterization of invariant sets with respect to a SDE by means of an associated RDS.
(invariant_set)=
__Definition__ _Invariant Set_
A non empty collection $M : \Omega \rightarrow \mathcal{P}(\mathbb{R}^{n})$, where $M(\omega ) \subseteq \mathbb{R}^{n}$ is a closed subset for every $\omega \in \Omega$, is called an invariant set for a random dynamical system $\varphi$ if
```{math}
---
label: eq:invariance
---
\begin{equation}
\label{invariance}
\varphi (t,\omega ,M(\omega )) = M(\theta_{t}\omega ) \quad \text{for every } t \in \mathbb{R} \text{ and every } \omega \in \Omega.
\end{equation}
```
Again, we return to the noisy saddle {eq}`eq:noisy_saddle` , which is an example of a SDE for which several invariant sets can be easily characterized by means of its corresponding RDS.
__Noisy saddle point__
For the stochastic differential equations
```{math}
---
label:
---
\begin{equation}
\begin{cases} dX_{t} = X_{t}dt + dW_{t}^{1} \\ dY_{t} = -Y_{t}dt + dW_{t}^{2} \end{cases}
\end{equation}
```
where $W_{t}^{1}$ and $W_{t}^{2}$ are two different Wiener processes, the solution mapping $\varphi$ is given by the following expression
```{math}
---
label:
---
\begin{equation}
\begin{array}{ccccccccc}
\varphi : & & \mathbb{R} \times \Omega \times \mathbb{R}^{2} & & \longrightarrow & & \mathbb{R}^{2} & & \\
& & (t,\omega ,(x,y)) & & \longmapsto & & (\varphi_{1}(t,\omega ,x),\varphi_{2}(t,\omega ,y)) & = & \left( e^{t} \left( x + \int_{0}^{t}e^{-s}dW_{s}^{1}(\omega ) \right) , e^{-t} \left( y + \int_{0}^{t}e^{s}dW_{s}^{2}(\omega ) \right) \right) . \end{array}
\end{equation}
```
Notice that this is a decoupled random dynamical system. There exists a solution whose components do not depend on variable $t$ and are convergent for almost every $\omega \in \Omega$ as a consequence of the properties of Wiener processes (see {cite}`duan15`). This solution has the form:
$$\tilde{X}(\omega) = (\tilde{x}(\omega ),\tilde{y}(\omega )) = \left( - \int_{0}^{\infty}e^{-s}dW_{s}^{1}(\omega ) , \int_{-\infty}^{0}e^{s}dW_{s}^{2}(\omega ) \right) .$$
Actually, $\tilde{X}(\omega )$ is a solution because it satisfies the invariance property that we now verify:
```{math}
---
label: eq:invariance_x
---
\begin{equation}
\label{invariance_x}
\begin{array}{ccl}
\varphi_{1} (t,\omega ,\tilde{x}(\omega )) & = & \displaystyle{e^{t}\left( -\int_{0}^{+\infty}e^{-s}dW^{1}_{s}(\omega )
+ \int_{0}^{t}e^{-s}dW^{1}_{s}(\omega ) \right) } = \displaystyle{-\int_{t}^{+\infty}e^{-(s-t)}dW^{1}_{s}(\omega ) }\\
& = & \displaystyle{-\int_{0}^{+\infty}e^{-t'}dW^{1}_{t'+t}(\omega) = -
\int_{0}^{+\infty}e^{-t'}dW^{1}_{t'}(\theta_{t}\omega ) = \tilde{x}(\theta_{t}\omega )} \quad \text{by means of }
t'=s-t,
\end{array}
\end{equation}
```
```{math}
---
label: eq:invariance_y
---
\begin{equation}
\label{invariance_y}
\begin{array}{lll}
\varphi_{2} (t,\omega ,\tilde{y}(\omega )) & = & \displaystyle{e^{-t}\left( \int_{-\infty}^{0}e^{s}dW^{2}_{s}(\omega ) +
\int_{0}^{t}e^{s}dW^{2}_{s}(\omega ) \right) = \int_{-\infty}^{t}e^{s-t}dW^{2}_{s}(\omega ) =
\int_{-\infty}^{0}e^{t'}dW^{2}_{t'+t}(\omega )} \\ & = & \displaystyle{
\int_{-\infty}^{0}e^{t'}dW^{2}_{t'}(\theta_{t}\omega ) = \tilde{y}(\theta_{t}\omega )} \quad \text{by means of } t'=s-t.
\end{array}
\end{equation}
```
This implies that $\varphi (t,\omega ,\tilde{X}(\omega )) = \tilde{X}(\theta_{t} \omega)$ for every $t \in \mathbb{R}$ and every $\omega \in \Omega$. Therefore $\tilde{X}(\omega )$ satisfies the invariance property {eq}`eq:invariance`. This conclusion comes from the fact that $\tilde{x}(\omega )$ and $\tilde{y}(\omega )$ are also invariant under the components $\varphi_{1}$ and $\varphi_{2}$, in case these are seen as separate RDSs defined over $\mathbb{R}$ (see {eq}`eq:invariance_x` and {ref}`eq:invariance_y`, respectively).
Due to its independence with respect to the time variable $t$, it is said that $\tilde{X}(\omega )$ is a random fixed point of the SDE {eq}`eq:noisy_saddle`, or more commonly a stationary orbit. As the trajectory of $\tilde{X}(\omega )$ (and separately its components $\tilde{x}(\omega )$ and $\tilde{y}(\omega )$) is proved to be an invariant set, it is straightforward to check that the two following subsets of $\mathbb{R}^{2}$,
$$\mathcal{S}(\omega ) = \lbrace (x,y) \in \mathbb{R}^{2} : x = \tilde{x}(\omega ) \rbrace \quad , \quad \mathcal{U}(\omega ) = \lbrace (x,y) \in \mathbb{R}^{2} : y = \tilde{y}(\omega ) \rbrace $$
are also invariant with respect to the RDS $\varphi$. Similarly to the deterministic setting, these are referred to as the stable and unstable manifolds of the stationary orbit respectively. Additionally, in order to prove the separating nature of these two manifolds and the stationary orbit with respect to their nearby trajectories, let's consider any other solution $(\overline{x}_{t},\overline{y}_{t})$ of the noisy saddle with initial conditions at time $t=0$,
```{math}
---
label:
---
\begin{equation}
\overline{x}_{0} = \tilde{x}(\omega ) + \epsilon_{1}(\omega ) , \quad \overline{y}_{0} = \tilde{y}(\omega ) + \epsilon_{2}(\omega ), \quad \text{being } \epsilon_{1}(\omega ), \epsilon_{2}(\omega ) \text{ two random variables.}
\end{equation}
```
If the corresponding RDS $\varphi$ is applied to compare the evolution of this solution $(\overline{x}_{t},\overline{y}_{t})$ and the stationary orbit, there arises an exponential dichotomy:
$$ (\overline{x}_{t},\overline{y}_{t}) - (\tilde{x}(\theta_{t}\omega ),\tilde{y}(\theta_{t}\omega )) = \varphi (t,\omega ,(\overline{x}_{0},\overline{y}_{0})) - \varphi (t,\omega ,(\tilde{x}(\omega ),\tilde{y}(\omega ))) $$
$$= \left( e^{t}\left[ \overline{x}_{0} + \int_{0}^{t}e^{-s}dW_{s}^{1}(\omega ) - \tilde{x}(\omega ) - \int_{0}^{t}e^{-s}dW_{s}^{1}(\omega ) \right] , e^{-t}\left[ \overline{y}_{0} + \int_{0}^{t}e^{s}dW_{s}^{2}(\omega ) - \tilde{y}(\omega ) - \int_{0}^{t}e^{s}dW_{s}^{2}(\omega ) \right] \right) $$
```{math}
---
label: eq:dichotomy
---
\begin{equation}
= \left( e^{t} \left( \tilde{x}(\omega )+\epsilon_{1}(\omega )-\tilde{x}(\omega ) \right) ,e^{-t} \left( \tilde{y}(\omega )+\epsilon_{2}(\omega )-\tilde{y}(\omega ) \right) \right) = \left( \epsilon_{1}(\omega )e^{t},\epsilon_{2}(\omega )e^{-t} \right) .
\end{equation}
```
Considering that $(\overline{x}_{t},\overline{y}_{t})$ is different from $(\tilde{x}(\omega ),\tilde{y}(\omega ))$ then one of the two cases $\epsilon_{1} \not \equiv 0$ or $\epsilon_{2} \not \equiv 0$ holds, let say $\epsilon_{1} \not = 0$ or $\epsilon_{2} \not = 0$ for almost every $\omega \in \Omega$. In the first case, the distance between both trajectories $(\overline{x}_{t},\overline{y}_{t})$ and $(\tilde{x},\tilde{y})$ increases at an exponential rate in positive time:
```{math}
---
label: eq:eq_1
---
\begin{equation}
\label{eq:eq_1}
\Vert \varphi (t,\omega ,(\overline{x}_{t},\overline{y}_{t}))-\varphi (t,\omega ,(\tilde{x},\tilde{y})) \Vert \geq |\epsilon_{1}(\omega )e^{t} | \longrightarrow + \infty \quad \text{when } t \rightarrow + \infty \text{ and for a.e. } \omega \in \Omega .
\end{equation}
```
Similarly to this case, when the second option holds the distance between both trajectories increases at a exponential rate in negative time. It does not matter how close the initial condition $(\overline{x}_{0},\overline{x}_{0})$ is from $(\tilde{x}(\omega ),\tilde{y}(\omega ))$ at the initial time $t=0$. Actually this same exponentially growing separation can be achieved for any other initial time $t \not = 0$. Following these arguments, one can check that the two manifolds $\mathcal{S}(\omega )$ and $\mathcal{U}(\omega )$ also exhibit this same separating behaviour as the stationary orbit. Moreover, we remark that almost surely the stationary orbit is the only solution whose components are bounded.
These facts highlight the distinguished nature of the stationary orbit (and its manifolds) in the sense that it is an isolated solution from the others. Apart from the fact that $(\tilde{x},\tilde{y})$ "moves" in a bounded domain for every $t \in \mathbb{R}$, any other trajectory eventually passing through an arbitrary neighborhood of $(\tilde{x},\tilde{y})$ at any given instant of time $t$, leaves the neighborhood and then separates from the stationary orbit in either positive or negative time. Specifically, this separation rate is exponential for the noisy saddle, just in the same way as for the deterministic saddle.
These features are also observed for the trajectories within the stable and unstable manifolds of the stationary orbit, but in a more restrictive manner than $(\tilde{x},\tilde{y})$. Taking for instance an arbitrary trajectory $(x^{s},y^{s})$ located at $\mathcal{S}(\omega )$ for every $t \in \mathbb{R}$, its first component $x^{s}_{t}=\tilde{x}(\omega )$ remains bounded for almost every $\omega \in \Omega$. In contrast, any other solution passing arbitrarily closed to $(x^{s},y^{s})$ neither being part of $\mathcal{S}(\omega )$ nor being the stationary orbit, satisfies the previous inequality {eq}`eq:eq_1` and therefore separates from $\mathcal{S}(\omega )$ at an exponential rate for increasing time. With this framework we can now introduce the formal definitions of stationary orbit and invariant manifold.
(stationary_orbit)=
__Definition__ _Stationary Orbit_
A random variable $\tilde{X}: \Omega \rightarrow \mathbb{R}^{n}$ is called a stationary orbit (random fixed point) for a random dynamical system $\varphi$ if
$$\varphi(t, \omega, \tilde{X}(\omega)) = \tilde{X}(\theta_t\omega), \quad \text{for every } t \in \mathbb{R} \text{ and every } \omega \in \Omega .$$
Obviously every stationary orbit $\tilde{X}(\omega )$ is an invariant set with respect to a RDS as it satisfies Definition {ref}`Invariant Set<invariant_set>`. Among several definitions of invariant manifolds given in the bibliography (for example {cite}`arno98`, {cite}`boxl89`, {cite}`duan15`), which have different formalisms but share the same philosophy, we choose the one given in {cite}`duan15` because it adapts to our example in a very direct way.
__Definition__
A random invariant set $M: \Omega \rightarrow \mathcal{P}(\mathbb{R}^{n})$ for a random dynamical system $\varphi$ is called a $C^{k}$-Lipschitz invariant manifold if it can be represented by a graph of a $C^{k}$ Lipschitz mapping ($k \geq 1$)
```{math}
\begin{equation*}
\gamma (\omega , \cdot ):H^{+} \to H^{-}, \quad \text{with direct sum decomposition } H^{+} \oplus H^{-} = \mathbb{R}^{n}
\end{equation*}
```
such that
```{math}
\begin{equation*}
\quad M(\omega ) = \lbrace x^{+} \oplus \gamma(\omega ,x^{+}) : x^{+} \in H^{+} \rbrace \quad \text{for every } \omega \in \Omega.
\end{equation*}
```
This is a very limited notion of invariant manifold as its formal definition requires the set to be represented by a Lipschitz graph. Anyway, it is consistent with the already established manifolds $\mathcal{S}(\omega )$ and $\mathcal{U}(\omega )$ as these can be represented as the graphs of two functions $\gamma_{s}$ and $\gamma_{u}$ respectively,
```{math}
\begin{align*}
\begin{array}{ccccc}
\gamma_{s} (\omega , \cdot ) & : & span \lbrace \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \rbrace & \longrightarrow & span \lbrace \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \rbrace \\
& & \left( \begin{array}{c} 0 \\ t \end{array} \right) & \longmapsto & \left( \begin{array}{c} \tilde{x}(\omega ) \\ 0 \end{array} \right) \end{array}
\; \text{and} \;
\begin{array}{ccccc} \gamma_{u} (\omega , \cdot ) & : & span \lbrace \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \rbrace & \longrightarrow & span \lbrace \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \rbrace \\
& & \left( \begin{array}{c} t \\ 0 \end{array} \right) & \longmapsto & \left( \begin{array}{c} 0 \\ \tilde{y}(\omega ) \end{array} \right) \quad .
\end{array}
\end{align*}
```
Actually the domains of such functions $\gamma_{s}$ and $\gamma_{u}$ are the linear subspaces $E^{s}(\omega )$ and $E^{u}(\omega )$, known as the stable and unstable subspaces of the random dynamical system $\Phi (t,\omega )$. This last mapping is obtained from linearizing the original RDS $\varphi$ over the stationary orbit $(\tilde{x},\tilde{y})$. This result serves as an argument to denote $\mathcal{S}(\omega )$ and $\mathcal{U}(\omega )$ as the stable and unstable manifolds of the stationary orbit, not only because these two subsets are invariant under $\varphi$, as one can deduce from {eq}`eq:invariance_x` and {eq}`eq:invariance_x`, but also due to the dynamical behaviour of their trajectories in a neighborhood of the stationary orbit $\tilde{X}(\omega )$. Hence the important characteristic of $\tilde{X}(\omega )=(\tilde{x},\tilde{y})$ is not only its independence with respect to the time variable $t$; but also the fact that it exhibits hyperbolic behaviour with respect to its neighboring trajectories. Considering the hyperbolicity of a given solution, as well as in the deterministic context, means considering the hyperbolicity of the RDS $\varphi$ linearized over such solution. Specifically, the Oseledets' multiplicative ergodic theorem for random dynamical systems ({cite}`arno98` and {cite}`duan15`) ensures the existence of a Lyapunov spectrum which is necessary to determine whether the stationary orbit $\tilde{X}(\omega )$ is hyperbolic or not. All these issues are well reported in {cite}`duan15`, including the proof that the noisy saddle {eq}`eq:noisy_saddle` satisfies the Oseledets' multiplicative ergodic theorem conditions.
Before implementing the numerical method of Lagrangian descriptors to several examples of SDEs, it is important to remark why random fixed points and their respective stable and unstable manifolds govern the nearby trajectories, and furthermore, how they may influence the dynamics throughout the rest of the domain. These are essential issues in order to describe the global phase space motion of solutions of SDEs. However, these questions do not have a simple answer. For instance, in the noisy saddle example {eq}`eq:noisy_saddle` the geometrical structures arising from the dynamics generated around the stationary orbit are quite similar to the dynamics corresponding to the deterministic saddle point $\lbrace \dot{x}=x,\dot{y}=-y \rbrace$. Significantly, the manifolds $\mathcal{S}(\omega )$ and $\mathcal{U}(\omega )$ of the noisy saddle form two dynamical barriers for other trajectories in the same way that the manifolds $\lbrace x = 0 \rbrace$ and $\lbrace y = 0 \rbrace$ of the deterministic saddle work. This means that for any particular experiment, i.e., for any given $\omega \in \Omega$, the manifolds $\mathcal{S}(\omega )$ and $\mathcal{U}(\omega )$ are determined and cannot be "crossed" by other trajectories due to the uniqueness of solutions (remember that the manifolds are invariant under the RDS {eq}`eq:noisy_saddle_RDS` and are comprised of an infinite family of solutions). Also by considering the exponential separation rates reported in {eq}`eq:eq_1` with the rest of trajectories, the manifolds $\mathcal{S}(\omega )$ and $\mathcal{U}(\omega )$ divide the plane $\mathbb{R}^{2}$ of initial conditions into four qualitatively distinct dynamical regions; therefore providing a phase portrait representation.
````{margin}
```{note}
Otherwise, if nonlinearity is dominating the behavior of the terms in equation {eq}`eq:SDE` then the correspondence between the manifolds for $\Phi (t, \omega )$ to the manifolds for $\varphi$ needs to be made by means of the local stable and unstable manifold theorem (see {cite}`moham99`, Theorem 3.1). Therein it is considered a homeomorphism $H(\omega )$ which establishes the equivalence of the geometrical structures arising for both sets of manifolds, and as a consequence the manifolds for $\varphi$ inherit the same dynamics as the ones for $\Phi (t, \omega )$ but only in a neighborhood of the stationary orbit. In this sense the existence of such manifolds for a nonlinear RDS $\varphi$ is only ensured locally. Anyway this result provides a very good approximation to the stochastic dynamics of a system, and enables us to discuss the different patterns of behavior of the solutions in the following examples.
```
````
Nevertheless it remains to show that such analogy can be found between other SDEs and their corresponding non-noisy deterministic differential equations (_Note_). In this direction there is a recent result ({cite}`cheng16`, Theorem 2.1) which ensures the equivalence in the dynamics of both kinds of equations when the noisy term $\sigma$ is additive (i.e., $\sigma$ does not depend on the solution $X_{t}$). Although this was done by means of the most probable phase portrait, a technique that closely resembles the ordinary phase space for deterministic systems, this fact might indicate that such analogy in the dynamics cannot be achieved when the noise does depend explicitly on the solution $X_{t}$. Actually any additive noise affects all the particles together at the same magnitude.
Anyway the noisy saddle serves to establish an analogy to the dynamics with the deterministic saddle. One of its features is the contrast between the growth of the components $X_{t}$ and $Y_{t}$, which mainly have a positive and negative exponential growth respectively. We will see that this is graphically captured when applying the stochastic Lagrangian descriptors method to the SDE {eq}`eq:noisy_saddle` over a domain of the stationary orbit. Moreover when representing the stochastic Lagrangian descriptor values for the noisy saddle, one can observe that the lowest values are precisely located on the manifolds $\mathcal{S}(\omega )$ and $\mathcal{U}(\omega )$. These are manifested as sharp features indicating a rapid change of the values that the stochastic Lagrangian descriptor assumes. This geometrical structure formed by "local minimums" has a very marked crossed form and it is straightforward to think that the stationary orbit is located at the intersection of the two cross-sections. These statements are supported afterwards by numerical simulations and analytical results.
(sec:SLD)=
### The stochastic Lagrangian descriptor
The definition of stochastic Lagrangian descriptors that we introduce here is based on the discretization of the continuous time definition given in Eq. {eq}`eq:Mp_function` that relies on the computation of the $p$-norm of trajectories. In fact, this discretization gave rise to a version of LDs that can be used to analyze discrete time dynamical systems (maps), see {cite}`carlos2015`. Let $\{x_i\}^{N}_{i =
-N}$ denote an orbit of $(2N + 1)$ points, where $x_i \in \mathbb{R}^n$ and $N \in \mathbb{N}$ is the number of iterations of the map. Considering the space of orbits as a sequence space, the discrete Lagrangian descriptor was defined in terms of the $\ell^p$-norm of an orbit as follows:
```{math}
---
label:
---
\begin{equation}
\displaystyle{MD_p(x_0, N) = \sum^{N-1}_{i = -N}\Vert x_{i+1} - x_i \Vert_p, \quad p \leq 1.}
\end{equation}
```
This alternative definition allows us to prove formally the non-differentiability of the $MD_p$ function through points that belong to invariant manifolds of a hyperbolic trajectory. This fact implies a better visualization of the invariant manifolds as they are detected over areas where the $MD_p$ function presents abrupt changes in its values.
Now we extend these ideas to the context of stochastic differential equations. For this purpose we consider a general SDE of the form:
```{math}
---
label: eq:stochastic_lagrangian_system
---
\begin{equation}
dX_t = b(X_t, t)dt + \sigma(X_t, t)dW_t, \quad X_{t_0} = x_0,
\label{eq:stochastic_lagrangian_system}
\end{equation}
```
where $X_t$ denotes the solution of the system, $b(\cdot)$ and $\sigma(\cdot)$ are Lipschitz functions which ensure uniqueness of solutions and $W_t$ is a two-sided Wiener process. Henceforth, we make use of the following notation
```{math}
---
label:
---
\begin{equation}
X_{t_j} := X_{t_0+j\Delta t},
\end{equation}
```
for a given $\Delta t>0$ small enough and $j=-N,\cdots ,-1,0,1, \cdots ,N$.
__Definition__
The stochastic Lagrangian descriptor evaluated for SDE {eq}`eq:stochastic_lagrangian_system` with general solution
$\textbf{X}_{t}(\omega )$ is given by
```{math}
---
label: eq:MS
---
\begin{equation}
MS_p(\textbf{x}_0, t_0, \tau, \omega) = \sum^{N-1}_{i = -N} \Vert \textbf{X}_{t_{i+1}} -
\textbf{X}_{t_i} \Vert_p
\label{eq:MS}
\end{equation}
```
where $\lbrace \textbf{X}_{t_{j}} \rbrace_{j=-N}^{N}$ is a discretization of the solution such that
$\textbf{X}_{t_{-N}} = \textbf{X}_{-\tau}$, $\textbf{X}_{t_N} = \textbf{X}_{\tau}$, $\textbf{X}_{t_0} = \textbf{x}_{0}$, for a given $\omega \in \Omega$.
__Definition__
Obviously every output of the $MS_p$ function highly depends on the experiment $\omega \in \Omega$ where $\Omega$ is the probability space that includes all the possible outcomes of a given phenomena. Therefore in order to analyze the homogeneity of a given set of outputs, we consider a sequence of results of the $MS_p$ function for the same stochastic equation {eq}`eq:stochastic_lagrangian_system` : $MS_p(\cdot, \omega_1)$, $MS_p(\cdot, \omega_2)$,
$\cdots$, $MS_p(\cdot, \omega_M)$. It is feasible that the following relation holds
```{math}
---
label: eq:deterministic_tol
---
\begin{equation}
d(MS_p(\cdot, \omega_i), MS_p(\cdot, \omega_j)) < \delta, \quad \text{for all } i,j,
\label{eq:deterministic_tol}
\end{equation}
```
where $d$ is a metric that measures the similarity between two matrices (for instance the Frobenius norm $\Vert A-B \Vert_F =
\sqrt{Tr((A-B)\cdot (A-B)^T)}$) and $\delta$ is a positive tolerance. Nevertheless for general stochastic differential equations, expression {eq}`eq:deterministic_tol` does not usually hold. Alternatively if the elements of the sequence of matrices $MS_p(\cdot, \omega_1)$, $MS_p(\cdot, \omega_2)$, $\cdots$, $MS_p(\cdot, \omega_M)$ do not have much similarity to each other, it may be of more use to define the mean of the outputs
```{math}
---
label: eq:mean_MSp_value
---
\begin{equation}
\displaystyle{\mathbb{E} \left[ MS_p(\cdot, \omega) \right] = \left(
\frac{MS_p(\cdot, \omega_1) + MS_p(\cdot, \omega_2) + \cdots +
MS_p(\cdot, \omega_M)}{M}\right) ,}
\label{eq:mean_MSp_value}
\end{equation}
```
(sec:num)=
## Numerical Simulation of the Stochastic Lagrangian Descriptor
In this section we describe the stochastic Lagrangian descriptor method that can be used to numerically solve and visualize the geometrical structures of SDEs. Consider a general $n$-dimensional SDE of the form
```{math}
---
label:
---
\begin{equation}
dX^j_t = b^j(X_t, t)dt + \sum^m_{k=1}\sigma^j_k(X_t, t)dW^k_t, \quad X_{t_0} = x_0 \in \mathbb{R}^n, \quad j=1,\cdots ,n
\end{equation}
```
where $X_t = (X^1_t, \cdots , X^n_t)$ and $W^1_t, \cdots, W^m_t$ are $m$ independent Wiener processes.
If the time step $\Delta t$ is firstly fixed, then the temporal grid $t_p = t_0 + p\Delta t$ ($p \in \mathbb{Z}$) is already determined and we arrive to the difference equation
```{math}
---
label:
---
\begin{equation}
X^j_{t+\Delta t} = X^j_t + b^j(X_t, t)\Delta t + \sum^m_{k=1} \sigma^j_k(X_t, t)dW^k_t.
\end{equation}
```
This scheme is referred to as the Euler-Marayuma method for solving a single path of the SDE. If the stochastic part is removed from the equation, then we obtain the classical Euler method. Suppose $X_{t_p}$ is the solution of the SDE and
$\tilde{X}_{t_p}$ its numerical approximation at any time $t_p$. Since both of them are random variables, the accuracy of the method must be determined in probabilistic terms. With this aim, the following definition is introduced.
__Definition__
A stochastic numerical method has an order of convergence equal to $\gamma$ if there exists a constant $C>0$ such that
```{math}
---
label:
---
\begin{equation}
\mathbb{E} \left[ X_{t_p} - \tilde{X}_{t_p} \right] \leq C \Delta t^{\gamma},
\end{equation}
```
for any arbitrary $t_p = t_0 + p\Delta t$ and $\Delta t$ small enough.
Indeed, the Euler-Maruyama method has an order of convergence equal to $1/2$ (see {cite}`kloeden2013numerical` for further details).
### The noisy saddle
The noisy saddle is a fundamental benchmark for assessing numerical methods for revealing phase space structures. Its main value is in the simplicity of the expressions taken by the components of the stationary orbit and its corresponding stable and unstable manifolds. From these one clearly observes the exponential separation rates between particles passing
near the manifolds. Now for the stochastic differential equations
```{math}
---
label: eq:general_noisy
---
\begin{equation}
\label{eq:general_noisy}
\begin{cases}
dX_t = a_1X_t dt + b_1dW^1_t \\
dY_t = -a_2Y_t dt + b_2dW^2_t
\end{cases}
\end{equation}
```
it is straightforward to check that the only stationary orbit takes the expression
```{math}
---
label:
---
\begin{equation}
\widetilde{X}(\omega ) = \left( \tilde{x}(\omega ), \tilde{y}(\omega ) \right) = \left(
-\int_{0}^{\infty}e^{-a_{1}s} b_1dW^1_t(\omega ) , \int_{-\infty}^{0}e^{b_{1}s}
b_2dW^2_t(\omega ) \right)
\end{equation}
```
where $a_{1},a_{2},b_{1},b_{2} \in \mathbb{R}$ are constants and $a_{1},a_{2}>0$. Its corresponding stable and unstable manifolds are
```{math}
---
label:
---
\begin{equation}
\mathcal{S}(\omega ) = \lbrace (x,y) \in \mathbb{R}^{2} : x = \tilde{x}(\omega ) \rbrace , \quad
\mathcal{U}(\omega ) = \lbrace (x,y) \in \mathbb{R}^{2} : y = \tilde{y}(\omega ) \rbrace .
\end{equation}
```
These play a very relevant role as dynamical barriers for the particles tracked by the RDS, which is generated by the SDE {eq}`eq:general_noisy`. This fact has been justified in the previous section, but can be analytically demonstrated when computing the stochastic Lagrangian descriptor {eq}`eq:MS` for the solution of the noisy saddle.
According to the notation used for the definition of SLD
$${
MS_p (\mathbf{x}_0, t_0, \tau, \omega) = \sum_{i = -N}^{N-1} \Vert \mathbf{X}_{t_{i+1}} - \mathbf{X}_{t_i} \Vert_p
}$$
at which the components of the solution satisfy the initial conditions $\textbf{X}_{t_{0}}= \left( X_{t_{0}},Y_{t_{0}}
\right) = (x_{0},y_{0}) = \textbf{x}_{0}$, these take the expressions
```{math}
---
label: eq:general_noisy_saddle_solutions
---
\begin{equation}
\label{general_noisy_saddle_solutions}
X_{t} = e^{a_{1}t} \left( x_{0} + \int_{0}^{t}e^{-a_{1}s}b_{1}dW_{s}^{1} \right) \quad , \quad Y_{t} = e^{-a_{2}t}
\left( y_{0} + \int_{0}^{t}e^{a_{2}s}b_{2}dW_{s}^{2}(\omega ) \right)
\end{equation}
```
and the temporal nodes satisfy the rule $t_{i} = t_{0} + i\Delta t$ with $t_{0}$ and $\Delta t$ already given. Now it is possible to compute analytically the increments $\Vert \textbf{X}_{t_{i+1}} - \mathbf{X}_{t_i} \Vert_p = \vert X_{t_{i+1}} - X_{t_i} \vert^{p} + \vert Y_{t_{i+1}} - Y_{t_i} \vert^{p}$:
$$\left| X_{t_{i+1}} - X_{t_i} \right|^{p} = \left| e^{a_{1}t_{i+1}} \left( x_{0} +
\int_{0}^{t_{i+1}}e^{-a_{1}s}b_{1}dW_{s}^{1} \right) - e^{a_{1}t_{i}} \left( x_{0} +
\int_{0}^{t_{i}}e^{-a_{1}s}b_{1}dW_{s}^{1} \right) \right|^{p}$$
$$= \left| e^{a_{1}t_{i}}\left( e^{a_{1}\Delta t} - 1 \right) \left[ x_{0} + \int_{0}^{t_{i}}e^{-a_{1}s}b_{1}dW_{s}^{1}
\right] + e^{a_{1}(t_{i}+\Delta t)} \int_{t_{i}}^{t_{i}+\Delta t}e^{-a_{1}s}b_{1}dW_{s}^{1} \right|^{p}$$
$$= \left| e^{a_{1}t_{i}}\left( e^{a_{1}\Delta t} - 1 \right) \left[ x_{0} +
\int_{0}^{t_{i}}e^{-a_{1}s}b_{1}dW_{s}^{1} \right] + e^{a_{1}\Delta t}b_{1}dW_{t_{i}}^{1} \right|^{p} $$
The last expression is obtained using Itô formula {eq}`eq:Ito` .
Moreover for large values of $t_{i}$ such that $e^{a_{1}t_{i}} \gg e^{a_{1}\Delta t}$ and taking into account that $dW_{t_{i}}^{1}$ is finite almost surely, we can consider the following approximation
```{math}
---
label: eq:x_increments
---
\begin{equation}
\label{x_increments}
\left| X_{t_{i+1}} - X_{t_i} \right|^{p} \hspace{0.1cm} \approx \hspace{0.1cm} e^{a_{1}t_{i}\cdot p}\left|
e^{a_{1}\Delta t} - 1 \right|^{p} \left| x_{0} + \int_{0}^{t_{i}}e^{-a_{1}s}b_{1}dW_{s}^{1} \right|^{p}.
\end{equation}
```
By following these arguments, one can get an analogous result for the second component $Y_{t}$:
$$\left| Y_{t_{i+1}} - Y_{t_i} \right|^{p} = \left| e^{-a_{2}t_{i}}\left( e^{-a_{2}\Delta t} - 1 \right) \left[
y_{0} + \int_{0}^{t_{i}}e^{a_{2}s}b_{2}dW_{s}^{2} \right] + e^{-a_{2}\Delta t}b_{2}dW_{t_{i}}^{2} \right|^{p},$$
which for small values of $t_{i}$, such that $e^{-a_{2}t_{i}} \gg e^{-a_{2}\Delta t}$, this approximation can be further simplified as follows
```{math}
---
label: eq:y_increments
---
\begin{equation}
\label{y_increments}
\left| Y_{t_{i+1}} - Y_{t_i} \right|^{p} \hspace{0.1cm} \approx \hspace{0.1cm} e^{-a_{2}t_{i}\cdot p}\left|
e^{-a_{2}\Delta t} - 1 \right|^{p} \left| y_{0} + \int_{0}^{t_{i}}e^{a_{2}s}b_{2}dW_{s}^{2} \right|^{p}.
\end{equation}
```
Once the analytic expression of the SLD applied to the noisy saddle {eq}`eq:general_noisy` is known, it can be proved that the stable and unstable manifolds of the stationary orbit are manifested as singularities of the SLD function over any given domain of initial conditions containing the stationary orbit. This fact implies that the SLD method realizes a procedure to detect these geometrical objects and, consequently, provides a phase portrait representation of the dynamics generated by the noisy saddle. In the same way as described in {cite}`mancho2013lagrangian`, we refer to singularities as points of the domain of spatial initial conditions where the derivative of the SLD is not defined. The paradigm example of the mathematical manifestation of singularities of the LD on stable and unstable manifolds of hyperbolic trajectories is provided by the scalar function $|\cdot |^{p}$ with $p \in (0,1]$. This function is singular, alternatively non-differentiable, at those points where its argument is zero. Graphically this feature is observed as sharp changes in the representation of the SLD values, where the contour lines concentrate in a very narrow space.
In this particular example we are able to explicitly identify within the expression of the SLD the terms that are largest in magnitude. In other words, we are able to identify the terms whose particular singularities determine the non-differentiability of the entire sum (_Note_). This is better understandable if the expression of the SLD is divided into two sums
````{margin}
```{note}
Note that the differentiability of the SLD is analyzed with respect to the components of the initial conditions $(x_{0},y_{0})$, as the experiment $\omega \in \Omega$ and the starting point $t_{0}$ are previously fixed.
```
````
```{math}
---
label: eq:higher_order_x
---
MS_p(\mathbf{x}_0, t_0, \tau, \omega) = \sum_{i = -N}^{N-1} \Vert \mathbf{X}_{t_{i+1}} -
\mathbf{X}_{t_i} \Vert_p = \sum^{N-1}_{i = -N} \vert X_{t_{i+1}} - X_{t_i} \vert^{p} + \sum^{N-1}_{i = -N} \vert
Y_{t_{i+1}} - Y_{t_i} \vert^{p}
```
The highest order term within the first sum is $\left| X_{t_{N}} - X_{t_{N-1}} \right|^{p} = \left| X_{\tau } - X_{\tau - \Delta t} \right|^{p}$, which according to {eq}`eq:x_increments` is approximated by
```{math}
---
label: eq:higher_order_x
---
\begin{equation}
\label{higher_order_x}
\left| X_{\tau } - X_{\tau - \Delta t} \right|^{p} \hspace{0.1cm} \approx \hspace{0.1cm} e^{a_{1}(\tau - \Delta
t)\cdot p}\left| e^{a_{1}\Delta t} - 1 \right|^{p} \left| x_{0} + \int_{0}^{\tau - \Delta t}e^{-a_{1}s}b_{1}dW_{s}^{1}
\right|^{p} \quad \text{for enough large values of } \tau .
\end{equation}
```
Similarly the highest order term within the second sum is $\left| Y_{t_{-N+1}} - Y_{t_{-N}} \right|^{p} = \left|
Y_{-\tau +\Delta t} - Y_{-\tau } \right|^{p}$, approximated by
```{math}
---
label: eq:higher_order_y
---
\begin{equation}
\label{higher_order_y}
\left| Y_{-\tau +\Delta t} - Y_{-\tau } \right|^{p} \hspace{0.1cm}
\approx \hspace{0.1cm} e^{a_{2}\tau \cdot p}\left| e^{-a_{2}\Delta t} - 1 \right|^{p} \left| y_{0} -
\int_{-\tau}^{0}e^{a_{2}s}b_{2}dW_{s}^{2} \right|^{p} \quad \text{for enough large values of } \tau .
\end{equation}
```
Consequently, it is evident that the sharper features will be located closed to the points where these two last
quantities {eq}`eq:higher_order_x`, {eq}`eq:higher_order_y` are zero. In other words where the initial condition
$(x_{0},y_{0})$ satisfies one of the two following
```{math}
---
label:
---
x_{0} = - \int_{0}^{\tau - \Delta t}e^{-a_{1}s}b_{1}dW_{s}^{1} \quad \text{or} \quad y_{0} =
\int_{-\tau}^{0}e^{a_{2}s}b_{2}dW_{s}^{2} \quad \text{for enough large values of } \tau .
```
This statement is in agreement with the distinguished nature of the manifolds of the stationary orbit discussed in the previous section. Note also that the two quantities for $x_{0}$ and $y_{0}$ converge to the coordinates of the stationary orbit $(\tilde{x}(\omega ),\tilde{y}(\omega ))$ with $\tau$ tending to infinity. These features are observed in {numref}`fig:saddle`, where the sharpness of the SLD representation highlights the location of the stable and unstable manifolds.
```{figure} figures/fig1a.png
---
---
Figure A) from {cite}`balibrea2016lagrangian` showing two different experiments representing contours of $MS_p$ for $p=0.1$ and $\tau=15$. The contours of $MS_p$ are computed on a 1200$\times$1200 points grid of initial conditions and the time step for integration of the vector field is chosen to be $\Delta t= 0.05$. The magenta colored point corresponds to the location of the stationary orbit for each experiment. The chosen parameters are $a_1 = a_2 = b_2 = 1$ and $b_1 = -1$.
```
```{figure} figures/fig1b.png
---
name: fig:saddle
---
Figure B) from {cite}`balibrea2016lagrangian` showing two different experiments representing contours of $MS_p$ for $p=0.1$ and $\tau=15$. The contours of $MS_p$ are computed on a 1200$\times$1200 points grid of initial conditions and the time step for integration of the vector field is chosen to be $\Delta t= 0.05$. The magenta colored point corresponds to the location of the stationary orbit for each experiment. The chosen parameters are $a_1 = a_2 = b_2 = 1$ and $b_1 = -1$.
```
\label{fig:saddle}
\caption{Figures from {cite}`balibrea2016lagrangian` showing two different experiments representing contours of $MS_p$ for $p=0.1$ and $\tau=15$. The contours of $MS_p$ are computed on a 1200$\times$1200 points grid of initial conditions and the time step for integration of the vector field is chosen to be $\Delta t= 0.05$. The magenta colored point corresponds to the location of the stationary orbit for each experiment. The chosen parameters are $a_1 = a_2 = b_2 = 1$ and $b_1 = -1$.}
> __Remark__
> Due to the properties of Itô integrals, see for instance {cite}`duan15`, the components of the stationary orbit satisfy
```{math}
---
label:
---
\mathbb{E} \left[ \tilde{x}(\omega ) \right] = \mathbb{E} \left[ - \int_{0}^{\infty}e^{-s}dW_{s}^{1} \right] = 0
\quad , \quad \mathbb{E} \left[ \tilde{y}(\omega ) \right] = \mathbb{E} \left[ \int_{-\infty}^{0}e^{s}dW_{s}^{2} \right]
= 0
```
```{math}
---
label:
---
\mathbb{V} \left[ \tilde{x}(\omega ) \right] = \mathbb{E} \left[ \tilde{x}(\omega )^{2} \right] = \mathbb{E} \left[
\int_{0}^{\infty}e^{-2s}ds \right] = \frac{1}{2} \quad , \quad \mathbb{V} \left[ \tilde{y}(\omega ) \right] = \mathbb{E}
\left[ \tilde{y}(\omega )^{2} \right] = \mathbb{E} \left[ \int_{-\infty}^{0}e^{2s}ds \right] = \frac{1}{2}.
```
>This means that the stationary orbit $(\tilde{x}(\omega ),\tilde{y}(\omega ))$ is highly probable to be located closed to the origin of coordinates $(0,0)$, and this feature is displayed in {numref}`fig:saddle`. This result gives more evidences and supports the similarities between the stochastic differential equation {eq}`eq:noisy_saddle` and the deterministic analogue system $\lbrace \dot{x}=x, \hspace{0.1cm} \dot{y}=-y \rbrace$ whose only fixed point is $(0,0)$.
Therefore we can assert that the stochastic Lagrangian descriptor is a technique that provides a phase portrait representation of the dynamics generated by the noisy saddle equation {eq}`eq:general_noisy`. Next we apply this same technique to further examples.
(sec:examp)=
### The Stochastically forced Duffing Oscillator
Another classical problem is that of the Duffing oscillator. The deterministic version is given by
```{math}
---
label: eq:duffing_determ
---
\begin{equation}
\label{eq:duffing_determ}
\ddot{x} = \alpha \dot{x} + \beta x + \gamma x^3 + \epsilon \cos(t).
\end{equation}
```
If $\epsilon = 0$ the Duffing equation becomes time-independent, meanwhile for $\epsilon \neq 0$ the oscillator is a time-dependent system, where $\alpha$ is the damping parameter, $\beta$ controls the rigidity of the system and $\gamma$ controls the size of the nonlinearity of the restoring force. The stochastically forced Duffing
equation is studied in {cite}`datta01` and can be written as follows:
```{math}
---
label:
---
\begin{equation}
\begin{cases}
dX_t = \alpha Y_t, \\
dY_t = (\beta X_t + \gamma X^3_t)dt + \epsilon dW_t.
\end{cases}
\end{equation}
```
```{figure} figures/fig2a.png
---
name:
---
Figure A) from {cite}`balibrea2016lagrangian` showing three different experiments representing $MS_p$ contours for $p=0.5$ over a grid of initial conditions.
```
```{figure} figures/fig2b.png
---
name:
---
Figure B) from {cite}`balibrea2016lagrangian` showing three different experiments representing $MS_p$ contours for $p=0.5$ over a grid of initial conditions.
```
```{figure} figures/fig2c.png
---
name:
---
Figure C) from {cite}`balibrea2016lagrangian` showing three different experiments representing $MS_p$ contours for $p=0.5$ over a grid of initial conditions. d) The last image corresponds to the $M_p$ function for equation {eq}`eq:duffing_determ` and $p=0.75$. All these pictures were computed for $\tau=15$, and over a $1200 \times 1200$ points grid. The time step for integration of the vector field was chosen to be $\Delta t = 0.05$.
```
\label{fig:duffing}
\caption{a), b), c) Figures from {cite}`balibrea2016lagrangian` showing three different experiments representing $MS_p$ contours for $p=0.5$ over a grid of initial conditions. d) The last image corresponds to the $M_p$ function for equation {eq}`eq:duffing_determ` and $p=0.75$. All these pictures were computed for $\tau=15$, and over a $1200 \times 1200$ points grid. The time step for integration of the vector field was chosen to be $\Delta t = 0.05$.}
# References
```{bibliography} bibliography/chapter3.bib
```
| 087b4e2278550820013ac56f3e65a1ee11b7b8c8 | 122,433 | ipynb | Jupyter Notebook | book/content/.ipynb_checkpoints/chapter3-checkpoint.ipynb | champsproject/lagrangian_descriptors | b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19 | [
"CC-BY-4.0"
] | 12 | 2020-07-24T17:35:42.000Z | 2021-08-12T17:31:53.000Z | book/_build/html/_sources/content/chapter3.ipynb | champsproject/lagrangian_descriptors | b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19 | [
"CC-BY-4.0"
] | 12 | 2020-05-26T17:28:38.000Z | 2020-07-27T10:40:54.000Z | book/content/chapter3.ipynb | champsproject/lagrangian_descriptors | b3a88a2243bd5b0dce7cc945f9504bfadc9a4b19 | [
"CC-BY-4.0"
] | null | null | null | 67.197036 | 2,195 | 0.638782 | true | 29,629 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.689306 | 0.766294 | 0.528211 | __label__eng_Latn | 0.993579 | 0.065539 |
# Naive Bayes Classifier
Naive Bayes classifier assumes that the effect of a particular feature in a class is independent of other features.
$
\begin{align}
\ P(h|D) = \frac{P(D|h) P(h)}{P(D)}
\end{align}
$
- P(h): the probability of hypothesis h being true (regardless of the data). This is known as the prior probability of h
- P(D): the probability of the data (regardless of the hypothesis). This is known as the prior probability
- P(h|D): the probability of hypothesis h given the data D. This is known as posterior probability
- P(D|h): the probability of data d given that the hypothesis h was true. This is known as posterior probability
```python
import pickle as pkl
with open('../data/titanic_tansformed.pkl', 'rb') as f:
df_data = pkl.load(f)
```
```python
df_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Survived</th>
<th>Age</th>
<th>SibSp</th>
<th>Parch</th>
<th>Fare</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>female</th>
<th>male</th>
<th>C</th>
<th>Q</th>
<th>S</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>22.0</td>
<td>1</td>
<td>0</td>
<td>7.2500</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>38.0</td>
<td>1</td>
<td>0</td>
<td>71.2833</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>26.0</td>
<td>0</td>
<td>0</td>
<td>7.9250</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>35.0</td>
<td>1</td>
<td>0</td>
<td>53.1000</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>35.0</td>
<td>0</td>
<td>0</td>
<td>8.0500</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
df_data.shape
```
(889, 13)
```python
data = df_data.drop("Survived",axis=1)
label = df_data["Survived"]
```
```python
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size = 0.2, random_state = 101)
```
```python
from sklearn.naive_bayes import GaussianNB
import time
tic = time.time()
nb_cla = GaussianNB()
nb_cla.fit(data_train,label_train)
print('Time taken for training Naive Bayes', (time.time()-tic), 'secs')
predictions = nb_cla.predict(data_test)
print('Accuracy', nb_cla.score(data_test, label_test))
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(label_test, predictions))
print(classification_report(label_test, predictions))
```
Time taken for training Naive Bayes 0.0014061927795410156 secs
Accuracy 0.8202247191011236
[[95 12]
[20 51]]
precision recall f1-score support
0 0.83 0.89 0.86 107
1 0.81 0.72 0.76 71
avg / total 0.82 0.82 0.82 178
## Multinomial Naive-Bayes
- Used when the values are discrete
```python
from sklearn.naive_bayes import MultinomialNB
import time
tic = time.time()
nb_cla = MultinomialNB()
nb_cla.fit(data_train,label_train)
print('Time taken for training Naive Bayes', (time.time()-tic), 'secs')
predictions = nb_cla.predict(data_test)
print('Accuracy', nb_cla.score(data_test, label_test))
```
Time taken for training Naive Bayes 0.12916111946105957 secs
Accuracy 0.7303370786516854
## Bernoulli Naive-Bayes
- Used when the values of all the features are binary
```python
from sklearn.naive_bayes import BernoulliNB
import time
tic = time.time()
nb_cla = BernoulliNB()
nb_cla.fit(data_train,label_train)
print('Time taken for training Naive Bayes', (time.time()-tic), 'secs')
predictions = nb_cla.predict(data_test)
print('Accuracy', nb_cla.score(data_test, label_test))
```
Time taken for training Naive Bayes 0.0020270347595214844 secs
Accuracy 0.7865168539325843
```python
```
| 278f3688830ec995075dbb045b2ef97cea569dbd | 9,740 | ipynb | Jupyter Notebook | classification/notebooks/08 - Naive Bayes.ipynb | pshn111/Machine-Learning-Package | fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2 | [
"MIT"
] | null | null | null | classification/notebooks/08 - Naive Bayes.ipynb | pshn111/Machine-Learning-Package | fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2 | [
"MIT"
] | null | null | null | classification/notebooks/08 - Naive Bayes.ipynb | pshn111/Machine-Learning-Package | fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2 | [
"MIT"
] | null | null | null | 26.831956 | 129 | 0.424333 | true | 1,664 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.831143 | 0.770772 | __label__eng_Latn | 0.540488 | 0.629093 |
+ This notebook is part of lecture 10 *The four fundamental subspaces* in the OCW MIT course 18.06 by Prof Gilbert Strang [1]
+ Created by me, Dr Juan H Klopper
+ Head of Acute Care Surgery
+ Groote Schuur Hospital
+ University Cape Town
+ <a href="mailto:juan.klopper@uct.ac.za">Email me with your thoughts, comments, suggestions and corrections</a>
<a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" property="dct:title" rel="dct:type">Linear Algebra OCW MIT18.06</span> <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">IPython notebook [2] study notes by Dr Juan H Klopper</span> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>.
+ [1] <a href="http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/index.htm">OCW MIT 18.06</a>
+ [2] Fernando Pérez, Brian E. Granger, IPython: A System for Interactive Scientific Computing, Computing in Science and Engineering, vol. 9, no. 3, pp. 21-29, May/June 2007, doi:10.1109/MCSE.2007.53. URL: http://ipython.org
```python
from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
```
```python
#import numpy as np
from sympy import init_printing, Matrix, symbols
#import matplotlib.pyplot as plt
#import seaborn as sns
#from IPython.display import Image
from warnings import filterwarnings
init_printing(use_latex = 'mathjax') # Pretty Latex printing to the screen
#%matplotlib inline
filterwarnings('ignore')
```
# The four fundamental subspaces
# Introducing the matrix space
## The four fundamental subspaces
* Columnspace, C(A)
* Nullspace, N(A)
* Rowspaces
* All linear combinations of rows
* All the linear combinations of the colums of A<sup>T</sup>, C(A<sup>T</sup>)
* Nullspace of A<sup>T</sup>, N(A<sup>T</sup>) (the left nullspace of A)
## Where are these spaces for a matrix A<sub>m×n</sub>?
* C(A) is in ℝ<sup>m</sup>
* N(A) is in ℝ<sup>n</sup>
* C(A<sup>T</sup>) is in ℝ<sup>n</sup>
* N(A<sup>T</sup>) is in ℝ<sup>m</sup>
## Calculating basis and dimension
### For C(A)
* The bases are the pivot columns
* The dimension is the rank *r*
### For N(A)
* The bases are the special solutions (one for every free variable, *n* - *r*)
* The dimension is *n* - *r*
### For C(A<sup>T</sup>)
* If A undergoes row reduction to row echelon form (R), then C(R) ≠ C(A), but rowspace(R) = rowspace(A) (or C(R<sup>T</sup>) = C(A<sup>T</sup>))
* A basis for the rowspace of A (or R) is the first *r* rows of R
* So we row reduce A and take the pivot rows and transpose them
* The dimension is also equal to the rank *r*
### For N(A<sup>T</sup>)
* It is also called the left, because it ends up on the left (as seen below)
* Here we have A<sup>T</sup>**y** = **0**
* **y**<sup>T</sup>(A<sup>T</sup>)<sup>T</sup> = **0**<sup>T</sup>
* **y**<sup>T</sup>A = **0**<sup>T</sup>
* This is (again) the pivot columns of A<sup>T</sup> (after row reduction)
* The dimension is *m* - *r*
## Example problems
### Consider this example matrix and calculate the bases and dimension for all four fundamental spaces
```python
A = Matrix([[1, 2, 3, 1], [1, 1, 2, 1], [1, 2, 3, 1]]) # We note that rows 1 and three are identical and that
# columns 3 is the addtion of columns 1 and 2 and column 1 equals column 4
A
```
#### Columnspace
```python
A.rref() # Remember that the columnspace contains the pivot columns as a basis
```
* The basis is thus:
$$ \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} $$
* It is indeed in ℝ<sup>3</sup> (rows of A = *m* = 3, i.e. each column vector is in 3-space or has 3 components)
* The rank (no of columns with pivots) are 2, thus dim(A) = 2
#### Nullspace
```python
A.nullspace() # Calculating the nullspace vectors
```
* So, indeed the basis is in ℝ<sup>4</sup> (A has *n* = 4 columns)
```python
A.rref() # No pivots for columns 3 and 4
```
* The dimension is two (there are 2 column vectors, which is indeed *n* - *r* = 4 - 2 = 2)
#### Rowspace C(A<sup>T</sup>)
* So we are looking for the pivot columns of A<sup>T</sup>
```python
A.rref()
```
* The pivot rows are rows 1 and 2
* We take them and transpose them
$$ \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 0 \end{bmatrix} $$
* As stated above, it is in ℝ<sup>4</sup>
* The dimension is *n* - *r* = 4 - 2 = 2
#### Nullspace of A<sup>T</sup>
```python
A.nullspace()
```
* Which is indeed in ℝ<sup>3</sup>
* The dimension is 1, since *m* - *r* = 3 - 2 = 1 (remember that the rank is the number of pivot columns)
### Consider this example matrix (in LU form) and calculate the bases and dimension for all four fundamental spaces
```python
L = Matrix([[1, 0, 0], [2, 1, 0], [-1, 0, 1]])
U = Matrix([[5, 0, 3], [0, 1, 1], [0, 0, 0]])
A = L * U
L, U, A
```
#### Columnspace of A
```python
A.rref()
```
* The basis is thus:
$$ \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} $$
* Another basis would be the pivot columns of L:
$$ \begin{bmatrix} 1 & 0 \\ 2 & 1 \\ -1 & 0 \end{bmatrix} $$
* It is in ℝ<sup>3</sup>, since *m* = 3
* It has a rank of 2 (two pivot columns)
* Since the dimension of the columnspace is equal to the rank, dim(A) = 2
* Note that it is also equal to the number of pivot columns in U
#### Nullspace of A
```python
A.nullspace()
```
* The nullspace is in ℝ<sup>3</sup>, since *n* = 3
* The basis is the special solution(s), which is one column vector for every free variable
* Since we only have a single free variable, we have a single nullspace column vector
* This fits in with the fact that it needs to be *n* - *r*
* It can also be calculated by taking U, setting the free variable to 1 and solving for the other rows by setting each equal to zero
* The dimension of the nullspace is also 1 (*n* - *r*, i.e. a single column)
* It is also the number of free variables
#### The rowspace
* This is the columnspace of A<sup>T</sup>
* Don't take the transpose first!
* Row reduce, identify the rows with pivots and transpose them
```python
A.rref()
```
* The basis is can also be written down by identifying the rows with pivots in U and writing them down as columns (getting their transpose)
$$ \begin{bmatrix} 5 & 0 \\ 0 & 1 \\ 3 & 1 \end{bmatrix} $$
* It is in ℝ<sup>3</sup>, since *n* = 3
* The rank *r* = 2, which is equal to the dimension, i.e. dim(A<sup>T</sup>) = 2
#### The nullspace of A<sup>T</sup>
```python
A.transpose().nullspace()
```
* It is indeed in ℝ<sup>3</sup>, since *m* = 3
* A good way to do it is to take the inverse of L, such that L<sup>-1</sup>A = U
* Now the free variable row in U is row three
* Take the corresponding row in L<sup>-1</sup> and transpose it
* The dimension in *m* - 2 = 3 - 2 = 1
## The matrix space
* A square matrix is also a 'vector' space, because they obey the vector space rules of addition and scalar multiplication
* Subspaces (of same) would include
* Upper triangular matrices
* Symmetric matrices
```python
```
| a60d8d81f72974bc37b93c9f424db1abe2ec204f | 13,257 | ipynb | Jupyter Notebook | _math/MIT_OCW_18_06_Linear_algebra/I_11_Subspaces.ipynb | aixpact/data-science | f04a54595fbc2d797918d450b979fd4c2eabac15 | [
"MIT"
] | 2 | 2020-07-22T23:12:39.000Z | 2020-07-25T02:30:48.000Z | _math/MIT_OCW_18_06_Linear_algebra/I_11_Subspaces.ipynb | aixpact/data-science | f04a54595fbc2d797918d450b979fd4c2eabac15 | [
"MIT"
] | null | null | null | _math/MIT_OCW_18_06_Linear_algebra/I_11_Subspaces.ipynb | aixpact/data-science | f04a54595fbc2d797918d450b979fd4c2eabac15 | [
"MIT"
] | null | null | null | 25.943249 | 708 | 0.531719 | true | 2,322 | Qwen/Qwen-72B | 1. YES
2. YES | 0.705785 | 0.853913 | 0.602679 | __label__eng_Latn | 0.991949 | 0.238555 |
### Imports
```python
#imports
from __future__ import print_function
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
import sys
import os
import pandas as pd
import time
import math
import random
```
### Initialisation Functions
```python
def calculate_minimum_trucks_from_demand(demand_from_nodes, avg_vehicle_capacity):
total = 0
with open(demand_from_nodes, 'r') as f:
for line in f:
total = total + int(line)
f.close()
minimum_num = math.floor(total / avg_vehicle_capacity)
return minimum_num
def vehicle_demand_initialiser(total_number_of_nodes, demand_from_nodes):
with open(demand_from_nodes, 'w') as f:
f.write("0" + '\n')
for i in range(total_number_of_nodes - 1):
demand = random.randint(1, 10)
f.write(str(demand) + '\n')
f.close()
def vehicle_time_window_initialiser(total_number_of_nodes, nodes_time_window):
with open(nodes_time_window, 'w') as f:
f.write("1"+ ' ' + "300" + '\n')
for i in range(total_number_of_nodes - 1):
begin = random.randint(1,10)
end = random.randint(begin+1, 100)
f.write(str(begin) + ' ' + str(end) + '\n')
f.close()
```
```python
def format_demand_data():
demands = []
with open('data/Node_Demands.txt', 'r') as f:
for line in f:
demands.append(int(line))
f.close()
demands = demands[1:] #removing depot node having zero demand
with open('data/demand_delivery_contraint.csv', 'w') as f:
f.write("Store" + "," + "Demand" + "," + "Acceptance constraint" + "\n")
for i in range(len(demands)):
f.write("S" + str(i) + "," + str(demands[i]) + "," + str(float(demands[i]/2)) + "\n")
```
```python
def format_obtained_routes(number_of_stations, obtained_routes = 'output/route.txt'):
#Maximum number of stops in header of table
max_num_of_stops = 10
with open('data/final_routing_table.csv', 'w') as f:
#creating the header of table
f.write("Routes" + "," + "Stop_1" + ",")
for i in range(2, max_num_of_stops):
f.write("Stop_" + str(i) + ",")
f.write("\n")
for i in range(number_of_stations):
f.write(str(i) + "," + "S" + str(i) + "\n")
f.close()
with open(obtained_routes, 'r') as f:
current_station = number_of_stations #just a counter
for line in f:
# Ignore rest, select only the route from the output
if(line.split()[0][0] != '0'):
continue
split = line.split('->')
route = [int(i) for i in split[1:-1]]
#creating a string to append to the final routing csv file
route_string = ''
for stop in route:
route_string += 'S' + str(stop) + ','
route_string += '\n'
with open('data/final_routing_table.csv', 'a') as f2:
f2.write(str(current_station) + "," + route_string)
current_station += 1
f2.close()
f.close()
```
### Data Model
```python
def create_data_model(demand_from_nodes, nodes_time_window):
"""Stores the data for the problem."""
### INITIALISE DATA STRUCTURE
data = {}
data['demands'] = []
data['time_matrix'] = []
data['time_windows'] = []
data['num_vehicles'] = 2
data['depot'] = 0
data['vehicle_capacities'] = [50] * data['num_vehicles'] #Modified later on, shouldn't be a concern here.
### READ DEMAND FOR EACH NODE FROM FILE
with open(demand_from_nodes, 'r') as f:
for line in f:
data['demands'].append(int(line.rstrip()))
f.close()
number_of_nodes = len(data['demands'])
### READ TRAVEL TIME DATA FROM FILE
time_matrix_data = pd.read_csv('data/site_time.csv')
time_matrix_data = time_matrix_data.iloc[:number_of_nodes, 1:number_of_nodes+1]
time_matrix_data[time_matrix_data.columns[0]] = 0
time_matrix = time_matrix_data.values.tolist()
for site_time_data_row in time_matrix:
data['time_matrix'].append(site_time_data_row)
### READ TIME WINDOW DATA FROM FILE
with open(nodes_time_window, 'r') as f:
for line in f:
window_start, window_end = map(int,line.rstrip().split())
data['time_windows'].append((window_start, window_end))
f.close()
return data
```
### Solution Printing Fn
```python
def print_solution(data, manager, routing, assignment):
"""Prints assignment on console."""
total_distance = 0
for vehicle_id in range(data['num_vehicles']):
index = routing.Start(vehicle_id)
plan_output = 'Route for vehicle {}:\n'.format(vehicle_id)
route_distance = 0
while not routing.IsEnd(index):
plan_output += '{}->'.format(manager.IndexToNode(index))
previous_index = index
index = assignment.Value(routing.NextVar(index))
route_distance += routing.GetArcCostForVehicle(
previous_index, index, vehicle_id)
plan_output += '{}\n'.format(manager.IndexToNode(index))
plan_output += 'Time of the route: {}m\n'.format(route_distance)
print(plan_output)
with open('output/route.txt', 'a') as f:
f.write(plan_output)
total_distance += route_distance
print('Total Time of all routes: {}m'.format(total_distance))
```
### Methodolgy Wrapper Function
```python
def wrapper(number_of_vehicles, vehicle_capacity, demand_from_nodes, nodes_time_window):
# Instantiate the data problem.
data = create_data_model(demand_from_nodes, nodes_time_window)
data['num_vehicles'] = number_of_vehicles
data['vehicle_capacities'] = [vehicle_capacity] * number_of_vehicles
# Create the routing index manager.
manager = pywrapcp.RoutingIndexManager(len(data['time_matrix']), data['num_vehicles'], data['depot'])
# Create Routing Model.
routing = pywrapcp.RoutingModel(manager)
# Time Callback and constraints
def time_callback(from_index, to_index):
from_node = manager.IndexToNode(from_index)
to_node = manager.IndexToNode(to_index)
return data['time_matrix'][from_node][to_node]
time_callback_index = routing.RegisterTransitCallback(time_callback)
# Using time transit callback as optimisation parameter
routing.SetArcCostEvaluatorOfAllVehicles(time_callback_index)
### Smaller values limit the travel of vehicles, the following values have no affect on time windows
### Large values makes sure that we get the best solution for now
'''Time Window Constraint'''
time = 'Time'
routing.AddDimension(
time_callback_index,
10000, # allow waiting time
10000, # maximum time per vehicle
False, # Don't force start cumul to zero.
time)
time_dimension = routing.GetDimensionOrDie(time)
'''
Limiting the number of stops travelled by each vehicle
We set an upper bound of one of the routing dimensions (here, time dimension)
No vehicle is allowed to travel more than the specified units for that dimension
We can use distance, time, or any other dimension for setting the upper bound.
Please note: minimum value of upper bound = max(distance(depot, node))
Otherwise solution will not exist, as at least one node would become unreachable
'''
for vehicle_id in range(data['num_vehicles']):
time_dimension.SetSpanUpperBoundForVehicle(10, vehicle_id)
# Add time window constraints for each location except depot.
for location_idx, time_window in enumerate(data['time_windows']):
if location_idx == 0:
continue
index = manager.NodeToIndex(location_idx)
time_dimension.CumulVar(index).SetRange(time_window[0], time_window[1])
# Add time window constraints for each vehicle start node.
for vehicle_id in range(data['num_vehicles']):
index = routing.Start(vehicle_id)
# Require that a vehicle must visit a location during the location's time window.
time_dimension.CumulVar(index).SetRange(data['time_windows'][0][0],
data['time_windows'][0][1])
# Instantiate route start and end times to produce feasible times.
for i in range(data['num_vehicles']):
routing.AddVariableMinimizedByFinalizer(time_dimension.CumulVar(routing.Start(i)))
routing.AddVariableMinimizedByFinalizer(time_dimension.CumulVar(routing.End(i)))
# Demand callback and constaints
def demand_callback(from_index):
from_node = manager.IndexToNode(from_index)
return data['demands'][from_node]
demand_callback_index = routing.RegisterUnaryTransitCallback(demand_callback)
routing.AddDimensionWithVehicleCapacity(
demand_callback_index,
0, # null capacity slack
data['vehicle_capacities'], # vehicle maximum capacities
True, # start cumul to zero
'Capacity')
# Setting first solution heuristic.
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
search_parameters.local_search_metaheuristic = (routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
search_parameters.time_limit.FromSeconds(1)
# Solve the problem.
solution = routing.SolveWithParameters(search_parameters)
# Print solution on console.
if solution:
if(os.path.exists('output/route.txt')):
os.remove('output/route.txt')
print_solution(data, manager, routing, solution)
print("Vehicles Required: {}".format(number_of_vehicles))
print("-"*40)
return 1
else:
print("Current Number of Vehicles = {}".format(number_of_vehicles))
print("No Solution Yet")
print("-"*40)
print("Incrementing number of vehicles")
return 0
```
### Main Function
```python
def main():
""" Files Required
- site_time.csv (time to travel between any two nodes)
- Node_Demands.txt (can be generated)
- nodes_time_window.txt (can be generated)
To change number of nodes, just edit the variable below and regenerate demand and time windows.
Solution isn't guaranteed when new time windows are created
Preferably change the name of text file so that previous data isn't lost.
"""
demand_from_nodes = "data/Node_Demands.txt"
nodes_time_window = "data/nodes_time_window.txt" # Filename containing time window for each node
number_of_nodes = 20
# vehicle_demand_initialiser(number_of_nodes, demand_from_nodes)
# vehicle_time_window_initialiser(number_of_nodes, nodes_time_window)
vehicle_capacity = 30
current_num_of_trucks = calculate_minimum_trucks_from_demand(demand_from_nodes, vehicle_capacity)
print("minimum_num_of_vehicles = {}".format(current_num_of_trucks) + '\n' + '-'*40)
solution = wrapper(current_num_of_trucks, vehicle_capacity, demand_from_nodes, nodes_time_window)
cntr = 0
while True:
current_num_of_trucks += 1
cntr += 1
solution = wrapper(current_num_of_trucks, vehicle_capacity, demand_from_nodes, nodes_time_window)
if(current_num_of_trucks > 100):
break
if(cntr == 10):
break
#TODO: Add binary search instead of linear search.
'''
Optimisation Inventory Routing Problem Part:
Stations = Total nodes - depot
'''
format_demand_data()
format_obtained_routes(number_of_nodes - 1, obtained_routes = 'output/route.txt')
```
```python
if __name__ == '__main__':
main()
```
minimum_num_of_vehicles = 4
----------------------------------------
Current Number of Vehicles = 4
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 5
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 6
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 7
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 8
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 9
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 10
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 11
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 12
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 13
No Solution Yet
----------------------------------------
Incrementing number of vehicles
Current Number of Vehicles = 14
No Solution Yet
----------------------------------------
Incrementing number of vehicles
#### Some Results
Upper Bound = 12m | Min Vehicles 5 | Min time = 46m
Upper Bound = 11m | Min Vehicles 6 | Min time = 49m
```python
```
```python
```
# Inventory Routing Formulation
<b>Decision Variable:</b>
\begin{equation}
\begin{array}{11}
\ Y_{i,k} & \forall i \in R, k \in WD
\end{array}
\end{equation}
\begin{equation}
\begin{array}{11}
\ X_{i,j,k} & \forall i \in R, j \in S, k \in WD
\end{array}
\end{equation}
<b>Objective Function:</b>
\begin{equation}
\begin{array}{ll}
\text{minimise} & \sum_{i \in R} \sum_{k \in W} Y_{i,k} c_{i,k}\\
\end{array}
\end{equation}
<b>Subject to:</b>
\
<i>Respecting Demand Constraints</i>
\begin{equation}
\begin{array}{11}
\sum_{i \in R} \sum_{k \in W} X_{i,j,k} m_{i,j} = d_j & \forall j \in S
\end{array}
\end{equation}
\
<i>Respecting Delivery Acceptance Constraints</i>
\begin{equation}
\begin{array}{11}
\sum_{i \in R} X_{i,j,k} m_{i,j} \le a_j & \forall j \in S, k \in W
\end{array}
\end{equation}
\
<i>Respecting Truck Constraints</i>
\begin{equation}
\begin{array}{11}
\sum_{j \in S} X_{i,j,k}m_{i,j} \le C_iY_{i,k} & \forall i \in R, k \in W
\end{array}
\end{equation}
\
<b>General Representations</b>
\
<i>Sets</i>
* <i>R</i>: Set of routes
* <i>S</i>: Set of stores
* <i>W</i>: Set of days in a week
\
<i>Parameters</i>
\
$
\begin{equation}
\begin{array}{11}
d_j & \text{: Weekly demand of store j,} & j \in S
\end{array}
\end{equation}
$
$
\begin{equation}
\begin{array}{11}
a_j & \text{: Delivery acceptance limits for store j,} & j \in S
\end{array}
\end{equation}
$
$
\begin{equation}
\begin{array}{11}
m_{i,j} & \text{: Dummy variable to indicate 1 if store j is along route i and 0 otherwise,} & i \in R, j \in S
\end{array}
\end{equation}
$
$
\begin{equation}
\begin{array}{11}
c_{i,k} & \text{: Cost of truck on each route i on day k }
\end{array}
\end{equation}
$
$
\begin{equation}
\begin{array}{11}
C_{i} & \text{: Full capacity of truck on route i on each day }
\end{array}
\end{equation}
$
\
<i>Variables</i>
$
\begin{equation}
\begin{array}{11}
X_{i,j,k} & \text{: Units on each route i to be delivered to a store j on day k} \\
Y_{i,k} & \text{: If route i is used on day k}
\end{array}
\end{equation}
$
\
<i>Entities</i>
* <i>i</i>: Route in R
* <i>j</i>: Store in S
* <i>k</i>: Day in W
```python
#Importing Packages
from gurobipy import *
from math import sqrt
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import xlrd
```
```python
#Importing Data
routes = pd.read_csv("data/final_routing_table.csv").to_numpy()
print(routes)
demand = pd.read_csv("data/demand_delivery_contraint.csv").to_numpy()
print(demand)
```
[[0 'S0' nan nan nan nan nan nan nan nan nan]
[1 'S1' nan nan nan nan nan nan nan nan nan]
[2 'S2' nan nan nan nan nan nan nan nan nan]
[3 'S3' nan nan nan nan nan nan nan nan nan]
[4 'S4' nan nan nan nan nan nan nan nan nan]
[5 'S5' nan nan nan nan nan nan nan nan nan]
[6 'S6' nan nan nan nan nan nan nan nan nan]
[7 'S7' nan nan nan nan nan nan nan nan nan]
[8 'S8' nan nan nan nan nan nan nan nan nan]
[9 'S9' nan nan nan nan nan nan nan nan nan]
[10 'S10' nan nan nan nan nan nan nan nan nan]
[11 'S11' nan nan nan nan nan nan nan nan nan]
[12 'S12' nan nan nan nan nan nan nan nan nan]
[13 'S13' nan nan nan nan nan nan nan nan nan]
[14 'S14' nan nan nan nan nan nan nan nan nan]
[15 'S15' nan nan nan nan nan nan nan nan nan]
[16 'S16' nan nan nan nan nan nan nan nan nan]
[17 'S17' nan nan nan nan nan nan nan nan nan]
[18 'S18' nan nan nan nan nan nan nan nan nan]
[19 'S5' 'S3' 'S7' 'S9' 'S16' nan nan nan nan nan]
[20 'S2' 'S4' 'S12' 'S14' nan nan nan nan nan nan]
[21 'S8' 'S6' 'S1' 'S10' 'S19' nan nan nan nan nan]
[22 'S11' 'S17' 'S13' nan nan nan nan nan nan nan]
[23 'S18' 'S15' nan nan nan nan nan nan nan nan]]
[['S0' 1 0.5]
['S1' 7 3.5]
['S2' 6 3.0]
['S3' 7 3.5]
['S4' 3 1.5]
['S5' 10 5.0]
['S6' 6 3.0]
['S7' 10 5.0]
['S8' 10 5.0]
['S9' 6 3.0]
['S10' 5 2.5]
['S11' 10 5.0]
['S12' 10 5.0]
['S13' 6 3.0]
['S14' 10 5.0]
['S15' 5 2.5]
['S16' 7 3.5]
['S17' 1 0.5]
['S18' 1 0.5]]
```python
#Demand
d = demand[:,1]
#Delivery constraints
dc = demand[:,2]
#Route matrix
#Store names
snames = demand[:,0]
count = 1
num_routes = len(routes)
num_stores = len(demand)
mat = np.zeros(num_routes)
for i in snames:
routestore1 = (routes[:,1] == str(i)) * 1
routestore2 = (routes[:,2] == str(i)) * 1
mainroutestore = routestore1 + routestore2
mat = np.row_stack((mat, mainroutestore))
#mat = np.concatenate((mat, mainroutestore),axis=1)
mat = np.array(mat, dtype='int16')[1:,:].transpose((1,0))
print(mat)
```
[[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
[0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1]]
```python
#Shipping costs
c = np.full((num_routes, num_stores), 1000)
c
#Supply constraints (truck capacity)
t = np.full((num_routes,7), 30)
t
```
array([[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30],
[30, 30, 30, 30, 30, 30, 30]])
```python
#Transportation Problem
m4 = Model('transportation')
#Variables
#Variables are in proportion
var = m4.addVars(num_routes,num_stores,7)
yvar = m4.addVars(num_routes,7,vtype = GRB.BINARY)
#Objective
m4.setObjective(sum(yvar[i,k] for i in range(num_routes) for k in range(7)), GRB.MINIMIZE)
#Weekly Demand
for j in range(num_stores):
m4.addConstr(sum(var[i,j,k]*mat[i,j] for i in range(num_routes) for k in range(7)) == d[j])
#Delivery Constraints
for j in range(num_stores):
for k in range(7):
m4.addConstr(sum(var[i,j,k]*mat[i,j] for i in range(num_routes)) <= 0.6*dc[j])
#Supply constraint
for i in range(num_routes):
for k in range(7):
m4.addConstr(sum(var[i,j,k]*mat[i,j] for j in range(num_stores)) <= t[i,k]*yvar[i,k])
#Solving the optimization problem
m4.optimize()
#Printing the optimal solutions obtained
print("Optimal Solutions:")
for i, val in var.items():
if val.getAttr("x") != 0:
print("Number of units from route %g to store %g on day %g:\t %g " %(i[0]+1, i[1]+1, i[2]+1, val.getAttr("x")))
#Printing y
for i, val in yvar.items():
print("Run route %g on day %g:\t %g " %(i[0]+1, i[1]+1, val.getAttr("x")))
print(yvar)
```
Academic license - for non-commercial use only - expires 2021-05-10
Using license file /Users/adityagoel/gurobi.lic
Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (mac64)
Thread count: 2 physical cores, 4 logical processors, using up to 4 threads
Optimize a model with 320 rows, 3360 columns and 777 nonzeros
Model fingerprint: 0x9cfca9f0
Variable types: 3192 continuous, 168 integer (168 binary)
Coefficient statistics:
Matrix range [1e+00, 3e+01]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [3e-01, 1e+01]
Found heuristic solution: objective 75.0000000
Presolve removed 283 rows and 3311 columns
Presolve time: 0.04s
Presolved: 37 rows, 49 columns, 105 nonzeros
Found heuristic solution: objective 60.0000000
Variable types: 28 continuous, 21 integer (21 binary)
Root relaxation: objective 5.533333e+01, 35 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 55.33333 0 4 60.00000 55.33333 7.78% - 0s
H 0 0 58.0000000 55.33333 4.60% - 0s
H 0 0 57.0000000 55.33333 2.92% - 0s
H 0 0 56.0000000 55.33333 1.19% - 0s
0 0 55.33333 0 4 56.00000 55.33333 1.19% - 0s
Explored 1 nodes (35 simplex iterations) in 0.14 seconds
Thread count was 4 (of 4 available processors)
Solution count 5: 56 57 58 ... 75
Optimal solution found (tolerance 1.00e-04)
Best objective 5.600000000000e+01, best bound 5.600000000000e+01, gap 0.0000%
Optimal Solutions:
Number of units from route 1 to store 1 on day 2: 0.3
Number of units from route 1 to store 1 on day 3: 0.1
Number of units from route 1 to store 1 on day 4: 0.3
Number of units from route 1 to store 1 on day 5: 0.3
Number of units from route 2 to store 2 on day 2: 2.1
Number of units from route 2 to store 2 on day 4: 2.1
Number of units from route 2 to store 2 on day 5: 0.7
Number of units from route 2 to store 2 on day 7: 2.1
Number of units from route 8 to store 8 on day 2: 3
Number of units from route 8 to store 8 on day 4: 3
Number of units from route 8 to store 8 on day 5: 1
Number of units from route 8 to store 8 on day 7: 3
Number of units from route 10 to store 10 on day 2: 1.8
Number of units from route 10 to store 10 on day 3: 0.6
Number of units from route 10 to store 10 on day 4: 1.8
Number of units from route 10 to store 10 on day 5: 1.8
Number of units from route 11 to store 11 on day 2: 1.5
Number of units from route 11 to store 11 on day 3: 0.5
Number of units from route 11 to store 11 on day 4: 1.5
Number of units from route 11 to store 11 on day 5: 1.5
Number of units from route 13 to store 13 on day 2: 3
Number of units from route 13 to store 13 on day 4: 3
Number of units from route 13 to store 13 on day 5: 1
Number of units from route 13 to store 13 on day 7: 3
Number of units from route 14 to store 14 on day 2: 1.8
Number of units from route 14 to store 14 on day 3: 0.6
Number of units from route 14 to store 14 on day 4: 1.8
Number of units from route 14 to store 14 on day 5: 1.8
Number of units from route 15 to store 15 on day 2: 3
Number of units from route 15 to store 15 on day 4: 3
Number of units from route 15 to store 15 on day 5: 1
Number of units from route 15 to store 15 on day 7: 3
Number of units from route 17 to store 17 on day 2: 2.1
Number of units from route 17 to store 17 on day 4: 2.1
Number of units from route 17 to store 17 on day 5: 0.7
Number of units from route 17 to store 17 on day 7: 2.1
Number of units from route 20 to store 4 on day 1: 2.1
Number of units from route 20 to store 4 on day 2: 0.7
Number of units from route 20 to store 4 on day 3: 2.1
Number of units from route 20 to store 4 on day 6: 2.1
Number of units from route 20 to store 6 on day 1: 3
Number of units from route 20 to store 6 on day 2: 1
Number of units from route 20 to store 6 on day 3: 3
Number of units from route 20 to store 6 on day 6: 3
Number of units from route 21 to store 3 on day 3: 1.8
Number of units from route 21 to store 3 on day 4: 1.8
Number of units from route 21 to store 3 on day 5: 0.6
Number of units from route 21 to store 3 on day 7: 1.8
Number of units from route 21 to store 5 on day 3: 0.9
Number of units from route 21 to store 5 on day 4: 0.9
Number of units from route 21 to store 5 on day 5: 0.3
Number of units from route 21 to store 5 on day 7: 0.9
Number of units from route 22 to store 7 on day 1: 1.8
Number of units from route 22 to store 7 on day 2: 0.6
Number of units from route 22 to store 7 on day 3: 1.8
Number of units from route 22 to store 7 on day 6: 1.8
Number of units from route 22 to store 9 on day 1: 3
Number of units from route 22 to store 9 on day 2: 3
Number of units from route 22 to store 9 on day 3: 3
Number of units from route 22 to store 9 on day 6: 1
Number of units from route 23 to store 12 on day 4: 3
Number of units from route 23 to store 12 on day 5: 3
Number of units from route 23 to store 12 on day 6: 1
Number of units from route 23 to store 12 on day 7: 3
Number of units from route 23 to store 18 on day 4: 0.3
Number of units from route 23 to store 18 on day 5: 0.3
Number of units from route 23 to store 18 on day 6: 0.1
Number of units from route 23 to store 18 on day 7: 0.3
Number of units from route 24 to store 16 on day 1: 1.5
Number of units from route 24 to store 16 on day 4: 0.5
Number of units from route 24 to store 16 on day 6: 1.5
Number of units from route 24 to store 16 on day 7: 1.5
Number of units from route 24 to store 19 on day 1: 0.3
Number of units from route 24 to store 19 on day 4: 0.3
Number of units from route 24 to store 19 on day 6: 0.3
Number of units from route 24 to store 19 on day 7: 0.1
Run route 1 on day 1: -0
Run route 1 on day 2: 1
Run route 1 on day 3: 1
Run route 1 on day 4: 1
Run route 1 on day 5: 1
Run route 1 on day 6: -0
Run route 1 on day 7: -0
Run route 2 on day 1: -0
Run route 2 on day 2: 1
Run route 2 on day 3: -0
Run route 2 on day 4: 1
Run route 2 on day 5: 1
Run route 2 on day 6: -0
Run route 2 on day 7: 1
Run route 3 on day 1: -0
Run route 3 on day 2: -0
Run route 3 on day 3: -0
Run route 3 on day 4: -0
Run route 3 on day 5: -0
Run route 3 on day 6: -0
Run route 3 on day 7: -0
Run route 4 on day 1: -0
Run route 4 on day 2: -0
Run route 4 on day 3: -0
Run route 4 on day 4: -0
Run route 4 on day 5: -0
Run route 4 on day 6: -0
Run route 4 on day 7: -0
Run route 5 on day 1: -0
Run route 5 on day 2: -0
Run route 5 on day 3: -0
Run route 5 on day 4: -0
Run route 5 on day 5: -0
Run route 5 on day 6: -0
Run route 5 on day 7: -0
Run route 6 on day 1: -0
Run route 6 on day 2: -0
Run route 6 on day 3: -0
Run route 6 on day 4: -0
Run route 6 on day 5: -0
Run route 6 on day 6: -0
Run route 6 on day 7: -0
Run route 7 on day 1: -0
Run route 7 on day 2: -0
Run route 7 on day 3: -0
Run route 7 on day 4: -0
Run route 7 on day 5: -0
Run route 7 on day 6: -0
Run route 7 on day 7: -0
Run route 8 on day 1: -0
Run route 8 on day 2: 1
Run route 8 on day 3: -0
Run route 8 on day 4: 1
Run route 8 on day 5: 1
Run route 8 on day 6: -0
Run route 8 on day 7: 1
Run route 9 on day 1: -0
Run route 9 on day 2: -0
Run route 9 on day 3: -0
Run route 9 on day 4: -0
Run route 9 on day 5: -0
Run route 9 on day 6: -0
Run route 9 on day 7: -0
Run route 10 on day 1: -0
Run route 10 on day 2: 1
Run route 10 on day 3: 1
Run route 10 on day 4: 1
Run route 10 on day 5: 1
Run route 10 on day 6: -0
Run route 10 on day 7: -0
Run route 11 on day 1: -0
Run route 11 on day 2: 1
Run route 11 on day 3: 1
Run route 11 on day 4: 1
Run route 11 on day 5: 1
Run route 11 on day 6: -0
Run route 11 on day 7: -0
Run route 12 on day 1: -0
Run route 12 on day 2: -0
Run route 12 on day 3: -0
Run route 12 on day 4: -0
Run route 12 on day 5: -0
Run route 12 on day 6: -0
Run route 12 on day 7: -0
Run route 13 on day 1: -0
Run route 13 on day 2: 1
Run route 13 on day 3: -0
Run route 13 on day 4: 1
Run route 13 on day 5: 1
Run route 13 on day 6: -0
Run route 13 on day 7: 1
Run route 14 on day 1: -0
Run route 14 on day 2: 1
Run route 14 on day 3: 1
Run route 14 on day 4: 1
Run route 14 on day 5: 1
Run route 14 on day 6: -0
Run route 14 on day 7: -0
Run route 15 on day 1: -0
Run route 15 on day 2: 1
Run route 15 on day 3: -0
Run route 15 on day 4: 1
Run route 15 on day 5: 1
Run route 15 on day 6: -0
Run route 15 on day 7: 1
Run route 16 on day 1: -0
Run route 16 on day 2: -0
Run route 16 on day 3: -0
Run route 16 on day 4: -0
Run route 16 on day 5: -0
Run route 16 on day 6: -0
Run route 16 on day 7: -0
Run route 17 on day 1: -0
Run route 17 on day 2: 1
Run route 17 on day 3: -0
Run route 17 on day 4: 1
Run route 17 on day 5: 1
Run route 17 on day 6: -0
Run route 17 on day 7: 1
Run route 18 on day 1: -0
Run route 18 on day 2: -0
Run route 18 on day 3: -0
Run route 18 on day 4: -0
Run route 18 on day 5: -0
Run route 18 on day 6: -0
Run route 18 on day 7: -0
Run route 19 on day 1: -0
Run route 19 on day 2: -0
Run route 19 on day 3: -0
Run route 19 on day 4: -0
Run route 19 on day 5: -0
Run route 19 on day 6: -0
Run route 19 on day 7: -0
Run route 20 on day 1: 1
Run route 20 on day 2: 1
Run route 20 on day 3: 1
Run route 20 on day 4: -0
Run route 20 on day 5: -0
Run route 20 on day 6: 1
Run route 20 on day 7: -0
Run route 21 on day 1: -0
Run route 21 on day 2: -0
Run route 21 on day 3: 1
Run route 21 on day 4: 1
Run route 21 on day 5: 1
Run route 21 on day 6: -0
Run route 21 on day 7: 1
Run route 22 on day 1: 1
Run route 22 on day 2: 1
Run route 22 on day 3: 1
Run route 22 on day 4: -0
Run route 22 on day 5: -0
Run route 22 on day 6: 1
Run route 22 on day 7: -0
Run route 23 on day 1: -0
Run route 23 on day 2: -0
Run route 23 on day 3: -0
Run route 23 on day 4: 1
Run route 23 on day 5: 1
Run route 23 on day 6: 1
Run route 23 on day 7: 1
Run route 24 on day 1: 1
Run route 24 on day 2: -0
Run route 24 on day 3: -0
Run route 24 on day 4: 1
Run route 24 on day 5: -0
Run route 24 on day 6: 1
Run route 24 on day 7: 1
{(0, 0): <gurobi.Var C3192 (value -0.0)>, (0, 1): <gurobi.Var C3193 (value 1.0)>, (0, 2): <gurobi.Var C3194 (value 1.0)>, (0, 3): <gurobi.Var C3195 (value 1.0)>, (0, 4): <gurobi.Var C3196 (value 1.0)>, (0, 5): <gurobi.Var C3197 (value -0.0)>, (0, 6): <gurobi.Var C3198 (value -0.0)>, (1, 0): <gurobi.Var C3199 (value -0.0)>, (1, 1): <gurobi.Var C3200 (value 1.0)>, (1, 2): <gurobi.Var C3201 (value -0.0)>, (1, 3): <gurobi.Var C3202 (value 1.0)>, (1, 4): <gurobi.Var C3203 (value 1.0)>, (1, 5): <gurobi.Var C3204 (value -0.0)>, (1, 6): <gurobi.Var C3205 (value 1.0)>, (2, 0): <gurobi.Var C3206 (value -0.0)>, (2, 1): <gurobi.Var C3207 (value -0.0)>, (2, 2): <gurobi.Var C3208 (value -0.0)>, (2, 3): <gurobi.Var C3209 (value -0.0)>, (2, 4): <gurobi.Var C3210 (value -0.0)>, (2, 5): <gurobi.Var C3211 (value -0.0)>, (2, 6): <gurobi.Var C3212 (value -0.0)>, (3, 0): <gurobi.Var C3213 (value -0.0)>, (3, 1): <gurobi.Var C3214 (value -0.0)>, (3, 2): <gurobi.Var C3215 (value -0.0)>, (3, 3): <gurobi.Var C3216 (value -0.0)>, (3, 4): <gurobi.Var C3217 (value -0.0)>, (3, 5): <gurobi.Var C3218 (value -0.0)>, (3, 6): <gurobi.Var C3219 (value -0.0)>, (4, 0): <gurobi.Var C3220 (value -0.0)>, (4, 1): <gurobi.Var C3221 (value -0.0)>, (4, 2): <gurobi.Var C3222 (value -0.0)>, (4, 3): <gurobi.Var C3223 (value -0.0)>, (4, 4): <gurobi.Var C3224 (value -0.0)>, (4, 5): <gurobi.Var C3225 (value -0.0)>, (4, 6): <gurobi.Var C3226 (value -0.0)>, (5, 0): <gurobi.Var C3227 (value -0.0)>, (5, 1): <gurobi.Var C3228 (value -0.0)>, (5, 2): <gurobi.Var C3229 (value -0.0)>, (5, 3): <gurobi.Var C3230 (value -0.0)>, (5, 4): <gurobi.Var C3231 (value -0.0)>, (5, 5): <gurobi.Var C3232 (value -0.0)>, (5, 6): <gurobi.Var C3233 (value -0.0)>, (6, 0): <gurobi.Var C3234 (value -0.0)>, (6, 1): <gurobi.Var C3235 (value -0.0)>, (6, 2): <gurobi.Var C3236 (value -0.0)>, (6, 3): <gurobi.Var C3237 (value -0.0)>, (6, 4): <gurobi.Var C3238 (value -0.0)>, (6, 5): <gurobi.Var C3239 (value -0.0)>, (6, 6): <gurobi.Var C3240 (value -0.0)>, (7, 0): <gurobi.Var C3241 (value -0.0)>, (7, 1): <gurobi.Var C3242 (value 1.0)>, (7, 2): <gurobi.Var C3243 (value -0.0)>, (7, 3): <gurobi.Var C3244 (value 1.0)>, (7, 4): <gurobi.Var C3245 (value 1.0)>, (7, 5): <gurobi.Var C3246 (value -0.0)>, (7, 6): <gurobi.Var C3247 (value 1.0)>, (8, 0): <gurobi.Var C3248 (value -0.0)>, (8, 1): <gurobi.Var C3249 (value -0.0)>, (8, 2): <gurobi.Var C3250 (value -0.0)>, (8, 3): <gurobi.Var C3251 (value -0.0)>, (8, 4): <gurobi.Var C3252 (value -0.0)>, (8, 5): <gurobi.Var C3253 (value -0.0)>, (8, 6): <gurobi.Var C3254 (value -0.0)>, (9, 0): <gurobi.Var C3255 (value -0.0)>, (9, 1): <gurobi.Var C3256 (value 1.0)>, (9, 2): <gurobi.Var C3257 (value 1.0)>, (9, 3): <gurobi.Var C3258 (value 1.0)>, (9, 4): <gurobi.Var C3259 (value 1.0)>, (9, 5): <gurobi.Var C3260 (value -0.0)>, (9, 6): <gurobi.Var C3261 (value -0.0)>, (10, 0): <gurobi.Var C3262 (value -0.0)>, (10, 1): <gurobi.Var C3263 (value 1.0)>, (10, 2): <gurobi.Var C3264 (value 1.0)>, (10, 3): <gurobi.Var C3265 (value 1.0)>, (10, 4): <gurobi.Var C3266 (value 1.0)>, (10, 5): <gurobi.Var C3267 (value -0.0)>, (10, 6): <gurobi.Var C3268 (value -0.0)>, (11, 0): <gurobi.Var C3269 (value -0.0)>, (11, 1): <gurobi.Var C3270 (value -0.0)>, (11, 2): <gurobi.Var C3271 (value -0.0)>, (11, 3): <gurobi.Var C3272 (value -0.0)>, (11, 4): <gurobi.Var C3273 (value -0.0)>, (11, 5): <gurobi.Var C3274 (value -0.0)>, (11, 6): <gurobi.Var C3275 (value -0.0)>, (12, 0): <gurobi.Var C3276 (value -0.0)>, (12, 1): <gurobi.Var C3277 (value 1.0)>, (12, 2): <gurobi.Var C3278 (value -0.0)>, (12, 3): <gurobi.Var C3279 (value 1.0)>, (12, 4): <gurobi.Var C3280 (value 1.0)>, (12, 5): <gurobi.Var C3281 (value -0.0)>, (12, 6): <gurobi.Var C3282 (value 1.0)>, (13, 0): <gurobi.Var C3283 (value -0.0)>, (13, 1): <gurobi.Var C3284 (value 1.0)>, (13, 2): <gurobi.Var C3285 (value 1.0)>, (13, 3): <gurobi.Var C3286 (value 1.0)>, (13, 4): <gurobi.Var C3287 (value 1.0)>, (13, 5): <gurobi.Var C3288 (value -0.0)>, (13, 6): <gurobi.Var C3289 (value -0.0)>, (14, 0): <gurobi.Var C3290 (value -0.0)>, (14, 1): <gurobi.Var C3291 (value 1.0)>, (14, 2): <gurobi.Var C3292 (value -0.0)>, (14, 3): <gurobi.Var C3293 (value 1.0)>, (14, 4): <gurobi.Var C3294 (value 1.0)>, (14, 5): <gurobi.Var C3295 (value -0.0)>, (14, 6): <gurobi.Var C3296 (value 1.0)>, (15, 0): <gurobi.Var C3297 (value -0.0)>, (15, 1): <gurobi.Var C3298 (value -0.0)>, (15, 2): <gurobi.Var C3299 (value -0.0)>, (15, 3): <gurobi.Var C3300 (value -0.0)>, (15, 4): <gurobi.Var C3301 (value -0.0)>, (15, 5): <gurobi.Var C3302 (value -0.0)>, (15, 6): <gurobi.Var C3303 (value -0.0)>, (16, 0): <gurobi.Var C3304 (value -0.0)>, (16, 1): <gurobi.Var C3305 (value 1.0)>, (16, 2): <gurobi.Var C3306 (value -0.0)>, (16, 3): <gurobi.Var C3307 (value 1.0)>, (16, 4): <gurobi.Var C3308 (value 1.0)>, (16, 5): <gurobi.Var C3309 (value -0.0)>, (16, 6): <gurobi.Var C3310 (value 1.0)>, (17, 0): <gurobi.Var C3311 (value -0.0)>, (17, 1): <gurobi.Var C3312 (value -0.0)>, (17, 2): <gurobi.Var C3313 (value -0.0)>, (17, 3): <gurobi.Var C3314 (value -0.0)>, (17, 4): <gurobi.Var C3315 (value -0.0)>, (17, 5): <gurobi.Var C3316 (value -0.0)>, (17, 6): <gurobi.Var C3317 (value -0.0)>, (18, 0): <gurobi.Var C3318 (value -0.0)>, (18, 1): <gurobi.Var C3319 (value -0.0)>, (18, 2): <gurobi.Var C3320 (value -0.0)>, (18, 3): <gurobi.Var C3321 (value -0.0)>, (18, 4): <gurobi.Var C3322 (value -0.0)>, (18, 5): <gurobi.Var C3323 (value -0.0)>, (18, 6): <gurobi.Var C3324 (value -0.0)>, (19, 0): <gurobi.Var C3325 (value 1.0)>, (19, 1): <gurobi.Var C3326 (value 1.0)>, (19, 2): <gurobi.Var C3327 (value 1.0)>, (19, 3): <gurobi.Var C3328 (value -0.0)>, (19, 4): <gurobi.Var C3329 (value -0.0)>, (19, 5): <gurobi.Var C3330 (value 1.0)>, (19, 6): <gurobi.Var C3331 (value -0.0)>, (20, 0): <gurobi.Var C3332 (value -0.0)>, (20, 1): <gurobi.Var C3333 (value -0.0)>, (20, 2): <gurobi.Var C3334 (value 1.0)>, (20, 3): <gurobi.Var C3335 (value 1.0)>, (20, 4): <gurobi.Var C3336 (value 1.0)>, (20, 5): <gurobi.Var C3337 (value -0.0)>, (20, 6): <gurobi.Var C3338 (value 1.0)>, (21, 0): <gurobi.Var C3339 (value 1.0)>, (21, 1): <gurobi.Var C3340 (value 1.0)>, (21, 2): <gurobi.Var C3341 (value 1.0)>, (21, 3): <gurobi.Var C3342 (value -0.0)>, (21, 4): <gurobi.Var C3343 (value -0.0)>, (21, 5): <gurobi.Var C3344 (value 1.0)>, (21, 6): <gurobi.Var C3345 (value -0.0)>, (22, 0): <gurobi.Var C3346 (value -0.0)>, (22, 1): <gurobi.Var C3347 (value -0.0)>, (22, 2): <gurobi.Var C3348 (value -0.0)>, (22, 3): <gurobi.Var C3349 (value 1.0)>, (22, 4): <gurobi.Var C3350 (value 1.0)>, (22, 5): <gurobi.Var C3351 (value 1.0)>, (22, 6): <gurobi.Var C3352 (value 1.0)>, (23, 0): <gurobi.Var C3353 (value 1.0)>, (23, 1): <gurobi.Var C3354 (value -0.0)>, (23, 2): <gurobi.Var C3355 (value -0.0)>, (23, 3): <gurobi.Var C3356 (value 1.0)>, (23, 4): <gurobi.Var C3357 (value -0.0)>, (23, 5): <gurobi.Var C3358 (value 1.0)>, (23, 6): <gurobi.Var C3359 (value 1.0)>}
```python
for i in range(num_routes):
for k in range(7):
print(yvar[i,k].getAttr("x"))
```
-0.0
1.0
1.0
1.0
1.0
-0.0
-0.0
-0.0
1.0
-0.0
1.0
1.0
-0.0
1.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
1.0
-0.0
1.0
1.0
-0.0
1.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
1.0
1.0
1.0
1.0
-0.0
-0.0
-0.0
1.0
1.0
1.0
1.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
1.0
-0.0
1.0
1.0
-0.0
1.0
-0.0
1.0
1.0
1.0
1.0
-0.0
-0.0
-0.0
1.0
-0.0
1.0
1.0
-0.0
1.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
1.0
-0.0
1.0
1.0
-0.0
1.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
-0.0
1.0
1.0
1.0
-0.0
-0.0
1.0
-0.0
-0.0
-0.0
1.0
1.0
1.0
-0.0
1.0
1.0
1.0
1.0
-0.0
-0.0
1.0
-0.0
-0.0
-0.0
-0.0
1.0
1.0
1.0
1.0
1.0
-0.0
-0.0
1.0
-0.0
1.0
1.0
```python
```
```python
print("Finish")
```
Finish
```python
```
| 0eda39a6e45423069740bc4ba2f034efd42cd66a | 55,539 | ipynb | Jupyter Notebook | Inventory Routing Problem/IVR_Final-Ammends.ipynb | Ellusionists/Thesis | 3c5b33ef2379b3ac5c8974de5c25b656bd410d26 | [
"Apache-2.0"
] | null | null | null | Inventory Routing Problem/IVR_Final-Ammends.ipynb | Ellusionists/Thesis | 3c5b33ef2379b3ac5c8974de5c25b656bd410d26 | [
"Apache-2.0"
] | null | null | null | Inventory Routing Problem/IVR_Final-Ammends.ipynb | Ellusionists/Thesis | 3c5b33ef2379b3ac5c8974de5c25b656bd410d26 | [
"Apache-2.0"
] | null | null | null | 39.305732 | 6,940 | 0.497218 | true | 16,298 | Qwen/Qwen-72B | 1. YES
2. YES | 0.874077 | 0.658418 | 0.575508 | __label__eng_Latn | 0.700192 | 0.175427 |
```python
# deep learning related tools
import sympy
import numpy as np
import tensorflow as tf
# quantum ML tools
import tensorflow_quantum as tfq
import cirq
import collections
# visualization tools (inline matploit only notebook needed)
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
```python
## Prepare classicial data..
# prepare mnist data
def load_mnist(prepro=True):
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
if prepro:
# Rescale the images from [0,255] to the [0.0,1.0] range.
x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0
return (x_train, y_train), (x_test, y_test)
# only keep label 3, 6 imgs ; transform into binary label ( True : 3 / False : 6 )
def filter_36(x, y):
keep = (y == 3) | (y == 6)
x, y = x[keep], y[keep]
y = y == 3
return x, y
#get dummy meta information
def show_meta_info(tra_data, tst_data, tra_lab, only_vis=False):
if only_vis:
print("Number of training examples:", len(tra_data))
print("Number of test examples:", len(tst_data))
return
plt.figure()
plt.title( str(tra_lab[0]) )
plt.imshow(tra_data[0, :, :, 0])
plt.colorbar()
# Downsampling image may allow the duplicate image exists
def remove_contradicting(xs, ys):
mapping = collections.defaultdict(set)
orig_x = {}
# Determine the set of labels for each unique image:
for x,y in zip(xs,ys):
orig_x[tuple(x.flatten())] = x
mapping[tuple(x.flatten())].add(y)
new_x = []
new_y = []
# use set-dict to store label & dict
for flatten_x in mapping:
x = orig_x[flatten_x]
labels = mapping[flatten_x]
if len(labels) == 1:
new_x.append(x)
new_y.append(next(iter(labels)))
else:
# Throw out images that match more than one label.
pass
num_uniq_3 = sum(1 for value in mapping.values() if len(value) == 1 and True in value)
num_uniq_6 = sum(1 for value in mapping.values() if len(value) == 1 and False in value)
num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2)
print("Number of unique images:", len(mapping.values()))
print("Number of unique 3s: ", num_uniq_3)
print("Number of unique 6s: ", num_uniq_6)
print("Number of unique contradicting labels (both 3 and 6): ", num_uniq_both)
print()
print("Initial number of images: ", len(xs))
print("Remaining non-contradicting unique images: ", len(new_x))
return np.array(new_x), np.array(new_y)
```
```python
## Quantum data transformation ..
def convert_to_circuit(image):
"""Encode truncated classical image into quantum datapoint."""
qbit_shape = image.shape[:-1] # eliminate batch size channel
values = np.ndarray.flatten(image)
qubits = cirq.GridQubit.rect(*qbit_shape)
circuit = cirq.Circuit()
for i, value in enumerate(values):
if value:
circuit.append(cirq.X(qubits[i]))
return circuit
class CircuitLayerBuilder():
def __init__(self, data_qubits, readout):
self.data_qubits = data_qubits
self.readout = readout
def add_layer(self, circuit, gate, prefix):
for i, qubit in enumerate(self.data_qubits):
symbol = sympy.Symbol(prefix + '-' + str(i))
circuit.append(gate(qubit, self.readout)**symbol)
```
## **Transform classical data into quantum data (quantum circuit)**
```python
## Prepare classical data
(x_train, y_train), (x_test, y_test) = load_mnist(prepro=True)
#show_meta_info(x_train, x_test, y_train)
## Reduce the origianl task into binary-classification
x_train, y_train = filter_36(x_train, y_train)
x_test, y_test = filter_36(x_test, y_test)
#print("\n\n After preprocessing : \n\n")
#show_meta_info(x_train, x_test, y_train)
## Down-sampling the data itself for fitting the qbit-limitation (about 20 bits)
dwn_im = lambda im, dwn_shap : tf.image.resize(im, dwn_shap).numpy()
x_tra_dwn = dwn_im(x_train, (4, 4))
x_tst_dwn = dwn_im(x_test, (4, 4)) ## 4 x 4 = 16 < 20 bit hardware limitation..
x_tra, y_tra = remove_contradicting(x_tra_dwn, y_train) # dwn_im may let the img become similar
#show_meta_info(x_tra, x_tst_dwn, y_tra)
## Encode the data as quantum circuits
THRESHOLD = 0.5
x_tra_bin = np.array(x_tra > THRESHOLD, dtype=np.float32)
x_tst_bin = np.array(x_tst_dwn > THRESHOLD, dtype=np.float32)
#_ = remove_contradicting(x_train_bin, y_tra) # num of data may not enough..
show_meta_info(x_tra_bin, x_tst_bin, y_tra)
## package binary image into quantum circuit
x_tra_circ = [ convert_to_circuit(bin_im) for bin_im in x_tra_bin ]
x_tst_circ = [ convert_to_circuit(bin_im) for bin_im in x_tst_bin ]
SVGCircuit(x_tra_circ[0])
```
## **Build Qauntum Classifier**
```python
## Convert circuit into tf.tensor
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
demo_builder = CircuitLayerBuilder(data_qubits = cirq.GridQubit.rect(4,1),
readout=cirq.GridQubit(-1,-1))
circuit = cirq.Circuit()
demo_builder.add_layer(circuit, gate = cirq.XX, prefix='xx')
SVGCircuit(circuit)
```
```python
def create_quantum_model():
"""Create a QNN model circuit and readout operation to go along with it."""
data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid.
readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1]
circuit = cirq.Circuit()
# Prepare the readout qubit.
circuit.append(cirq.X(readout))
circuit.append(cirq.H(readout))
builder = CircuitLayerBuilder(
data_qubits = data_qubits,
readout=readout)
# Then add layers (experiment by adding more).
builder.add_layer(circuit, cirq.XX, "xx1")
builder.add_layer(circuit, cirq.ZZ, "zz1")
# Finally, prepare the readout qubit.
circuit.append(cirq.H(readout))
return circuit, cirq.Z(readout)
model_circuit, model_readout = create_quantum_model()
```
```python
# Build the Keras model.
model = tf.keras.Sequential([
# The input is the data-circuit, encoded as a tf.string
tf.keras.layers.Input(shape=(), dtype=tf.string),
# The PQC layer returns the expected value of the readout gate, range [-1,1].
tfq.layers.PQC(model_circuit, model_readout),
])
```
Next, describe the training procedure to the model, using the `compile` method.
Since the the expected readout is in the range `[-1,1]`, optimizing the hinge loss is a somewhat natural fit.
Note: Another valid approach would be to shift the output range to `[0,1]`, and treat it as the probability the model assigns to class `3`. This could be used with a standard a `tf.losses.BinaryCrossentropy` loss.
To use the hinge loss here you need to make two small adjustments. First convert the labels, `y_train_nocon`, from boolean to `[-1,1]`, as expected by the hinge loss.
```python
y_tra_hinge = 2.0*y_tra-1.0
y_tst_hinge = 2.0*y_tst-1.0
def hinge_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true) > 0.0
y_pred = tf.squeeze(y_pred) > 0.0
result = tf.cast(y_true == y_pred, tf.float32)
return tf.reduce_mean(result)
model.compile(
loss=tf.keras.losses.Hinge(),
optimizer=tf.keras.optimizers.Adam(),
metrics=[hinge_accuracy])
print(model.summary())
```
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
pqc (PQC) (None, 1) 32
=================================================================
Total params: 32
Trainable params: 32
Non-trainable params: 0
_________________________________________________________________
None
```python
EPOCHS = 1
BATCH_SIZE = 32
NUM_EXAMPLES = len(x_tra_tfcirc)
x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]
y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]
```
```python
qnn_history = model.fit(
x_train_tfcirc_sub, y_train_hinge_sub,
batch_size=32,
epochs=EPOCHS,
verbose=1,
validation_data=(x_test_tfcirc, y_test_hinge))
qnn_results = model.evaluate(x_test_tfcirc, y_test)
```
```python
```
| 1548d041694759a5ebac4399d89b8de7b6beef36 | 68,791 | ipynb | Jupyter Notebook | q_mnist.ipynb | HuangChiEn/tfq_tutorial_refactor | d64e1c62e8766fe28495763e161440b5219d4770 | [
"Apache-2.0"
] | null | null | null | q_mnist.ipynb | HuangChiEn/tfq_tutorial_refactor | d64e1c62e8766fe28495763e161440b5219d4770 | [
"Apache-2.0"
] | null | null | null | q_mnist.ipynb | HuangChiEn/tfq_tutorial_refactor | d64e1c62e8766fe28495763e161440b5219d4770 | [
"Apache-2.0"
] | null | null | null | 159.97907 | 23,610 | 0.616185 | true | 2,279 | Qwen/Qwen-72B | 1. YES
2. YES | 0.709019 | 0.743168 | 0.52692 | __label__eng_Latn | 0.648185 | 0.062542 |
# Numerical Methods in Scientific Computing
# Assignment 4
# Q1.
To compute $\int_0^1e^{x^2}dx$ using Trapezoidal rule and modified Trapezoidal rule.
- Trapezoidal Rule is given by,
\begin{equation}
\int_{x_0}^{x_N}f(x)dx = \frac{h}{2}\sum_{i=0}^{N-1} [f(x_i)+f(x_{i+1})] + O(h^2)
\end{equation}
- Trapezoidal Rule with end corrections using first derivative is given by,
\begin{equation}
\int_{x_0}^{x_N}f(x)dx = \frac{h}{2}\sum_{i=0}^{N-1} [f(x_i)+f(x_{i+1})] - \frac{h^2}{2}[f^{\prime}(x_N)-f^{\prime}(x_N)] + O(h^4)
\end{equation}
- Trapezoidal Rule with end corrections using third derivative is given by,
To introduce third derivatives into the end corrections, say
\begin{equation}
f^{\prime\prime}(y_{i+1}) = a_{-1}f^{\prime}(x_{i}) + a_1f^{\prime}(x_{i+1}) + b_{-1}f^{\prime\prime\prime}(x_{i}) + b_{1}f^{\prime\prime\prime}(x_{i+1})
\end{equation}
By taylor series expansion we have,
\begin{equation}
f^{\prime}(x_{i}) = f^{\prime}(y_{i+1}) - \frac{h}{2}f^{\prime\prime}(y_{i+1}) + \frac{(\frac{h}{2})^2}{2!}f^{\prime\prime\prime}(y_{i+1}) - \frac{(\frac{h}{2})^3}{3!}f^{\prime\prime\prime\prime}(y_{i+1})+\frac{(\frac{h}{2})^4}{4!}f^{\prime\prime\prime\prime\prime}(y_{i+1})-\frac{(\frac{h}{2})^5}{5!}f^{\prime\prime\prime\prime\prime\prime}(y_{i+1}) + O(h^6)
\end{equation}
\begin{equation}
f^{\prime}(x_{i+1}) = f^{\prime}(y_{i+1}) + \frac{h}{2}f^{\prime\prime}(y_{i+1}) + \frac{(\frac{h}{2})^2}{2!}f^{\prime\prime\prime}(y_{i+1}) + \frac{(\frac{h}{2})^3}{3!}f^{\prime\prime\prime\prime}(y_{i+1})+\frac{(\frac{h}{2})^4}{4!}f^{\prime\prime\prime\prime\prime}(y_{i+1})+\frac{(\frac{h}{2})^5}{5!}f^{\prime\prime\prime\prime\prime\prime}(y_{i+1}) + O(h^6)
\end{equation}
\begin{equation}
f^{\prime\prime\prime}(x_{i}) = f^{\prime\prime\prime}(y_{i+1}) - \frac{h}{2}f^{\prime\prime\prime\prime}(y_{i+1}) + \frac{(\frac{h}{2})^2}{2!}f^{\prime\prime\prime\prime\prime}(y_{i+1}) - \frac{(\frac{h}{2})^3}{3!}f^{\prime\prime\prime\prime\prime\prime}(y_{i+1}) + O(h^4)
\end{equation}
\begin{equation}
f^{\prime\prime\prime}(x_{i+1}) = f^{\prime\prime\prime}(y_{i+1}) + \frac{h}{2}f^{\prime\prime\prime\prime}(y_{i+1}) + \frac{(\frac{h}{2})^2}{2!}f^{\prime\prime\prime\prime\prime}(y_{i+1}) + \frac{(\frac{h}{2})^3}{3!}f^{\prime\prime\prime\prime\prime\prime}(y_{i+1})+ O(h^4)
\end{equation}
Substituting Taylor series expansions and solving for the coefficients, we have,
\begin{equation}
a_{1}=-a_{-1}=\frac{1}{h} \quad b_{1}=-b_{-1}=-\frac{h}{24}
\end{equation}
The trailing terms amount to order of $h^4$ and hence the finite difference equation is given by,
\begin{equation}
\Rightarrow f^{\prime\prime}(y_{i+1}) = \frac{f^{\prime}(x_{i+1}) - f^{\prime}(x_{i})}{h} - \frac{h(f^{\prime\prime\prime}(x_{i+1}) - f^{\prime\prime\prime}(x_{i}))}{24} + O(h^4)
\end{equation}
And by central difference,
\begin{equation}
f^{\prime\prime\prime\prime}(y_{i+1}) = \frac{f^{\prime\prime\prime}(x_{i+1}) - f^{\prime\prime\prime}(x_{i})}{h} + O(h^2)
\end{equation}
We know,
\begin{equation}
I_{i+1} = I_{i+1}^{trap} - \frac{h^3}{12}f^{\prime\prime}(y_{i+1}) - \frac{h^5}{480}f^{\prime\prime\prime\prime}(y_{i+1}) + O(h^7)
\end{equation}
Substituting the relevant terms and summing over all i we get,
\begin{equation}
I = I^{trap} - \frac{h^3}{12}(\frac{f^{\prime}(x_{N}) - f^{\prime}(x_{0})}{h} - \frac{h(f^{\prime\prime\prime}(x_{N}) - f^{\prime\prime\prime}(x_{0}))}{24}) - \frac{h^5}{480}(\frac{f^{\prime\prime\prime}(x_{N}) - f^{\prime\prime\prime}(x_{0})}{h}) + O(h^6)
\end{equation}
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy
```
```python
def func(N):
h = 1/N
X = [h*i for i in range(N+1)]
F = np.exp(np.power(X,2))
return X, F
def trap_rule(N):
h = 1/N
X, F = func(N)
I_trap = (h/2)*sum([F[i]+F[i+1] for i in range(0,N)])
return I_trap
def mod_trap_rule_first_der(N):
h = 1/N
X, F = func(N)
F_prime = [0, 0]
F_prime[0] = np.exp(np.power(X[0],2))*2*X[0]
F_prime[1] = np.exp(np.power(X[N],2))*2*X[N]
I_mod_trap1 = (h/2)*sum([F[i]+F[i+1] for i in range(0,N)])-(h**2/12)*(F_prime[1]-F_prime[0])
return I_mod_trap1
def mod_trap_rule_third_der(N):
h = 1/N
X, F = func(N)
F_1prime = [0, 0]
F_1prime[0] = np.exp(np.power(X[0],2))*2*X[0]
F_1prime[1] = np.exp(np.power(X[N],2))*2*X[N]
F_3prime = [0, 0]
F_3prime[0] = np.exp(np.power(X[0],2))*2*(4*np.power(X[0],3)+6*X[0])
F_3prime[1] = np.exp(np.power(X[N],2))*2*(4*np.power(X[N],3)+6*X[N])
I_mod_trap3 = (h/2)*sum([F[i]+F[i+1] for i in range(0,N)]) - (h**2/12)*(F_1prime[1]-F_1prime[0]) + (h**4/(12*24))*(F_3prime[1]-F_3prime[0]) - (h**4/480)*(F_3prime[1]-F_3prime[0])
return I_mod_trap3
```
```python
I_exact = 1.4626517459071816
N_list = [2, 5, 10, 20, 50, 100, 200, 500, 1000]
I_trap = []
I_mod_trap1 = []
I_mod_trap3 = []
for i,N in enumerate(N_list):
I_trap.append(trap_rule(N))
I_mod_trap1.append(mod_trap_rule_first_der(N))
I_mod_trap3.append(mod_trap_rule_third_der(N))
```
```python
# Plot the results to compare between Numerical and Exact solutions to the ODE for different values of n
fig = plt.figure(figsize=(15,7))
fig.suptitle("Plot of absolute Errors for the Three methods", fontsize=16)
I_numerical = {'Trapezoidal Rule':I_trap,
'Trapezoidal rule with end corrections using first derivative':I_mod_trap1,
'Trapezoidal rule with end corrections using third derivative':I_mod_trap3}
for i, method in enumerate(I_numerical):
plt.subplot(1, 3, i+1)
plt.loglog(N_list, np.abs(np.subtract(I_numerical[method],I_exact)),
marker='o',color='r', label="abs error", linestyle='dashed')
plt.grid()
plt.legend()
plt.xlabel('N')
plt.ylabel('Absolute error')
plt.title(method if len(method)<35 else method[:37]+'\n'+method[37:])
# Plot the results to compare between Numerical and Exact solutions to the ODE for different values of n
fig = plt.figure(figsize=(15,7))
fig.suptitle("[Common scale for axes] Plot of absolute Errors for the Three methods", fontsize=16)
I_numerical = {'Trapezoidal Rule':I_trap,
'Trapezoidal rule with end corrections using first derivative':I_mod_trap1,
'Trapezoidal rule with end corrections using third derivative':I_mod_trap3}
for i, method in enumerate(I_numerical):
plt.subplot(1, 3, i+1)
plt.loglog(N_list, np.abs(np.subtract(I_numerical[method],I_exact)),
marker='o',color='r', label="abs error", linestyle='dashed')
plt.grid()
plt.legend()
plt.xlabel('N')
plt.ylabel('Absolute error')
plt.title(method if len(method)<35 else method[:37]+'\n'+method[37:])
plt.xlim(10**0, 10**3+250)
plt.ylim(10**-17, 10**0)
```
- Trapezoidal rule - Slope = 4/2 = 2 $\Rightarrow Error is O(1/h^2)$
- Trapezoidal rule with end correction using first derivative- Slope = 8/2 = 4 $\Rightarrow Error is O(1/h^4)$
- Trapezoidal rule with end correction using third derivative- Slope = 12/2 = 6 $\Rightarrow Error is O(1/h^6)$
# Q2.
To obtain $log(n!) = log(C(\frac{n}{e})^n\sqrt{n})+O(1/n)$ using Euler-Macluarin, where C is some constant.
The Euler-Maclaurin Formula is given by,
\begin{equation}
\sum_{n=a}^{b} f(n) = \int_{a}^{b}f(x)dx + [\frac{f(b)+f(a)}{2}] + \sum_{k=1}^{p} \frac{b_{2k}}{(2k)!} [f^{(2k-1)}(b) - f^{(2k-1)}(a)] - \int_{a}^{b} \frac{B_{2p}(\{t\})}{(2p)!}f^{2p}(t)dt
\end{equation}
\begin{equation}
log(N!) = \sum_{n=1}^{N} log(n) \Rightarrow f(x) = log(x)
\end{equation}
\begin{equation}
\sum_{n=1}^{N} log(n) = \int_{1}^{N}log(x)dx + [\frac{log(N)+log(1)}{2}] + \sum_{k=1}^{p} \frac{b_{2k}}{(2k)!} (-1)^{2k-2}(2k-2)!(\frac{1}{N^{2k-1}} - 1) - \int_{1}^{N} \frac{B_{2p}(\{t\})(-1)}{(2p)!t^2}dt
\end{equation}
\begin{equation}
\sum_{n=1}^{N} log(n) = (Nlog(N)-N+1) + \frac{log(N)}{2} + \sum_{k=1}^{p} \frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(\frac{1}{N^{2k-1}} - 1) + (\int_{1}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt - \int_{N}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt)
\end{equation}
\begin{equation}
\lim_{n \to \infty}( \sum_{n=1}^{N} log(n) - (Nlog(N)-N+1) - \frac{log(N)}{2} )= \lim_{n \to \infty}(\sum_{k=1}^{p} \frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(\frac{1}{N^{2k-1}} - 1)) + \lim_{n \to \infty}((\int_{1}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt - \int_{N}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt))
\end{equation}
\begin{equation}
\lim_{n \to \infty}( \sum_{n=1}^{N} log(n) - (Nlog(N)-N+1) - \frac{log(N)}{2} )= (\sum_{k=1}^{p} \frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(-1) + \int_{1}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt) - \lim_{n \to \infty}(\int_{N}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt))
\end{equation}
Taking the following expression as some constant,
\begin{equation}
(\sum_{k=1}^{p} \frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(-1) + \int_{1}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt) = log(C)-1
\end{equation}
While a bound to the following expression is to be found,
\begin{equation}
(\int_{N}^{\infty} \frac{B_{2p}(\{t\})}{(2p)!t^2}dt))
\end{equation}
Taking p = 1,
\begin{equation}
B_{2}(\{t\}) = \{t^2\} - \{t\} + \frac{1}{6} \Rightarrow |B_{2}(\{t\})| \lt 3
\end{equation}
So,
\begin{equation}
|\int_{N}^{\infty} \frac{B_{2}(\{t\})}{(2)!t^2}dt)| \leq \int_{N}^{\infty} \frac{|B_{2}(\{t\})|}{(2)!t^2}dt) \leq \frac{3}{2N}
\end{equation}
which is O(1/N).
\begin{equation}
\Rightarrow \sum_{n=1}^{N} log(n) = (Nlog(N)-N+1) + \frac{log(N)}{2} + log(C) - 1 + O(1/N) = log((\frac{N}{e})^N) + log(\sqrt{N}) + log(C) + O(1/N)
\end{equation}
\begin{equation}
\Rightarrow \sum_{n=1}^{N} log(n) = log(C(\frac{N}{e})^N\sqrt{N}) + O(1/N)
\end{equation}
# Q3.
- To evaluate
\begin{equation}
I_k = \int_{0}^{\pi/2} sin^k(x)dx
\end{equation}
Let $u = sin^{k-1}(x) \Rightarrow du = (k-1)sin^{k-2}(x)cos(x)dx$ and $dv = sin(x)dx \Rightarrow v = -cos(x)$.
\begin{equation}
I_k = [-sin^{k-1}(x)cos(x)]_0^{\pi/2} + \int_{0}^{\pi/2} (k-1)sin^{k-2}(x)cos^2(x)dx
\end{equation}
With $[-sin^{k-1}(x)cos(x)]_0^{\pi/2} = 0$,
\begin{equation}
I_k = \int_{0}^{\pi/2} (k-1)sin^{k-2}(x)(1-sin^2(x))dx \Rightarrow I_k = \int_{0}^{\pi/2} (k-1)sin^{k-2}(x)dx + (k-1)I_k
\end{equation}
\begin{equation}
I_k = \frac{k-1}{k}\int_{0}^{\pi/2} sin^{k-2}(x)dx = \frac{k-1}{k}I_{k-2}
\end{equation}
We can substitute for $I_k$ recursively to find that for when k is even,
\begin{equation}
I_k = \frac{(n-1)(n-3)...1}{n(n-2)...2}\int_{0}^{\pi/2} sin^{0}(x)dx
\end{equation}
\begin{equation}
\Rightarrow I_k = \frac{(n-1)(n-3)...1}{n(n-2)...2}\frac{\pi}{2}
\end{equation}
And, when k is odd
\begin{equation}
I_k = \frac{(n-1)(n-3)...2}{n(n-2)...3}\int_{0}^{\pi/2} sin^{1}(x)dx
\end{equation}
\begin{equation}
\Rightarrow I_k = \frac{(n-1)(n-3)...2}{n(n-2)...3}
\end{equation}
- From the recursive relation $I_k = \frac{k-1}{k}I_{k-2}$ as $\frac{k-1}{k} \lt 1 \quad \forall k \gt 0$ we have $I_{k} \lt I_{k-2}$. Hence $I_k$ is monotone decreasing sequence.
- $\lim_{m \to \infty} \frac{I_{2m-1}}{I_{2m+1}}$
\begin{equation}
\lim_{m \to \infty} \frac{I_{2m-1}}{I_{2m+1}} = \lim_{m \to \infty} \frac{I_{2m-1}}{\frac{2m}{2m+1}I_{2m-1}} = \lim_{m \to \infty} \frac{2m+1}{2m} = 1
\end{equation}
- $\lim_{m \to \infty} \frac{I_{2m}}{I_{2m+1}}$
We know that since $I_k$ is monotone decreasing sequence $I_{2m-1} \geq I_{2m} \geq I_{2m+1}$. Dividing throughout by $I_{2m+1}$ we have,
\begin{equation}
\frac{I_{2m-1}}{I_{2m+1}} \geq \frac{I_{2m}}{I_{2m+1}} \geq \frac{I_{2m+1}}{I_{2m+1}} = 1
\end{equation}
And as $\lim_{m \to \infty} \frac{I_{2m-1}}{I_{2m+1}} = \lim_{m \to \infty} \frac{2m+1}{2m} = 1$, by sandwich theorem,
\begin{equation}
\lim_{m \to \infty} \frac{I_{2m}}{I_{2m+1}} = 1
\end{equation}
- Central Binomial Coefficient
We know that $\lim_{m \to \infty} \frac{I_{2m}}{I_{2m+1}} = 1$.
\begin{equation}
\lim_{m \to \infty} \frac{I_{2m}}{I_{2m+1}} = \lim_{m \to \infty} \frac{\frac{(2m-1)(2m-3)...1.\pi}{(2m)(2m-2)...2.2}}{\frac{(2m)(2m-2)...2}{(2m+1)(2m-1)...3}} = \lim_{m \to \infty} (2m+1)(\frac{(2m-1)(2m-3)...3.1}{(2m)(2m-2)...4.2})^2\frac{\pi}{2} = 1
\end{equation}
\begin{equation}
\Rightarrow \lim_{m \to \infty} \frac{((2m)(2m-2)...4.2)^2}{(2m+1)((2m-1)(2m-3)...3.1)^2} = \frac{\pi}{2}
\end{equation}
Simplifying the expression,
\begin{equation}
\frac{(m.(m-1)...2.1.2^m)^2}{(2m+1)((2m-1)(2m-3)...3.1)^2} = \frac{(m!)^2.2^{2m}}{(2m+1)((2m-1)(2m-3)...3.1)^2}
\end{equation}
Multiplying and dividing by $((2m)(2m-2)...4.2)^2$
\begin{equation}
\frac{(m!)^2.2^{2m}.((2m)(2m-2)...4.2)^2}{(2m+1)((2m)(2m-1)(2m-2)(2m-3)...3.2.1)^2} = \frac{(m!)^4.2^{4m}}{(2m+1)(2m!)^2} = \frac{2^{4m}}{(2m+1){2m \choose m}^2}
\end{equation}
\begin{equation}
\lim_{m \to \infty} \frac{2^{4m}}{(2m+1){2m \choose m}^2} = \frac{\pi}{2} \Rightarrow \lim_{m \to \infty} {2m \choose m} = \lim_{m \to \infty} 2^{2m}\sqrt{\frac{2}{(2m+1)\pi}}
\end{equation}
\begin{equation}
\Rightarrow {2m \choose m} \sim \frac{4^{m}}{\sqrt{m\pi}}
\end{equation}
- Evaluating C
We know,
\begin{equation}
log(2m!) = log(C(\frac{2m}{e})^{2m}\sqrt{2m}) + O(1/2m) \quad;\quad 2.log(m!) = 2log(C(\frac{m}{e})^m\sqrt{m}) + O(1/m)
\end{equation}
\begin{equation}
log(2m!)-2.log(m!) = log(\frac{C(\frac{2m}{e})^{2m}\sqrt{2m})}{(C(\frac{m}{e})^m\sqrt{m})^2}
\end{equation}
\begin{equation}
log(\frac{2m!}{m!}) = log(\frac{2^{2m}\sqrt{2}}{C\sqrt{m}})
\end{equation}
\begin{equation}
\Rightarrow log(\frac{2^{2m}\sqrt{2}}{C\sqrt{m}}) = log(\frac{4^{m}}{\sqrt{m\pi}}) \Rightarrow C = \sqrt{2\pi}
\end{equation}
- Substituting this back into the equation $log(N!) = log(C(\frac{N}{e})^N\sqrt{N}) + O(1/N)$ ,
\begin{equation}
log(N!) = log(\sqrt{2\pi}(\frac{N}{e})^N\sqrt{N}) + O(1/N)
\end{equation}
\begin{equation}
\Rightarrow N! \sim (\frac{N}{e})^N\sqrt{2\pi N} \quad \text{(Stirling Formula)}
\end{equation}
- $O(1/n^3)$
Including $\frac{b_2.f^{\prime}(x)|_N}{2!} = \frac{1}{12N}$
\begin{equation}
\Rightarrow \sum_{n=1}^{N} log(n) = log((\frac{N}{e})^N) + log(\sqrt{N}) + log(\sqrt{2\pi}) + O(1/N) = log((\frac{N}{e})^N) + log(\sqrt{N}) + log(\sqrt{2\pi}) + \frac{1}{12N} + O(1/N^3)
\end{equation}
\begin{equation}
\Rightarrow N! \sim (\frac{N}{e})^N\sqrt{2\pi N}.e^{\frac{1}{12N}}
\end{equation}
```python
# Relative Error for {20, 50}
N = [20, 50]
n = N[0]
factorial_n = scipy.math.factorial(n)
stirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)
print('The factorial for n = 20 using: \nStirling formula \t=',stirling_n, '\nExact value \t\t=', factorial_n)
print('Relative Error (%)\t=', 100*(stirling_n-factorial_n)/factorial_n)
n = N[1]
factorial_n = scipy.math.factorial(n)
stirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)
print('The factorial for n = 50 using: \nStirling formula \t=',stirling_n, '\nExact value \t\t=', factorial_n)
print('Relative Error (%)\t=', 100*(stirling_n-factorial_n)/factorial_n)
```
The factorial for n = 20 using:
Stirling formula = 2.422786846761135e+18
Exact value = 2432902008176640000
Relative Error (%) = -0.41576526228796995
The factorial for n = 50 using:
Stirling formula = 3.036344593938168e+64
Exact value = 30414093201713378043612608166064768844377641568960512000000000000
Relative Error (%) = -0.16652563663756476
```python
# Factorial with O(1/n^3)
N = [20, 50]
n = N[0]
factorial_n = scipy.math.factorial(n)
stirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)*np.exp(1/(12*n))
print('The factorial for n = 20 using: \nStirling formula \t=',stirling_n, '\nExact value \t\t=', factorial_n)
print('Relative Error (%)\t=', 100*(stirling_n-factorial_n)/factorial_n)
n = N[1]
factorial_n = scipy.math.factorial(n)
stirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)*np.exp(1/(12*n))
print('The factorial for n = 50 using: \nStirling formula \t=',stirling_n, '\nExact value \t\t=', factorial_n)
print('Relative Error (%)\t=', 100*(stirling_n-factorial_n)/factorial_n)
```
The factorial for n = 20 using:
Stirling formula = 2.432902852332159e+18
Exact value = 2432902008176640000
Relative Error (%) = 3.469747306463279e-05
The factorial for n = 50 using:
Stirling formula = 3.041409387750502e+64
Exact value = 30414093201713378043612608166064768844377641568960512000000000000
Relative Error (%) = 2.221968747857392e-06
| 290c012fc711f14eeb14283c14cbbb4583766300 | 113,333 | ipynb | Jupyter Notebook | me16b077_4.ipynb | ENaveen98/Numerical-methods-and-Scientific-computing | 5b931621e307386c8c20430db9cb8dae243d38ba | [
"MIT"
] | 1 | 2021-01-05T12:31:51.000Z | 2021-01-05T12:31:51.000Z | me16b077_4.ipynb | ENaveen98/Numerical-methods-and-Scientific-computing | 5b931621e307386c8c20430db9cb8dae243d38ba | [
"MIT"
] | null | null | null | me16b077_4.ipynb | ENaveen98/Numerical-methods-and-Scientific-computing | 5b931621e307386c8c20430db9cb8dae243d38ba | [
"MIT"
] | null | null | null | 211.048417 | 48,812 | 0.873064 | true | 7,115 | Qwen/Qwen-72B | 1. YES
2. YES | 0.951142 | 0.882428 | 0.839314 | __label__eng_Latn | 0.158181 | 0.788342 |
```python
from sympy import *
from sympy.abc import *
import numpy as np
import matplotlib.pyplot as plt
init_printing()
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
```
```python
x = Function('x')
dxdt = Derivative(x(t), t)
dxxdtt = Derivative(x(t), t, t)
edo = Eq(dxxdtt - (k/w*dxdt) + g, 0)
edo
```
```python
classify_ode(edo)
```
('nth_linear_constant_coeff_undetermined_coefficients',
'nth_linear_constant_coeff_variation_of_parameters',
'nth_order_reducible',
'nth_linear_constant_coeff_variation_of_parameters_Integral')
```python
#solução da edo
sol = dsolve(edo)
sol
```
```python
sol.subs({t:3*w/(g*k)})
```
```python
0.5/3
```
```python
1/6
```
```python
```
| 1595f6f5a4e55a6bd5686b4716b3973c8d148032 | 19,075 | ipynb | Jupyter Notebook | Mec_Flu_I/Exercicios_recomendados_FoxMcDonalds.ipynb | Chabole/7-semestre-EngMec | 520e6ca0394d554e8de102e1b509ccbd0f0e1cbb | [
"MIT"
] | 1 | 2022-01-05T14:17:04.000Z | 2022-01-05T14:17:04.000Z | Mec_Flu_I/Exercicios_recomendados_FoxMcDonalds.ipynb | Chabole/7-semestre-EngMec | 520e6ca0394d554e8de102e1b509ccbd0f0e1cbb | [
"MIT"
] | null | null | null | Mec_Flu_I/Exercicios_recomendados_FoxMcDonalds.ipynb | Chabole/7-semestre-EngMec | 520e6ca0394d554e8de102e1b509ccbd0f0e1cbb | [
"MIT"
] | null | null | null | 87.100457 | 4,422 | 0.836697 | true | 221 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.774583 | 0.69534 | __label__eng_Latn | 0.229842 | 0.453839 |
# 低次元化
```python
import sympy as sy
from sympy.printing.numpy import NumPyPrinter
from sympy import julia_code
from sympy.utilities.codegen import codegen
import tqdm
import os
from pathlib import Path
from kinematics import Local
```
```python
# アクチュエータベクトル
l1, l2, l3 = sy.symbols("l1, l2, l3")
q = sy.Matrix([[l1, l2, l3]]).T
# ディスク位置を表すスカラ変数
xi = sy.Symbol("xi")
```
```python
kinema = Local()
P = kinema.P(q, xi) # 位置ベクトル
R = kinema.R(q, xi) # 回転行列
```
# 2次まで減らす
```python
P[2,0]
```
$\displaystyle 60131984.3684659 \xi^{9} \left(l_{1} + l_{2} + l_{3} + 0.45\right) \left(l_{1}^{2} - l_{1} l_{2} - l_{1} l_{3} + l_{2}^{2} - l_{2} l_{3} + l_{3}^{2}\right)^{4} - 1522090.85432679 \xi^{7} \left(l_{1} + l_{2} + l_{3} + 0.45\right) \left(l_{1}^{2} - l_{1} l_{2} - l_{1} l_{3} + l_{2}^{2} - l_{2} l_{3} + l_{3}^{2}\right)^{3} + 22474.6227709191 \xi^{5} \left(l_{1} + l_{2} + l_{3} + 0.45\right) \left(l_{1}^{2} - l_{1} l_{2} - l_{1} l_{3} + l_{2}^{2} - l_{2} l_{3} + l_{3}^{2}\right)^{2} - 79.0123456790123 \xi^{3} \left(l_{1} + l_{2} + l_{3} + 0.45\right) \left(2 l_{1}^{2} - 2 l_{1} l_{2} - 2 l_{1} l_{3} + 2 l_{2}^{2} - 2 l_{2} l_{3} + 2 l_{3}^{2}\right) + \frac{\xi \left(l_{1} + l_{2} + l_{3} + 0.45\right)}{3}$
```python
f = P[2,0].expand()
f
```
$\displaystyle 60131984.3684659 l_{1}^{9} \xi^{9} - 180395953.105398 l_{1}^{8} l_{2} \xi^{9} - 180395953.105398 l_{1}^{8} l_{3} \xi^{9} + 27059392.9658097 l_{1}^{8} \xi^{9} + 360791906.210795 l_{1}^{7} l_{2}^{2} \xi^{9} - 108237571.863239 l_{1}^{7} l_{2} \xi^{9} + 360791906.210795 l_{1}^{7} l_{3}^{2} \xi^{9} - 108237571.863239 l_{1}^{7} l_{3} \xi^{9} - 1522090.85432679 l_{1}^{7} \xi^{7} - 360791906.210795 l_{1}^{6} l_{2}^{3} \xi^{9} + 360791906.210795 l_{1}^{6} l_{2}^{2} l_{3} \xi^{9} + 270593929.658097 l_{1}^{6} l_{2}^{2} \xi^{9} + 360791906.210795 l_{1}^{6} l_{2} l_{3}^{2} \xi^{9} + 216475143.726477 l_{1}^{6} l_{2} l_{3} \xi^{9} + 3044181.70865359 l_{1}^{6} l_{2} \xi^{7} - 360791906.210795 l_{1}^{6} l_{3}^{3} \xi^{9} + 270593929.658097 l_{1}^{6} l_{3}^{2} \xi^{9} + 3044181.70865359 l_{1}^{6} l_{3} \xi^{7} - 684940.884447057 l_{1}^{6} \xi^{7} + 180395953.105398 l_{1}^{5} l_{2}^{4} \xi^{9} - 1443167624.84318 l_{1}^{5} l_{2}^{3} l_{3} \xi^{9} - 432950287.452954 l_{1}^{5} l_{2}^{3} \xi^{9} - 324712715.589716 l_{1}^{5} l_{2}^{2} l_{3} \xi^{9} - 4566272.56298038 l_{1}^{5} l_{2}^{2} \xi^{7} - 1443167624.84318 l_{1}^{5} l_{2} l_{3}^{3} \xi^{9} - 324712715.589716 l_{1}^{5} l_{2} l_{3}^{2} \xi^{9} + 4566272.56298038 l_{1}^{5} l_{2} l_{3} \xi^{7} + 2054822.65334117 l_{1}^{5} l_{2} \xi^{7} + 180395953.105398 l_{1}^{5} l_{3}^{4} \xi^{9} - 432950287.452954 l_{1}^{5} l_{3}^{3} \xi^{9} - 4566272.56298038 l_{1}^{5} l_{3}^{2} \xi^{7} + 2054822.65334117 l_{1}^{5} l_{3} \xi^{7} + 22474.6227709191 l_{1}^{5} \xi^{5} + 180395953.105398 l_{1}^{4} l_{2}^{5} \xi^{9} + 1623563577.94858 l_{1}^{4} l_{2}^{4} l_{3} \xi^{9} + 514128466.350383 l_{1}^{4} l_{2}^{4} \xi^{9} + 721583812.421591 l_{1}^{4} l_{2}^{3} l_{3}^{2} \xi^{9} + 108237571.863239 l_{1}^{4} l_{2}^{3} l_{3} \xi^{9} + 1522090.85432679 l_{1}^{4} l_{2}^{3} \xi^{7} + 721583812.421591 l_{1}^{4} l_{2}^{2} l_{3}^{3} \xi^{9} + 649425431.179432 l_{1}^{4} l_{2}^{2} l_{3}^{2} \xi^{9} - 9132545.12596076 l_{1}^{4} l_{2}^{2} l_{3} \xi^{7} - 4109645.30668234 l_{1}^{4} l_{2}^{2} \xi^{7} + 1623563577.94858 l_{1}^{4} l_{2} l_{3}^{4} \xi^{9} + 108237571.863239 l_{1}^{4} l_{2} l_{3}^{3} \xi^{9} - 9132545.12596076 l_{1}^{4} l_{2} l_{3}^{2} \xi^{7} - 2054822.65334117 l_{1}^{4} l_{2} l_{3} \xi^{7} - 22474.6227709191 l_{1}^{4} l_{2} \xi^{5} + 180395953.105398 l_{1}^{4} l_{3}^{5} \xi^{9} + 514128466.350383 l_{1}^{4} l_{3}^{4} \xi^{9} + 1522090.85432679 l_{1}^{4} l_{3}^{3} \xi^{7} - 4109645.30668234 l_{1}^{4} l_{3}^{2} \xi^{7} - 22474.6227709191 l_{1}^{4} l_{3} \xi^{5} + 10113.5802469136 l_{1}^{4} \xi^{5} - 360791906.210795 l_{1}^{3} l_{2}^{6} \xi^{9} - 1443167624.84318 l_{1}^{3} l_{2}^{5} l_{3} \xi^{9} - 432950287.452954 l_{1}^{3} l_{2}^{5} \xi^{9} + 721583812.421591 l_{1}^{3} l_{2}^{4} l_{3}^{2} \xi^{9} + 108237571.863239 l_{1}^{3} l_{2}^{4} l_{3} \xi^{9} + 1522090.85432679 l_{1}^{3} l_{2}^{4} \xi^{7} - 2886335249.68636 l_{1}^{3} l_{2}^{3} l_{3}^{3} \xi^{9} - 432950287.452954 l_{1}^{3} l_{2}^{3} l_{3}^{2} \xi^{9} + 19787181.1062483 l_{1}^{3} l_{2}^{3} l_{3} \xi^{7} + 4794586.1911294 l_{1}^{3} l_{2}^{3} \xi^{7} + 721583812.421591 l_{1}^{3} l_{2}^{2} l_{3}^{4} \xi^{9} - 432950287.452954 l_{1}^{3} l_{2}^{2} l_{3}^{3} \xi^{9} - 4566272.56298038 l_{1}^{3} l_{2}^{2} l_{3}^{2} \xi^{7} + 2054822.65334117 l_{1}^{3} l_{2}^{2} l_{3} \xi^{7} + 22474.6227709191 l_{1}^{3} l_{2}^{2} \xi^{5} - 1443167624.84318 l_{1}^{3} l_{2} l_{3}^{5} \xi^{9} + 108237571.863239 l_{1}^{3} l_{2} l_{3}^{4} \xi^{9} + 19787181.1062483 l_{1}^{3} l_{2} l_{3}^{3} \xi^{7} + 2054822.65334117 l_{1}^{3} l_{2} l_{3}^{2} \xi^{7} - 89898.4910836763 l_{1}^{3} l_{2} l_{3} \xi^{5} - 20227.1604938272 l_{1}^{3} l_{2} \xi^{5} - 360791906.210795 l_{1}^{3} l_{3}^{6} \xi^{9} - 432950287.452954 l_{1}^{3} l_{3}^{5} \xi^{9} + 1522090.85432679 l_{1}^{3} l_{3}^{4} \xi^{7} + 4794586.1911294 l_{1}^{3} l_{3}^{3} \xi^{7} + 22474.6227709191 l_{1}^{3} l_{3}^{2} \xi^{5} - 20227.1604938272 l_{1}^{3} l_{3} \xi^{5} - 158.024691358025 l_{1}^{3} \xi^{3} + 360791906.210795 l_{1}^{2} l_{2}^{7} \xi^{9} + 360791906.210795 l_{1}^{2} l_{2}^{6} l_{3} \xi^{9} + 270593929.658097 l_{1}^{2} l_{2}^{6} \xi^{9} - 324712715.589716 l_{1}^{2} l_{2}^{5} l_{3} \xi^{9} - 4566272.56298038 l_{1}^{2} l_{2}^{5} \xi^{7} + 721583812.421591 l_{1}^{2} l_{2}^{4} l_{3}^{3} \xi^{9} + 649425431.179432 l_{1}^{2} l_{2}^{4} l_{3}^{2} \xi^{9} - 9132545.12596076 l_{1}^{2} l_{2}^{4} l_{3} \xi^{7} - 4109645.30668234 l_{1}^{2} l_{2}^{4} \xi^{7} + 721583812.421591 l_{1}^{2} l_{2}^{3} l_{3}^{4} \xi^{9} - 432950287.452954 l_{1}^{2} l_{2}^{3} l_{3}^{3} \xi^{9} - 4566272.56298038 l_{1}^{2} l_{2}^{3} l_{3}^{2} \xi^{7} + 2054822.65334117 l_{1}^{2} l_{2}^{3} l_{3} \xi^{7} + 22474.6227709191 l_{1}^{2} l_{2}^{3} \xi^{5} + 649425431.179432 l_{1}^{2} l_{2}^{2} l_{3}^{4} \xi^{9} - 4566272.56298038 l_{1}^{2} l_{2}^{2} l_{3}^{3} \xi^{7} - 6164467.96002351 l_{1}^{2} l_{2}^{2} l_{3}^{2} \xi^{7} + 67423.8683127572 l_{1}^{2} l_{2}^{2} l_{3} \xi^{5} + 30340.7407407407 l_{1}^{2} l_{2}^{2} \xi^{5} + 360791906.210795 l_{1}^{2} l_{2} l_{3}^{6} \xi^{9} - 324712715.589716 l_{1}^{2} l_{2} l_{3}^{5} \xi^{9} - 9132545.12596076 l_{1}^{2} l_{2} l_{3}^{4} \xi^{7} + 2054822.65334117 l_{1}^{2} l_{2} l_{3}^{3} \xi^{7} + 67423.8683127572 l_{1}^{2} l_{2} l_{3}^{2} \xi^{5} + 360791906.210795 l_{1}^{2} l_{3}^{7} \xi^{9} + 270593929.658097 l_{1}^{2} l_{3}^{6} \xi^{9} - 4566272.56298038 l_{1}^{2} l_{3}^{5} \xi^{7} - 4109645.30668234 l_{1}^{2} l_{3}^{4} \xi^{7} + 22474.6227709191 l_{1}^{2} l_{3}^{3} \xi^{5} + 30340.7407407407 l_{1}^{2} l_{3}^{2} \xi^{5} - 71.1111111111111 l_{1}^{2} \xi^{3} - 180395953.105398 l_{1} l_{2}^{8} \xi^{9} - 108237571.863239 l_{1} l_{2}^{7} \xi^{9} + 360791906.210795 l_{1} l_{2}^{6} l_{3}^{2} \xi^{9} + 216475143.726477 l_{1} l_{2}^{6} l_{3} \xi^{9} + 3044181.70865359 l_{1} l_{2}^{6} \xi^{7} - 1443167624.84318 l_{1} l_{2}^{5} l_{3}^{3} \xi^{9} - 324712715.589716 l_{1} l_{2}^{5} l_{3}^{2} \xi^{9} + 4566272.56298038 l_{1} l_{2}^{5} l_{3} \xi^{7} + 2054822.65334117 l_{1} l_{2}^{5} \xi^{7} + 1623563577.94858 l_{1} l_{2}^{4} l_{3}^{4} \xi^{9} + 108237571.863239 l_{1} l_{2}^{4} l_{3}^{3} \xi^{9} - 9132545.12596076 l_{1} l_{2}^{4} l_{3}^{2} \xi^{7} - 2054822.65334117 l_{1} l_{2}^{4} l_{3} \xi^{7} - 22474.6227709191 l_{1} l_{2}^{4} \xi^{5} - 1443167624.84318 l_{1} l_{2}^{3} l_{3}^{5} \xi^{9} + 108237571.863239 l_{1} l_{2}^{3} l_{3}^{4} \xi^{9} + 19787181.1062483 l_{1} l_{2}^{3} l_{3}^{3} \xi^{7} + 2054822.65334117 l_{1} l_{2}^{3} l_{3}^{2} \xi^{7} - 89898.4910836763 l_{1} l_{2}^{3} l_{3} \xi^{5} - 20227.1604938272 l_{1} l_{2}^{3} \xi^{5} + 360791906.210795 l_{1} l_{2}^{2} l_{3}^{6} \xi^{9} - 324712715.589716 l_{1} l_{2}^{2} l_{3}^{5} \xi^{9} - 9132545.12596076 l_{1} l_{2}^{2} l_{3}^{4} \xi^{7} + 2054822.65334117 l_{1} l_{2}^{2} l_{3}^{3} \xi^{7} + 67423.8683127572 l_{1} l_{2}^{2} l_{3}^{2} \xi^{5} + 216475143.726477 l_{1} l_{2} l_{3}^{6} \xi^{9} + 4566272.56298038 l_{1} l_{2} l_{3}^{5} \xi^{7} - 2054822.65334117 l_{1} l_{2} l_{3}^{4} \xi^{7} - 89898.4910836763 l_{1} l_{2} l_{3}^{3} \xi^{5} + 474.074074074074 l_{1} l_{2} l_{3} \xi^{3} + 71.1111111111111 l_{1} l_{2} \xi^{3} - 180395953.105398 l_{1} l_{3}^{8} \xi^{9} - 108237571.863239 l_{1} l_{3}^{7} \xi^{9} + 3044181.70865359 l_{1} l_{3}^{6} \xi^{7} + 2054822.65334117 l_{1} l_{3}^{5} \xi^{7} - 22474.6227709191 l_{1} l_{3}^{4} \xi^{5} - 20227.1604938272 l_{1} l_{3}^{3} \xi^{5} + 71.1111111111111 l_{1} l_{3} \xi^{3} + \frac{l_{1} \xi}{3} + 60131984.3684659 l_{2}^{9} \xi^{9} - 180395953.105398 l_{2}^{8} l_{3} \xi^{9} + 27059392.9658097 l_{2}^{8} \xi^{9} + 360791906.210795 l_{2}^{7} l_{3}^{2} \xi^{9} - 108237571.863239 l_{2}^{7} l_{3} \xi^{9} - 1522090.85432679 l_{2}^{7} \xi^{7} - 360791906.210795 l_{2}^{6} l_{3}^{3} \xi^{9} + 270593929.658097 l_{2}^{6} l_{3}^{2} \xi^{9} + 3044181.70865359 l_{2}^{6} l_{3} \xi^{7} - 684940.884447057 l_{2}^{6} \xi^{7} + 180395953.105398 l_{2}^{5} l_{3}^{4} \xi^{9} - 432950287.452954 l_{2}^{5} l_{3}^{3} \xi^{9} - 4566272.56298038 l_{2}^{5} l_{3}^{2} \xi^{7} + 2054822.65334117 l_{2}^{5} l_{3} \xi^{7} + 22474.6227709191 l_{2}^{5} \xi^{5} + 180395953.105398 l_{2}^{4} l_{3}^{5} \xi^{9} + 514128466.350383 l_{2}^{4} l_{3}^{4} \xi^{9} + 1522090.85432679 l_{2}^{4} l_{3}^{3} \xi^{7} - 4109645.30668234 l_{2}^{4} l_{3}^{2} \xi^{7} - 22474.6227709191 l_{2}^{4} l_{3} \xi^{5} + 10113.5802469136 l_{2}^{4} \xi^{5} - 360791906.210795 l_{2}^{3} l_{3}^{6} \xi^{9} - 432950287.452954 l_{2}^{3} l_{3}^{5} \xi^{9} + 1522090.85432679 l_{2}^{3} l_{3}^{4} \xi^{7} + 4794586.1911294 l_{2}^{3} l_{3}^{3} \xi^{7} + 22474.6227709191 l_{2}^{3} l_{3}^{2} \xi^{5} - 20227.1604938272 l_{2}^{3} l_{3} \xi^{5} - 158.024691358025 l_{2}^{3} \xi^{3} + 360791906.210795 l_{2}^{2} l_{3}^{7} \xi^{9} + 270593929.658097 l_{2}^{2} l_{3}^{6} \xi^{9} - 4566272.56298038 l_{2}^{2} l_{3}^{5} \xi^{7} - 4109645.30668234 l_{2}^{2} l_{3}^{4} \xi^{7} + 22474.6227709191 l_{2}^{2} l_{3}^{3} \xi^{5} + 30340.7407407407 l_{2}^{2} l_{3}^{2} \xi^{5} - 71.1111111111111 l_{2}^{2} \xi^{3} - 180395953.105398 l_{2} l_{3}^{8} \xi^{9} - 108237571.863239 l_{2} l_{3}^{7} \xi^{9} + 3044181.70865359 l_{2} l_{3}^{6} \xi^{7} + 2054822.65334117 l_{2} l_{3}^{5} \xi^{7} - 22474.6227709191 l_{2} l_{3}^{4} \xi^{5} - 20227.1604938272 l_{2} l_{3}^{3} \xi^{5} + 71.1111111111111 l_{2} l_{3} \xi^{3} + \frac{l_{2} \xi}{3} + 60131984.3684659 l_{3}^{9} \xi^{9} + 27059392.9658097 l_{3}^{8} \xi^{9} - 1522090.85432679 l_{3}^{7} \xi^{7} - 684940.884447057 l_{3}^{6} \xi^{7} + 22474.6227709191 l_{3}^{5} \xi^{5} + 10113.5802469136 l_{3}^{4} \xi^{5} - 158.024691358025 l_{3}^{3} \xi^{3} - 71.1111111111111 l_{3}^{2} \xi^{3} + \frac{l_{3} \xi}{3} + 0.15 \xi$
```python
f = f.coeff(l1, 1)
f = f.coeff(l2, 0)
f = f.coeff(l3, 0)
f
```
$\displaystyle \frac{\xi}{3}$
| f7535eb999cc351164fff4d7bd1590032c0d0271 | 21,273 | ipynb | Jupyter Notebook | o/soft_robot/derivation_of_dynamics/reduction.ipynb | YoshimitsuMatsutaIe/ctrlab2021_soudan | 7841c981e6804cc92d34715a00e7c3efce41d1d0 | [
"MIT"
] | null | null | null | o/soft_robot/derivation_of_dynamics/reduction.ipynb | YoshimitsuMatsutaIe/ctrlab2021_soudan | 7841c981e6804cc92d34715a00e7c3efce41d1d0 | [
"MIT"
] | null | null | null | o/soft_robot/derivation_of_dynamics/reduction.ipynb | YoshimitsuMatsutaIe/ctrlab2021_soudan | 7841c981e6804cc92d34715a00e7c3efce41d1d0 | [
"MIT"
] | null | null | null | 131.314815 | 9,806 | 0.57077 | true | 6,311 | Qwen/Qwen-72B | 1. YES
2. YES | 0.912436 | 0.626124 | 0.571298 | __label__lmo_Latn | 0.031091 | 0.165647 |
Solution to: [Day 3: Drawing Marbles](https://www.hackerrank.com/challenges/s10-mcq-6/problem)
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
- Table of Contents
- Math Solution
- Facts
- Monte Carlo Solution
- Imports
- Constants
- Auxiliary functions
- Main
```javascript
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
```
<IPython.core.display.Javascript object>
This script contains 2 sections:
1. Math solution to the problem
2. Monte Carlo simulation of the problem
# Math Solution
A bag contains 3 red marbles and 4 blue marbles. Then, 2 marbles are drawn from the bag, at random, without replacement.
If the first marble drawn is red, what is the probability that the second marble is blue?
## Facts
- 7 marbles in the bag
- 1st one is always red
- P(B)?
If the first one is always red, we don't calculate the probability of the 1st draw.
Thus, we use the 6 remaining marbles.
\begin{equation}
\large
P(B) = \frac{4}{6} = \frac{2}{3}
\end{equation}
# Monte Carlo Solution
## Imports
```python
from typing import List
import random
```
## Constants
```python
MARBLE_DICT = {
'r' : 3,
'b' : 4
}
FIRST_MARBLE = 'r'
SECOND_MARBLE = 'b'
```
## Auxiliary functions
```python
def create_marble_bag(marbles: dict) -> List[str]:
"""Returns list of marbles to draw from."""
bag = []
for k, v in marbles.items():
m = [k for _ in range(v)]
bag += m
return bag
```
```python
def remove_first_marble(bag: List[str], marble: str) -> List[str]:
"""Returns bag after removing marble."""
bag.remove(marble)
return bag
```
```python
def check_second_marble(bag: List[str], marble: str) -> bool:
"""Returns boolean if sample from bag is the marble."""
return random.choice(bag) == marble
```
```python
def get_ratio(bag: List[str], marble: str, iterations: int) -> float:
"""Returns ratio of times sample from bag is marble."""
was_marble = 0
for _ in range(iterations):
if check_second_marble(bag, marble):
was_marble += 1
return was_marble / iterations
```
## Main
```python
def main():
bag = create_marble_bag(MARBLE_DICT)
bag = remove_first_marble(bag, FIRST_MARBLE)
iterations = 1000000
ratio = get_ratio(bag, SECOND_MARBLE, iterations)
print(ratio)
```
```python
if __name__ == "__main__":
main()
```
0.665724
| c02358eb71fb8b59b600a4b84cc8a278fa0c9c54 | 5,699 | ipynb | Jupyter Notebook | statistics/10_days/11_day3drawingmarbles.ipynb | jaimiles23/hacker_rank | 0580eac82e5d0989afabb5c2e66faf09713f891b | [
"Apache-2.0"
] | null | null | null | statistics/10_days/11_day3drawingmarbles.ipynb | jaimiles23/hacker_rank | 0580eac82e5d0989afabb5c2e66faf09713f891b | [
"Apache-2.0"
] | null | null | null | statistics/10_days/11_day3drawingmarbles.ipynb | jaimiles23/hacker_rank | 0580eac82e5d0989afabb5c2e66faf09713f891b | [
"Apache-2.0"
] | 3 | 2021-09-22T11:06:58.000Z | 2022-01-25T09:29:24.000Z | 21.751908 | 130 | 0.519389 | true | 693 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.880797 | 0.804764 | __label__eng_Latn | 0.82499 | 0.708068 |
# Logistic map explorations (Taylor and beyond)
Adapted by Dick Furnstahl from the Ipython Cookbook by Cyrille Rossant.
Lyapunov plot modified on 29-Jan-2019 after discussion with Michael Heinz.
Here we consider the *logistic map*, which illustrates how chaos can arise from a simple nonlinear equation. The logistic map models the evolution of a population including reproduction and mortality (see Taylor section 12.9).
The defining function for the logistic map is
$ f_r(x) = r x (1-x) \;, $
where $r$ is the control parameter analogous to $\gamma$ for the damped, driven pendulum. To study the pendulum, we looked at $\phi(t)$ at $t_0$, $t_0 + \tau$, $t_0 + 2\tau$, and so on, where $\tau$ is the driving frequency and we found $\phi(t_0 + \tau)$ from $\phi(t_0)$ by solving the differential equation for the pendulum. After transients had died off, we characterized the value of $\gamma$ being considered by the periodicity of the $\phi(t_n)$ values: period one (all the same), period two (alternating), chaotic (no repeats) and so on.
Here instead of a differential equation telling us how to generate a trajectory, we use $f_r(x)$:
$\begin{align}
x_1 = f_r(x_0) \ \rightarrow\ x_2 = f_r(x_1) \ \rightarrow\ x_3 = f_r(x_2) \ \rightarrow\ \ldots
\end{align}$
There will be transients at the beginning, but this sequence of $x_i$ values may reach a point $x = x^*$ such that $f(x^*) = x^*$. We call this a *fixed point*. If instead it ends up bouncing between two values of $x$, it is period two and we call it a two-cycle. We can have a cascade of period doubling, as found for the damped, driven pendulum, leading to chaos, which is characterized by the mapping never repeating. We can make a corresponding bifurcation diagram and identify Lyapunov exponents for each initial condition and $r$.
To adapt this notebook to a different map (such as the sine map), you will need to:
* Change the map function and its derivative.
* Change the range of the $r$ parameter to an appropriate range for the new map.
* Possibly change the initial $x$ value ($x_0$).
* Modify the limits of the plots appropriately.
```python
%matplotlib inline
# standard numpy and matplotlib imports
import numpy as np
import matplotlib.pyplot as plt
```
```python
def logistic(r, x):
"""Logistic map function: f(x) = r x(1-x)
"""
return r * x * (1.-x)
def logistic_deriv(r, x):
"""Logistic map derivative: f'(x) = r(1-2x)
"""
return r * (1. - 2.*x)
```
## Explore the logistic map and its fixed points
Start with a simple plot of the logistic function. **What does changing $r$ do?**
```python
x, step = np.linspace(0,1, num=101, endpoint=True, retstep=True)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
r_param = 2
ax.plot(x, logistic(r_param, x), 'k-')
ax.set_xlabel('x')
ax.set_ylabel('f(x)')
fig.tight_layout()
```
Make plots showing the approach to fixed points.
**Play with the number of steps taken and the values of $r$.**
1. **Increase the number of steps in the plots to more cleanly identify the long-time trend for that value of $r$.**
2. **Try smaller values of $r$. Are there always non-zero fixed points for small $r$?**
```python
def plot_system(r, x0, n, ax=None):
"""Plot the function and the y=x diagonal line."""
t = np.linspace(0,1, num=101)
ax.plot(t, logistic(r,t), 'k', lw=2) # black, linewidth 2
ax.plot([0,1], [0,1], 'k', lw=2) # x is an array of 0 and 1,
# y is the same array, so this plots
# a straight line from (0,0) to (1,1).
# Recursively apply y=f(x) and plot two additional straight lines:
# line from (x, x) to (x, y)
# line from (x, y) to (y, y)
x = x0
for i in range(n): # do n iterations, i = 0, 1, ..., n-1
y = logistic(r, x)
# Plot the two lines
ax.plot([x,x], [x,y], color='blue', lw=1)
ax.plot([x,y], [y,y], color='blue', lw=1)
# Plot the positions with increasing opacity of the circles
ax.plot([x], [y], 'or', ms=10, alpha=(i+1)/n)
x = y # recursive: reset x to y for the next iteration
ax.set_xlim(0,1)
ax.set_ylim(0,1)
ax.set_title(f'$r={r:.1f}$, $x_0={x0:.1f}$')
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(2,2,1)
# start at 0.1, r parameter is 2.5, take n steps
plot_system(r=0.7, x0=0.9, n=100, ax=ax1)
ax2 = fig.add_subplot(2,2,2)
# start at 0.1, r parameter is 3.2, take n steps
plot_system(r=3.2, x0=0.9, n=100, ax=ax2)
ax3 = fig.add_subplot(2,2,3)
# start at 0.1, r parameter is 2.6, take n steps
plot_system(r=3.5, x0=0.1, n=10, ax=ax3)
ax4 = fig.add_subplot(2,2,4)
# start at 0.1, r parameter is 3.7, take n steps
plot_system(r=3.7, x0=0.1, n=10, ax=ax4)
```
**What periods do these exhibit? Are any chaotic?**
## Find the unstable point numerically
To find the value of $r$ at which the second fixed point becomes unstable, we must solve simultaneously the equations (see Taylor 12.9 in the "A Test for Stability" subsection):
$\begin{align}
&f_r(x^*) = x^* \\
&f_r'(x^*) = -1
\end{align}$
where $f'$ means the derivative with respect to $x$, or
$\begin{align}
&f_r(x^*) - x^* = 0 \\
&f_r'(x^*) + 1 = 0
\end{align}$
It is the latter form of the equations that we will pass to the
scipy function `fsolve` to find the values of $r$ and $x*$.
```python
from scipy.optimize import fsolve # Google this to learn more!
def equations(passed_args):
"""Return the two expressions that must equal to zero."""
r, x = passed_args
return (logistic(r, x) - x, logistic_deriv(r, x) + 1)
# Call fsolve with initial values for r and x
r, x = fsolve(equations, (0.5, 0.5))
print_string = f'x* = {x:.10f}, r = {r:.10f}'
print(print_string)
```
**Verify analytically that these are the correct values for the logistic map.** We use this same procedure for other maps where only numerical solutions are known.
## Now make the bifurcation diagram and calculate Lyapunov exponents
To make a bifurcation diagram, we:
* define an array of closely spaced $r$ values;
* for each $r$, iterate the map $x_{i+1} = f_r(x_i)$ a large number of times (e.g., 1000) to ensure convergence;
* plot the last 100 or so iterations as points at each $r$.
To think about how to calculate a Lyapunov exponent, consider two iterations of the map starting from two values of $x$ that are close together. Call these initial values $x_0$ and $x_0 + \delta x_0$. These are mapped by $f_r$ to $x_1$ and $x_1 + \delta x_1$, then $x_2$ and $x_2 + \delta x_2$, up to $x_n + \delta x_n$.
How are the $\delta x_i$ related? Each $\delta x_i$ is small, so use a Taylor expansion!
Expanding $f(x)$ about $x_n$:
$\begin{align}
f(x_i &+ \delta x_i) = &f(x_i) + \delta x_i f'(x_i) + \frac{1}{2}(\delta x_i)^2 f''(x_i) + \cdots \\
\end{align}$
so, neglecting terms of order $(\delta x_i)^2$ and higher,
$\begin{align}
x_{i+1} + \delta x_{i+1} \approx x_{i+1} + \delta x_i f'(x_i)
\quad\mbox{or}\quad \delta x_{i+1} \approx f'(x_i)\, \delta x_i \;.
\end{align}$
Iterating this result we get
$\begin{align}
\delta x_1 &= f'(x_0)\, \delta x_0 \\
\delta x_2 &= f'(x_1)\, \delta x_1 = f'(x_1)\times f'(x_0)\, \delta x_0 \\
& \qquad\vdots \\
|\delta x_n| &= \left(\prod_{i=0}^{n-1} |f'(x_i)|\right) |\delta x_0| \;,
\end{align}$
where the last equation gives us the separation of two trajectories after $n$ steps $\delta x_n$ in terms of their
initial separation, $\delta x_0$. (Why can we just take the absolute value of all the terms?)
We expect that this will vary exponentially at large $n$ like
$\begin{align}
\left| \frac{\delta x_n}{\delta x_0} \right| = e^{\lambda n}
\end{align}$
and so we define the Lypunov exponent $\lambda$ by
$\begin{align}
\lambda = \lim_{n\rightarrow\infty} \frac{1}{n} \sum_{i=1}^{n} \ln |f'(x_i)| \;,
\end{align}$
which in practice is well approximated by the sum for large $n$.
If $\lambda > 0$, then nearby trajectories diverge from each other exponentially at large $n$, which corresponds to chaos. However, if the
trajectories converge to a fixed point or a limit cycle, they will get closer together with increasing $n$, which corresponds to $\lambda<0$.
Ok, let's do it!
```python
n = 10000
# Here we'll use n values of r linearly spaced between 2.8 and 4.0
### You may want to change the range of r for other maps
r = np.linspace(2.8, 4.0, n)
iterations = 1000 # iterations of logistic map; keep last iterations
last = 100 # where results should have stabilized
x = 0.1 * np.ones(n) # x_0 initial condition
### (you may want to change for other maps)
lyapunov = np.zeros(n) # initialize vector used for the Lyapunov sums
```
```python
fig = plt.figure(figsize=(8,9))
ax1 = fig.add_subplot(2,1,1) # bifurcation diagram
ax2 = fig.add_subplot(2,1,2) # Lyapunov exponent
# Display the bifurcation diagram with one pixel per point x_n^(r) for last iterations
for i in range(iterations):
x = logistic(r,x) # just iterate: x_{i+1} = f_r(x_i)
# Compute the partial sum of the Lyapunov exponent, which is the sum
# of derivatives of the logistic function (absolute value)
lyapunov += np.log(abs(logistic_deriv(r,x)))
# Display the bifurcation diagram.
if i >= (iterations-last): # only plot the last iterations
ax1.plot(r, x, ',k', alpha=0.25)
ax1.set_xlim(2.8, 4)
ax1.set_xlabel(r'$r$')
ax1.set_title("Bifurcation")
# Display the Lyapunov exponent
# Negative Lyapunov exponent
ax2.plot(r[lyapunov < 0],
lyapunov[lyapunov < 0] / iterations, 'o',
color='black', alpha=0.5, ms=0.5)
# Positive Lyapunov exponent
ax2.plot(r[lyapunov >= 0],
lyapunov[lyapunov >= 0] / iterations, 'o',
color='red', alpha=0.5, ms=0.5)
# Add a zero line (lightened with alpha=0.5)
ax2.axhline(0, color='k', lw=0.5, alpha=0.5)
ax2.set_xlim(2.8, 4)
ax2.set_ylim(-2, 1)
ax2.set_xlabel(r'$r$')
ax2.set_title("Lyapunov exponent")
plt.tight_layout()
```
We see there is a fixed point for $r < 3$, then two and four periods and a chaotic behavior when $r$ belongs to certain areas of the parameter space. **Do the values of $r$ where these behaviors occur agree with what you found in the plots with particular $r$ values?**
As for the pendulum, the Lyapunov exponent is positive when the system is chaotic (in red). **Do these regions look consistent with the characteristics of chaos?**
```python
```
| 97d235cf99b041bdfc800d864b4ef97487847a24 | 126,205 | ipynb | Jupyter Notebook | 2020_week_3/Logistic_map_explorations.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 2020_week_3/Logistic_map_explorations.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 2020_week_3/Logistic_map_explorations.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 296.952941 | 94,040 | 0.916176 | true | 3,297 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.835484 | 0.719801 | __label__eng_Latn | 0.991522 | 0.510671 |
# Monte Carlo Calculation of π
### Christina C. Lee
### Category: Numerics
### Monte Carlo Physics Series
* [Monte Carlo: Calculation of Pi](../Numerics_Prog/Monte-Carlo-Pi.ipynb)
* [Monte Carlo Markov Chain](../Numerics_Prog/Monte-Carlo-Markov-Chain.ipynb)
* [Monte Carlo Ferromagnet](../Prerequisites/Monte-Carlo-Ferromagnet.ipynb)
* [Phase Transitions](../Prerequisites/Phase-Transitions.ipynb)
## Monte Carlo- Random Numbers to Improve Calculations
When one hears "Monte Carlo", most people might think of something like this:
Monte Carlo, Monaco: known for extremely large amounts of money, car racing, no income taxes,and copious gambling.
In addition to Monaco, Europe, Las Vegas decided to host a Monte Carlo-themed casino as well. So during the Manhattan project, when the best minds in the United States were camped out in the New Mexican desert, they had plenty of inspiration from Las Vegas, and plenty of difficult problems to work on in the form of quantifying the inner workings of nuclei. Enrico Fermi first played with these ideas, but Stanislaw Ulam invented the modern Monte Carlo Markov Chain later.
At the same time, these scientists now had computers at their disposal. John von Neumann programmed Ulam's algorithm onto ENIAC (Electronic Numerical Integrator and Computer), the very first electronic, general purpose computer, even though it did still run on vacuum tubes.
That still doesn't answer, why do random numbers actually help us solve problems?
Imagine you are visiting a new city for the first time (maybe Monte Carlo). You only have a day or two, and you want to really get to know the city. You have two options for your visit
* Hit the tourist sites you researched online
* Wander around. Try and communicate with the locals. Find an out-of-the-way restaurant and sample food not tamed for foreigners. Watch people interact. Get lost.
Both are legitimate ways to see the city. But depending on what you want, you might choose a different option. The same goes for exploring physics problems. Sometimes you want to go in and study just everything you knew about beforehand, but sometimes you need to wander around, not hit everything, but get a better feeling for what everything might be like.
## Buffon's Needle: Calculation of π
Even back in the 18th century, Georges-Louis Leclerc, Comte de Buffon posed a problem in geometric probability. Nowadays, we use a slightly different version of that problem to calculate π and illustrate Monte Carlo simulations.
Suppose we have a square dartboard, and someone with really bad, completely random aim, even though he/she always at least hits inside the dartboard. We then inscribe a circle inside that dartboard. After an infinite number of hits, what is the ratio between hits in the circle, and hits in the square?
\begin{equation}
f= \frac{N_{circle}}{N_{square}} =\frac{\text{Area of circle}}{\text{Area of square}} =\frac{\pi r^2}{4 r^2}= \frac{\pi}{4}
\end{equation}
\begin{equation}
\pi = 4 f
\end{equation}
## Onto the Code!
```julia
using Statistics
using Plots
gr()
```
Plots.GRBackend()
We will generate our random numbers on the unit interval. Thus the radius in our circumstance is $.5$.
Write a function `incircle(r2)` such that if `r2` is in the circle, it returns true, else, it returns false. We will use this with the Julia function `filter`. Assume `r2` is the radius squared, and already centered around the middle of the unit circle
```julia
function incircle(r2)
if r2<.25
return true
else
return false
end
end
```
incircle (generic function with 1 method)
```julia
#The number of darts we will throw at the board. We will see how accurate different numbers are
N=[10,25,50,75,100,250,500,750,1000,2500,5000,7500,10000];
# We will perform each number multiple times in order to calculate error bars
M=15;
```
```julia
πapprox=zeros(Float64,length(N),M);
for ii in 1:length(N)
for jj in 1:M
#popular our array with random numbers on the unit interval
X=rand(N[ii],2)
#calculate their radius squared
R2=(X[:,1].-0.5).^2.0.+(X[:,2].-0.5).^2
# 4*number in circle / total number
πapprox[ii,jj]=4.0*length(filter(incircle,R2))/N[ii];
end
end
```
```julia
# Get our averages and standard deviations
πave=mean(πapprox,dims=2);
πstd=std(πapprox,dims=2);
```
## Analysis
So that was a nice, short little piece of code. Lets plot it now to see means.
```julia
plot(N,π*ones(length(N)),xscale=:log10);
for j in 1:M
scatter!(N,πapprox[:,j]);
end
scatter!(N,πave,yerr=πstd)
plot!(xlabel="Number of Darts",ylabel="pi Estimate",
title="Monte Carlo Estimate of pi",legend=:false)
```
When we have fewer numbers of points, our estimates vary much more wildly, and much further from 3.1415926 .
But, at least, the guesses from our different runs all seem equally distributed around the correct value, so it seems we have no systematic error.
As we get up to $10^4$, our estimate starts getting much more accurate and consistent.
```julia
plot(N,πstd,xscale=:log10)
plot!(xlabel="N points"
,ylabel="standard deviation"
,title="Dependence of Monte Carlo Error on Number of Points")
```
So what we guessed in the first plot about dispersion in estimate, we quantify here in this plot. When we only have 10 darts, the guesses vary by up to .3, but when we get down to 1,000 trials, we are starting to be consistent to .0002
```julia
plot(N,πave,xscale=:log10,label="π Guess")
plot!(N,π*ones(length(N)),label="π")
plot!(xlabel="N steps"
,ylabel="Average of 15 runs"
,title="Overall Averages")
```
Now lets just make a graphical representation of what we've been doing this whole time. Plot our points on a unit square, and color the ones inside the circle a different color.
```julia
X=rand(1000);
Y=rand(1000);
R2=(X.-0.5).^2.0.+(Y.-0.5).^2;
Xc=[];
Yc=[]
for ii in 1:length(X)
if R2[ii]<.25
push!(Xc,X[ii]);
push!(Yc,Y[ii]);
end
end
```
```julia
scatter(X,Y)
scatter!(Xc,Yc)
plot!(aspect_ratio=1,xlabel="X",ylabel="Y",legend=:false,
title="Dartboard")
```
That's all folks!
Now here's a picture of some pie to congratulate you on calculating π.
<sub>By Scott Bauer, USDA ARS - This image was released by the Agricultural Research Service, the research agency of the United States Department of Agriculture, with the ID K7252-47 (next).This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing for more information.English | français | македонски | +/−, Public Domain, https://commons.wikimedia.org/w/index.php?curid=264106 </sub>
@article{Markov,
author = {A. Markov , A},
year = {2006},
month = {12},
pages = {591 - 600},
title = {An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains},
volume = {19},
journal = {Science in Context},
doi = {10.1017/S0269889706001074}
}
```julia
```
| 51563c8beb54d912180cc1a555b42cad79f62be6 | 627,343 | ipynb | Jupyter Notebook | Numerics_Prog/Monte-Carlo-Pi.ipynb | albi3ro/M4 | ccd27d4b8b24861e22fe806ebaecef70915081a8 | [
"MIT"
] | 22 | 2015-11-15T08:47:04.000Z | 2022-02-25T10:47:12.000Z | Numerics_Prog/Monte-Carlo-Pi.ipynb | albi3ro/M4 | ccd27d4b8b24861e22fe806ebaecef70915081a8 | [
"MIT"
] | 11 | 2016-02-23T12:18:26.000Z | 2019-09-14T07:14:26.000Z | Numerics_Prog/Monte-Carlo-Pi.ipynb | albi3ro/M4 | ccd27d4b8b24861e22fe806ebaecef70915081a8 | [
"MIT"
] | 6 | 2016-02-24T03:08:22.000Z | 2022-03-10T18:57:19.000Z | 125.318218 | 484 | 0.576669 | true | 1,883 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.779993 | 0.644829 | __label__eng_Latn | 0.992451 | 0.336485 |
We have a square matrix $R$. We consider the error for $T = R'R$ where $R'$ is the transpose of $R$.
The elements of $R$ are $r_{i,j}$, where $i = 1 \dots N, j = 1 \dots N$.
$r_{i, *}$ is row $i$ of $R$.
Now let $R$ be a rotation matrix. $T$ at infinite precision will be the identity matrix $I$
Assume the maximum error in the specification of values $r_{i, j}$ is constant, $\delta$. That is, any floating point value $r_{i, j}$ represents an infinite precision value between $r_{i, j} \pm \delta$
```
from sympy import Symbol, symarray, Matrix, matrices, simplify, nsimplify
R = Matrix(symarray('r', (3,3)))
R
```
[r_0_0, r_0_1, r_0_2]
[r_1_0, r_1_1, r_1_2]
[r_2_0, r_2_1, r_2_2]
```
T = R.T * R
T
```
[ r_0_0**2 + r_1_0**2 + r_2_0**2, r_0_0*r_0_1 + r_1_0*r_1_1 + r_2_0*r_2_1, r_0_0*r_0_2 + r_1_0*r_1_2 + r_2_0*r_2_2]
[r_0_0*r_0_1 + r_1_0*r_1_1 + r_2_0*r_2_1, r_0_1**2 + r_1_1**2 + r_2_1**2, r_0_1*r_0_2 + r_1_1*r_1_2 + r_2_1*r_2_2]
[r_0_0*r_0_2 + r_1_0*r_1_2 + r_2_0*r_2_2, r_0_1*r_0_2 + r_1_1*r_1_2 + r_2_1*r_2_2, r_0_2**2 + r_1_2**2 + r_2_2**2]
Now the same result with error $\delta$ added to each element
```
d = Symbol('d')
E = matrices.ones((3,3)) * d
RE = R + E
RE
```
[d + r_0_0, d + r_0_1, d + r_0_2]
[d + r_1_0, d + r_1_1, d + r_1_2]
[d + r_2_0, d + r_2_1, d + r_2_2]
Calculate the result $T$ with error
```
TE = RE.T * RE
TE
```
[ (d + r_0_0)**2 + (d + r_1_0)**2 + (d + r_2_0)**2, (d + r_0_0)*(d + r_0_1) + (d + r_1_0)*(d + r_1_1) + (d + r_2_0)*(d + r_2_1), (d + r_0_0)*(d + r_0_2) + (d + r_1_0)*(d + r_1_2) + (d + r_2_0)*(d + r_2_2)]
[(d + r_0_0)*(d + r_0_1) + (d + r_1_0)*(d + r_1_1) + (d + r_2_0)*(d + r_2_1), (d + r_0_1)**2 + (d + r_1_1)**2 + (d + r_2_1)**2, (d + r_0_1)*(d + r_0_2) + (d + r_1_1)*(d + r_1_2) + (d + r_2_1)*(d + r_2_2)]
[(d + r_0_0)*(d + r_0_2) + (d + r_1_0)*(d + r_1_2) + (d + r_2_0)*(d + r_2_2), (d + r_0_1)*(d + r_0_2) + (d + r_1_1)*(d + r_1_2) + (d + r_2_1)*(d + r_2_2), (d + r_0_2)**2 + (d + r_1_2)**2 + (d + r_2_2)**2]
Subtract the true result to get the absolute error
```
TTE = TE-T
TTE.simplify()
TTE
```
[ d*(3*d + 2*r_0_0 + 2*r_1_0 + 2*r_2_0), d*(3*d + r_0_0 + r_0_1 + r_1_0 + r_1_1 + r_2_0 + r_2_1), d*(3*d + r_0_0 + r_0_2 + r_1_0 + r_1_2 + r_2_0 + r_2_2)]
[d*(3*d + r_0_0 + r_0_1 + r_1_0 + r_1_1 + r_2_0 + r_2_1), d*(3*d + 2*r_0_1 + 2*r_1_1 + 2*r_2_1), d*(3*d + r_0_1 + r_0_2 + r_1_1 + r_1_2 + r_2_1 + r_2_2)]
[d*(3*d + r_0_0 + r_0_2 + r_1_0 + r_1_2 + r_2_0 + r_2_2), d*(3*d + r_0_1 + r_0_2 + r_1_1 + r_1_2 + r_2_1 + r_2_2), d*(3*d + 2*r_0_2 + 2*r_1_2 + 2*r_2_2)]
```
TTE[0,0], TTE[1,1], TTE[2,2]
```
(d*(3*d + 2*r_0_0 + 2*r_1_0 + 2*r_2_0),
d*(3*d + 2*r_0_1 + 2*r_1_1 + 2*r_2_1),
d*(3*d + 2*r_0_2 + 2*r_1_2 + 2*r_2_2))
Assuming $\delta$ is small ($\delta^2$ is near zero) then the diagonal values $TTE_{k, k}$ are approximately $2\delta \Sigma_i r_{i, k}$
$\Sigma_i r_{i, k}$ is the column sum for column $k$ of $R$. We know that the column $L^2$ norms of $R$ are each 1. We know $\|x\|_1 \leq \sqrt{n}\|x\|_2$ - http://en.wikipedia.org/wiki/Lp_space. Therefore the column sums must be $\le \sqrt{N}$. Therefore the maximum error for the diagonal of $T$ is $\sqrt{N} 2\delta$.
More generally, the elements $k, m$ of $TTE$ are approximately $\delta (\Sigma_i{r_{i, k}} + \Sigma_i{r_{i, m}})$
So the error for each of the elements of $TTE$ is also bounded by $\sqrt{N} 2\delta$.
Now consider the floating point calculation error. This depends on the floating point representation we use for the calculations. Let $\epsilon = x-1$ where $x$ is the smallest number greater than 1 that is representable in our floating point format (see http://matthew-brett.github.com/pydagogue/floating_error.html). The largest error for a calculation resulting in a value near 1 is $\frac{\epsilon}{2}$. For the diagonal values, the calculation error will be the error for the $r_{i,*} r_{i, *}'$ dot product. This comprises $N$ scalar products with results each bounded by 1 ($r_{i, j} r_{i, j}$) followed by $N-1$ sums each bounded by 1. Maximum error is therefore $(2N-1) \frac{\epsilon}{2}$ = $\frac{5}{2} \epsilon$ where $N=3$.
For the off-diagonal values $T_{k, m}$, we have the $r_{k,*} r_{m, *}'$ dot product. Because $R$ is a rotation matrix, by definition this result must be zero.
Because the column and row $L^2$ norms of $R$ are 1, the values in $R$ cannot be greater than 1. Therefore $r_{k,*} r_{m, *}'$ consists of the $N$ products with results each bounded by 1 ($r_{k, j} r_{m, j}$) followed by $N-2$ sums each bounded by 1 (the last of the sums must be approximately 0). Maximum error is therefore $(2N-2) \frac{\epsilon}{2}$ = $2\epsilon$ where $N=3$.
So, assuming an initial error of $\delta$ per element, and $N=3$, the maximum error for diagonal elements is $\sqrt{3} 2 \delta + \frac{5}{3} \epsilon$. For the off-diagonal elements it is $\sqrt{3} 2 \delta + 2 \epsilon$.
| cf35cda2bee9298d5a963b9e546723741eeb8ee2 | 8,608 | ipynb | Jupyter Notebook | doc/source/notebooks/ata_error.ipynb | tobon/nibabel | ff2b5457207bb5fd6097b08f7f11123dc660fda7 | [
"BSD-3-Clause"
] | 1 | 2015-10-01T01:13:59.000Z | 2015-10-01T01:13:59.000Z | doc/source/notebooks/ata_error.ipynb | tobon/nibabel | ff2b5457207bb5fd6097b08f7f11123dc660fda7 | [
"BSD-3-Clause"
] | 2 | 2015-11-13T03:05:24.000Z | 2016-08-06T19:18:54.000Z | doc/source/notebooks/ata_error.ipynb | tobon/nibabel | ff2b5457207bb5fd6097b08f7f11123dc660fda7 | [
"BSD-3-Clause"
] | 1 | 2019-02-27T20:48:03.000Z | 2019-02-27T20:48:03.000Z | 34.294821 | 754 | 0.492565 | true | 2,292 | Qwen/Qwen-72B | 1. YES
2. YES | 0.928409 | 0.903294 | 0.838626 | __label__eng_Latn | 0.913562 | 0.786743 |
```python
import sympy as sp
sp.init_printing()
```
Analyitc solution for CSTR of 2 A -> B
```python
symbs = t, f, fcA, fcB, IA, IB, k, c1, c2 = sp.symbols('t f phi_A phi_B I_A I_B k c1 c2', real=True, positive=True)
symbs
```
```python
def analytic(t, f, fcA, fcB, IA, IB, k, c1, c2):
u = sp.sqrt(f*(f + 4*fcA*k))
v = u*sp.tanh(u/2*(t - c1))
return [
(-f + v)/(2*k),
sp.exp(-f*t)*c2 + (f + 2*(fcA + fcB)*k - v)/(2*k)
]
```
```python
exprs = analytic(*symbs)
exprs
```
```python
exprs0 = [expr.subs(t, 0) for expr in exprs]
exprs0
```
```python
sol = sp.solve([expr - c0 for expr, c0 in zip(exprs0, [IA, IB])], [c1, c2])
sol
```
```python
exprs2 = [expr.subs(dict(zip([c1, c2], sol[1]))) for expr in exprs]
exprs2
```
```python
[expr.subs(t, 0).simplify() for expr in exprs2]
```
```python
%matplotlib inline
import numpy as np
from chempy import ReactionSystem
from chempy.kinetics.ode import get_odesys
rsys = ReactionSystem.from_string("OH + OH -> H2O2; 'k'")
cstr, extra = get_odesys(rsys, include_params=False, cstr=True)
fr, fc = extra['cstr_fr_fc']
print(cstr.names, cstr.param_names)
cstr.exprs
```
```python
args = tout, c0, params = np.linspace(0, .17), {'OH': 2, 'H2O2': 3}, {'k': 5, fc['OH']: 42, fc['H2O2']: 11, fr: 13}
res = cstr.integrate(*args)
res.plot()
```
```python
def analytic_alt(t, f, fcA, fcB, IA, IB, k):
u = sp.sqrt(f*(f + 4*fcA*k))
q = sp.atanh(-u*(sp.sqrt(f) + 2*k*IA)/(f*(f + 4*fcA*k)))
v = u*sp.tanh(u/2*t - q)
w = sp.exp(-f*t)/2/k
y = 2*k*(fcA + fcB)
return [
(-f + v)/(2*k),
w*(sp.exp(f*t)*(f + y - v) - y + 2*k*(IA + IB))
]
```
```python
def analytic_alt2(t, f, fcA, fcB, IA, IB, k, n):
one_point_five = sp.S(3)/2
a0, b0, x, y = fcA, fcB, IA, IB
Sqrt, Tanh, ArcTanh, E = sp.sqrt, sp.tanh, sp.atanh, sp.E
return [
(-f + Sqrt(f)*Sqrt(f + 8*a0*k)*
Tanh((Sqrt(f)*Sqrt(f + 8*a0*k)*t - 2*ArcTanh((-(f**one_point_five*Sqrt(f + 8*a0*k)) - 4*Sqrt(f)*k*Sqrt(f + 8*a0*k)*x)/(f**2 + 8*a0*f*k)))/2)
)/(4*k),
(-8*b0*k + 8*b0*E**(f*t)*k + E**(f*t)*f*n - 4*a0*k*n + 4*a0*E**(f*t)*k*n + 4*k*n*x + 8*k*y -
E**(f*t)*Sqrt(f)*Sqrt(f + 8*a0*k)*n*Tanh((Sqrt(f)*Sqrt(f + 8*a0*k)*
(t - (2*ArcTanh((-(f**one_point_five*Sqrt(f + 8*a0*k)) - 4*Sqrt(f)*k*Sqrt(f + 8*a0*k)*x)/(f**2 + 8*a0*f*k)))/(Sqrt(f)*Sqrt(f + 8*a0*k))))
/2))/(8*E**(f*t)*k)
]
# return [
# (-f + Sqrt(f)*Sqrt(f + 8*a0*k)*
# Tanh((Sqrt(f)*Sqrt(f + 8*a0*k)*t - 2*ArcTanh((-(f**one_point_five*Sqrt(f + 8*a0*k)) - 4*Sqrt(f)*k*Sqrt(f + 8*a0*k)*x)/(f**2 + 8*a0*f*k)))/2)
# )/(4*k),
# (E**(f*t)*f - 4*a0*k - 8*b0*k + 4*a0*E**(f*t)*k + 8*b0*E**(f*t)*k + 4*k*x + 8*k*y -
# E**(f*t)*Sqrt(f)*Sqrt(f + 8*a0*k)*Tanh((Sqrt(f)*Sqrt(f + 8*a0*k)*
# (t - (2*ArcTanh((-(f**one_point_five*Sqrt(f + 8*a0*k)) - 4*Sqrt(f)*k*Sqrt(f + 8*a0*k)*x)/(f**2 + 8*a0*f*k)))/(Sqrt(f)*Sqrt(f + 8*a0*k))))
# /2))/(8*E**(f*t)*k)
# ]
```
```python
n = sp.Symbol('n')
exprs_alt = analytic_alt2(*symbs[:-2], n)
exprs_alt
```
```python
cses, expr_cse = sp.cse([expr.subs({fcA: sp.Symbol('fr'), fcB: sp.Symbol('fp'), f: sp.Symbol('fv'),
IA: sp.Symbol('r'), IB: sp.Symbol('p')}) for expr in exprs_alt])
```
```python
print(
'\n'.join(['%s = %s' % (lhs, rhs) for lhs, rhs in cses] + ['return (\n %s\n)' % str(expr_cse)[1:-1]]).replace(
'3/2', 'three/2').replace(
'exp', 'be.exp').replace(
'sqrt', 'be.sqrt').replace(
'atanh', 'ATANH').replace(
'tanh', 'be.tanh\n ').replace(
'ATANH', 'atanh')
)
```
```python
[expr.subs(t, 0).simplify() for expr in exprs_alt]
```
```python
[expr.diff(t).subs(t, 0).simplify() for expr in exprs_alt]
```
```python
print(list(rsys.substances))
rsys
```
```python
print(cstr.names, cstr.param_names)
cstr.exprs
```
```python
symbs[:-2]
```
```python
_cb = sp.lambdify(symbs[:-2] + (n,), exprs_alt)
def calc_analytic(xout, y0, p):
return _cb(xout, p[fr], p[fc['OH']], p[fc['H2O2']], y0['OH'], y0['H2O2'], p['k'], 1)
```
```python
def get_analytic(result):
ref = calc_analytic(
result.xout,
{k: res.get_dep(k)[0] for k in result.odesys.names},
{k: res.get_param(k) for k in result.odesys.param_names})
return np.array([ref[{'OH': 0, 'H2O2': 1}[k]] for k in result.odesys.names]).T
```
```python
yref = get_analytic(res)
print(yref.shape)
```
```python
res.plot()
res.plot(y=yref)
```
```python
res.plot(y=res.yout - yref)
```
```python
from chempy.kinetics.integrated import binary_irrev_cstr
```
```python
def get_analytic2(result):
ref = binary_irrev_cstr(result.xout, result.get_param('k'), result.get_dep('OH')[0],
result.get_dep('H2O2')[0], result.get_param(fc['OH']), result.get_param(fc['H2O2']),
result.get_param(fr))
return np.array([ref[{'OH': 0, 'H2O2': 1}[k]] for k in result.odesys.names]).T
```
```python
res.plot(y=res.yout - get_analytic2(res))
```
| 68efe3a9973f0570150c175ba3cef510f8ef23a9 | 10,081 | ipynb | Jupyter Notebook | chempy/kinetics/tests/_derive_analytic_cstr_bireac.ipynb | Narsil/chempy | ac7217f45a8cfe3b11ca771f78f0a04c07708818 | [
"BSD-2-Clause"
] | null | null | null | chempy/kinetics/tests/_derive_analytic_cstr_bireac.ipynb | Narsil/chempy | ac7217f45a8cfe3b11ca771f78f0a04c07708818 | [
"BSD-2-Clause"
] | null | null | null | chempy/kinetics/tests/_derive_analytic_cstr_bireac.ipynb | Narsil/chempy | ac7217f45a8cfe3b11ca771f78f0a04c07708818 | [
"BSD-2-Clause"
] | 1 | 2022-03-21T09:01:48.000Z | 2022-03-21T09:01:48.000Z | 23.945368 | 155 | 0.472275 | true | 2,039 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.851953 | 0.727493 | __label__kor_Hang | 0.158695 | 0.528543 |
# Solar Panel Power
This notebook calculates the various parameters of the shadow cast by dipole antennae on a panel under them using a simple Monte Carlo algorithm. The light source is assumed to be point-like and at infinity.
The setup is as follows:
We assume the antennae to be cylinders of radius $r_0=3cm$, at angles $\theta = \theta_0 \sim \pi/2$ and $\phi = \phi_0, \phi_0+\pi/2, \phi_0+\pi, \phi_0+3\pi/2$ starting from $(0,0,h=0.1)$. The panel is a square surface of side $s=1m$ centered at the origin.
We assume the positive x-axis to point North, and the positive y-axis to point East. This gives the 3D setup as follows:
<div>
</div>
In the Notebook below, we will compute the area and shape of the antenna shadow using a simple Monte Carlo technique.
### Algorithm: Monte Carlo Shadow Finder
The essential idea is to compute whether a vector to the sun from a random point on the panel is blocked by the geometry of the antennae. This allows us to freely change the geometry of the problem at a later stage and also simplifies the computations required.
Assuming the positive x-axis to point North, the positive y-axis points West. Then the Sun at a given Alt-Az corresponds to the polar location
\begin{equation}
\theta_\odot = \pi/2 - Alt \\ \phi_\odot = 2\pi - Az
\end{equation}
After randomly choosing a point $P(p_x,p_y,0)$ on the panel, we construct the vector to the sun at a given Alt-Az by using its polar location:
\begin{equation}
\vec{S_\odot} = \sin(\theta_\odot) \cos(\phi_\odot) \hat{i} + \sin(\theta_\odot) \sin(\phi_\odot) \hat{j} + \cos(\theta_\odot) \hat{k}
\end{equation}
For the four antennae starting from $(0,0,h=0.1)$, we can define their axis vectors similarly:
\begin{equation}
\vec{A_i} = \sin(\theta_i) \cos(\phi_i) \hat{i} + \sin(\theta_i) \sin(\phi_i) \hat{j} + \cos(\theta_i) \hat{k}
\end{equation}
Now we check the shortest distance $d$ between the sun vector passing through $P(p_x,p_y,0)$, and the axis vector passing through $A(0,0,h)$:
\begin{equation}
d = |(\vec{P}-\vec{A}) \cdot \hat{n} |
\end{equation}
where $\hat{n}$ is the unit vector along $\vec{S_\odot} \times \vec{A_i}$. If this distance is less than the radius of the antenna, we conclude that the sun vector from P is blocked by the antenna.
Thus, for every randomly sampled point, we can check if the point is in shadow (blue) or not (orange), below:
<div>
</div>
### Implementation:
I. For every Sun Alt-Az, we will sample the panel with randomly chosen points and save them as lists `inShadow` or `notInShadow` depending on the condition above.
-Since the sampler uses a uniform distribution on the panel, the area of the shadow is simply the ratio $\frac{\#inShadow}{(\#inShadow \ + \ \#notInShadow)}$
-For the given latitudes, we precompute ~5000 points for every Sun position and save the lists inside `shadow_samples` labelled by the latitude and the Sun's Alt-Az. These lists can be appended to, in case further accuracy is required.
II. Using the lists we can now easily plot the points that were in shadow, and those that were not. Also, the area of the shadow is calculated and saved.
III. To obtain a function for the shadow area, we interpolate along the solar track (parametrized for now by the Alt-Az index itself).
For a uniform distribution of random samples, generating more samples improves accuracy in general. However, it gives diminishing returns on computation time, since most points are sampled from 'obviously' not-in-shadow areas. The appropriate way to improve this code is to implement a 'guess' distribution for the sampler that is close to the shadowed region - allowing for optimal resolution of the two areas.
This code can check ~5000 samples/second for each position, on a laptop with Intel i7-3rd Gen. For ~1400 positions per latitude, it takes about ~20 mins per latitude to sample all positions. Arguably, it might not be necessary to sample all 1400 positions since we can interpolate later. Total storage required is ~360MB per latitude.
### Imports
##### Dependencies
`lusee`, `numpy`, `matplotlib`, `scipy`
`ImageMagick` for GIF animations
```python
import lusee
```
```python
from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import norm
from scipy.interpolate import interp1d
import os
```
### Helper Function Definitions
```python
def luseepy_sun_AltAz(night, lat):
## we use trickery to get sun latitude by setting long to zero
obs = lusee.LObservation(night, lun_lat_deg=lat, lun_long_deg=0, deltaT_sec=15*60)
alt, az = obs.get_track_solar('sun')
w=np.where(alt>0)
return (alt[w],az[w])
def sun_vector(sun_alt, sun_az):
# returns the Sun's polar coordinates given its Alt-Az
# assumes that light source is at infinity
# Note: ϕ=0 is North, ϕ=π/2 is West, ϕ=π is South, and ϕ=3π/2 is East, thus ϕ = 2π - azimuth
# Note: θ=0 is Zenith, θ=π/2 is Horizon, thus θ = π/2 - altitude
θ=np.pi/2 - sun_alt
ϕ=2*np.pi - sun_az
sun_vec = np.array([np.sin(θ)*np.cos(ϕ),np.sin(θ)*np.sin(ϕ),np.cos(θ)])
return sun_vec
def random_point(side):
rand_x=side*(np.random.random()-0.5)
rand_y=side*(np.random.random()-0.5)
rand_z=0
return np.array([rand_x, rand_y, rand_z])
def to2D(point):
x,y,z=point
return np.array([x,y])
def antenna_axis_vector(θ, ϕ):
axis_vec = np.array([np.sin(θ)*np.cos(ϕ),np.sin(θ)*np.sin(ϕ),np.cos(θ)])
return axis_vec
def distance(a1, v1, a2, v2):
# return distance between two lines defined as \vec{a}+t*\vec{b}
a1=np.array(a1)
a2=np.array(a2)
normal_vec=np.cross(v1,v2)
nhat=normal_vec/norm(normal_vec)
distance = abs(np.dot(a2-a1, nhat))
return distance
def shadow_check(point, sun_vec, antenna_origin, antenna_axis_vec, antenna_radius):
distance_to_axis = distance(point, sun_vec, antenna_origin, antenna_axis_vec)
shadow=True if distance_to_axis<antenna_radius else False
return shadow
def monte_carlo_shadow(sun_vec, nsamples=1000):
antenna_origin=np.array([0.0,0.0,0.1])
antenna_axis_vec1=antenna_axis_vector(θ_antenna,ϕ_antenna)
antenna_axis_vec2=antenna_axis_vector(θ_antenna,ϕ_antenna+np.pi/2)
inShadow=[]
notInShadow=[]
for i in range(nsamples):
point=random_point(side)
shadowed1=shadow_check(point,sun_vec, antenna_origin, antenna_axis_vec1, antenna_radius=radius)
shadowed2=shadow_check(point,sun_vec, antenna_origin, antenna_axis_vec2, antenna_radius=radius)
shadowed = shadowed1 or shadowed2
if shadowed:
inShadow.append(to2D(point))
else:
notInShadow.append(to2D(point))
return inShadow, notInShadow
def save_shadow_samples(night, lat, sun_alt, sun_az, inShadow, notInShadow):
cwd=os.getcwd()
path=f'/shadow_samples/night{night}_lat{lat}/'
if not os.path.exists(cwd+path): os.makedirs(cwd+path)
fname=f'sunAltAz_{sun_alt}_{sun_az}_'
np.savetxt(cwd+path+fname+'inShadow.txt',inShadow)
np.savetxt(cwd+path+fname+'noInShadow.txt',notInShadow)
def load_shadow_samples(night,lat, sun_alt, sun_az):
path=f'./shadow_samples/night{night}_lat{lat}/'
fname=f'sunAltAz_{sun_alt}_{sun_az}_'
inShadow=np.loadtxt(path+fname+'inShadow.txt')
notInShadow=np.loadtxt(path+fname+'noInShadow.txt')
return inShadow, notInShadow
def plot_shadow_area_function(lat,**kwargs):
shadow_area=np.loadtxt(f'./shadow_area/lat{lat}.txt')
sun_altaz_index=range(len(shadow_area))
shadow_area_function=interp1d(sun_altaz_index,shadow_area)
ax=plt.gca()
ax.plot(sun_altaz_index,shadow_area_function(sun_altaz_index),**kwargs)
def plot_shadow_area(lat, sun_alt, sun_az,**kwargs):
inShadow, notInShadow=load_shadow_samples(lat,sun_alt,sun_az)
ax=plt.gca()
ax.scatter(inShadow,**kwargs)
```
### Panel, Antennae, Sun Alt-Az Setup
```python
# set up panel constants
side=1.0
# set up antennae constants
length=6.0; radius=0.015; θ_antenna=np.pi/2.0; ϕ_antenna=30.0*np.pi/180.0
height_above_panel=0.1
antenna_origin_offset=(0.0,0.0,height_above_panel)
# set up luseepy Sun Alt-Az
lat=30
night = 2500
```
### Run Monte Carlo Shadow Finder
```python
## Est. runtime ~20 mins
print(f'calculating Sun AltAz for night={night} and lat={lat}')
sun_alt,sun_az=luseepy_sun_AltAz(night,lat)
npositions=len(sun_alt)
print('done, now calculating shadow samples...')
shadow_area=[]
for i,altaz in enumerate(zip(sun_alt,sun_az)):
#percent progress
if 10*i//npositions!=10*(i-1)//npositions: print(f'{100*i//npositions}% completed')
#calculate if inShadow or notInShadow
sun_vec=sun_vector(*altaz)
inShadow, notInShadow=monte_carlo_shadow(sun_vec, nsamples=5000)
save_shadow_samples(night, lat,*altaz,inShadow,notInShadow)
#calculate shadow area
area=len(inShadow)/(len(inShadow)+len(notInShadow))
shadow_area.append(area)
#save files
np.savetxt(f'sun_AltAz_night{night}_lat{lat}.txt',np.column_stack([sun_alt,sun_az]))
np.savetxt(f'shadow_area_night{night}_lat{lat}.txt', shadow_area)
print('done!')
```
calculating Sun AltAz for night=2500 and lat=30
done, now calculating shadow samples...
0% completed
10% completed
20% completed
30% completed
40% completed
50% completed
60% completed
70% completed
80% completed
90% completed
done!
```python
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# If you have run and saved all samples, you can change these values to see individual frames
sun_altaz_index=1000
#<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
%matplotlib inline
fig,ax=plt.subplots(1,2, figsize=(12,6))
sun_alt,sun_az=np.loadtxt(f'sun_AltAz_night{night}_lat{lat}.txt', unpack=True)
# panel shadow plot
inShadow, notInShadow = load_shadow_samples(night,lat,sun_alt[sun_altaz_index],sun_az[sun_altaz_index])
if len(inShadow)!=0: ax[0].scatter(*inShadow.T,color='C0')
ax[0].scatter(*notInShadow.T,color='C1')
ax[0].set_xlim(-0.5,0.5)
ax[0].set_ylim(-0.5,0.5)
ax[0].set_xlabel('East (x-axis)')
ax[0].set_ylabel('South (y-axis)')
# area function
shadow_area=np.loadtxt(f'shadow_area_night{night}_lat{lat}.txt')
shadow_area_function=interp1d(range(len(shadow_area)),shadow_area)
ax[1].plot(range(len(shadow_area)),shadow_area_function(range(len(shadow_area))))
ax[1].axvline(x=sun_altaz_index, color='k', label=f'Sun Alt: {sun_alt[sun_altaz_index]} \n Sun Az:{sun_az[sun_altaz_index]} \n Shadow Area: {shadow_area_function(sun_altaz_index)}')
ax[1].set_ylabel('Area $[m^2]$')
ax[1].set_xlabel('Alt-Az Index')
ax[1].legend()
fig.suptitle(f'Antenna Shadow over time at Latitude={lat}')
# plt.savefig(f'./shadow_area/lat{lat}/{sun_altaz_index}.png')
# plt.close(fig)
```
### Creating the Animation GIF
We create all the frames of the animation and save them. Then use ImageMagick to combine them into a GIF.
```python
sun_alt,sun_az=np.loadtxt(f'sun_AltAz_night{night}_lat{lat}.txt', unpack=True)
for sun_altaz_index in range(0,len(sun_alt),20):
fig,ax=plt.subplots(1,2, figsize=(12,6))
# panel shadow plot
inShadow, notInShadow = load_shadow_samples(lat,sun_alt[sun_altaz_index],sun_az[sun_altaz_index])
if len(inShadow)!=0: ax[0].scatter(*inShadow.T,color='C0')
ax[0].scatter(*notInShadow.T,color='C1')
ax[0].set_xlim(-0.5,0.5)
ax[0].set_ylim(-0.5,0.5)
ax[0].set_xlabel('East (x-axis)')
ax[0].set_ylabel('South (y-axis)')
# area function
shadow_area=np.loadtxt(f'shadow_area_night{night}_lat{lat}.txt')
shadow_area_function=interp1d(range(len(shadow_area)),shadow_area)
ax[1].plot(range(len(shadow_area)),shadow_area_function(range(len(shadow_area))))
ax[1].axvline(x=sun_altaz_index, color='k', label=f'Sun Alt: {sun_alt[sun_altaz_index]} \n Sun Az:{sun_az[sun_altaz_index]} \n Shadow Area: {shadow_area_function(sun_altaz_index)}')
ax[1].set_ylabel('Area $[m^2]$')
ax[1].set_xlabel('Alt-Az Index')
ax[1].legend()
fig.suptitle(f'Antenna Shadow over time at Latitude={lat}')
if not os.path.exists(f'./animation/night{night}_lat{lat}'): os.makedirs(f'./animation/night{night}_lat{lat}')
plt.savefig(f'./animation/night{night}_lat{lat}/altaz_index{sun_altaz_index:04d}.png', dpi=96)
plt.close(fig)
```
Here's a jupyter magic script to run the bash commands from within a cell!
```python
cwd=os.getcwd()
print(f'doing lat{lat}')
!convert -delay 8 {cwd}/animation/night_{night}_lat{lat}/*.png {cwd}/night{night}_lat{lat}.gif
print('done')
```
```python
```
| f275813298c5ee0acd9d26b3d1506222ae910c79 | 142,173 | ipynb | Jupyter Notebook | solar_power/shadows.ipynb | lusee-night/notebooks | 688dec8d07f33d67f7891486b48a233525338eb7 | [
"MIT"
] | null | null | null | solar_power/shadows.ipynb | lusee-night/notebooks | 688dec8d07f33d67f7891486b48a233525338eb7 | [
"MIT"
] | null | null | null | solar_power/shadows.ipynb | lusee-night/notebooks | 688dec8d07f33d67f7891486b48a233525338eb7 | [
"MIT"
] | 2 | 2022-03-11T02:17:24.000Z | 2022-03-14T06:01:29.000Z | 271.8413 | 122,756 | 0.914105 | true | 3,691 | Qwen/Qwen-72B | 1. YES
2. YES | 0.893309 | 0.841826 | 0.752011 | __label__eng_Latn | 0.811205 | 0.585505 |
# Chasing the power of optimal length increase
(c) 2021 Tom Röschinger. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT).
***
```julia
using LinearAlgebra, Distributions, Plots, Polynomials, LinearAlgebra, Turing, Printf
# Custom package
using Jevo, Jedi
# Set plotting style
Jedi.default_gr!()
```
┌ Info: Precompiling Jedi [b681c197-c997-42fd-b5bb-d7d7839f617e]
└ @ Base loading.jl:1278
WARNING: using StatsBase.ecdf in module Jedi conflicts with an existing identifier.
┌ Warning: Package Jedi does not have Measures in its dependencies:
│ - If you have Jedi checked out for development and have
│ added Measures as a dependency but haven't updated your primary
│ environment's manifest file, try `Pkg.resolve()`.
│ - Otherwise you may need to report an issue with Jedi
└ Loading Measures into Jedi from project dependency, future warnings for Jedi are suppressed.
Plots.GRBackend()
```julia
@model function fit_exponent(x, y)
sigma ~ truncated(Normal(0, 10^-6), 0, Inf)
a ~ truncated(Normal(0, 1), 0, Inf)
b ~ Normal(0, 1)
_y = (x .* a) .+ b
y ~ MvNormal(_y, sigma)
end
```
fit_exponent (generic function with 1 method)
## 1. Full Gamma Dynamics
First, let's have a look at the full $\Gamma$ dynamics. Therefore, we use the Kimura substitution probability to compute a distribution. Let's write down the rates for a match turning into a mismatch, $u_+(\Gamma, l)$, and the rate for a mismatch turning into a match, $u_-(\Gamma, l)$,
\begin{align}
u_+(\Gamma) &= \left(1 - \frac{\Gamma}{\epsilon l}\right) \left[p(F(\Gamma + \epsilon, l) - F(\Gamma, l) +\frac{\kappa}{N}\right],\\
u_-(\Gamma) &= \frac{\Gamma}{\epsilon l}\frac{1}{1-n} \left[p(F(\Gamma - \epsilon, l) - F(\Gamma, l) +\frac{\kappa}{N}\right],
\end{align}
where $p(s)$ is the Kimura substitution probability. The prefactors determine the probability of a match/mismatch mutating, which depends on the current state of the population. The extra prefactor $\frac{1}{1-n}$ comes from the consideration that a mismatch can mutate to another mismatch, which is not detectable on the level of quantitative traits. A similar factor is absent from the rate turning a match into am mismatch, since every mutation on a match leads to a mismatch.
Then we use these rates to determine the probability distribution over the binding energies,
$$
Q(\Gamma, l) \propto \prod_{\Gamma^\prime=\epsilon}^\Gamma \frac{u_+(\Gamma-\epsilon, l)}{u_-(\Gamma, l)}.
$$
It might be more useful to compute the free fitness $\Psi$ instead, which is simply the log of the distribution,
$$
\Psi(\Gamma, l) \propto \sum_{\Gamma^\prime=\epsilon}^\Gamma \log\left[\frac{u_+(\Gamma-\epsilon, l)}{u_-(\Gamma, l)}\right].
$$
```julia
# Neutral Expectation
γ_0(n) = (n-1)/n
# Binding Threshold
γ_1(l, n, l_0) = γ_0(n) - l_0/l
# Binding probability
pb(γ, l, n, l_0, gap) = 1 / (1 + exp(gap * (l / l_0) * (γ - γ_1(l, n, l_0))))
# Fitness component for functional binding
F_b(γ, l, n, l_0, gap, f0) = f0 * pb(γ, l, n, l_0, gap)
# Fitness component of genomic constraint
F_c(l, l_0, fl) = - fl * l / l_0
# Total Fitness
F(γ, l, n, l_0, gap, f0, fl) = F_b(γ, l, n, l_0, gap, f0) + F_c(l, l_0, fl)
function kimura(s)
if s^2 < 10^(-20)
return 1
elseif s < -10^7
return 0
else
return s/(1 - exp(-s))
end
end
# Substitution rate for trait changes
# (k, l) -> (k + 1, l)
up(γ, l, n, l_0, gap, f0, fl, κ) = (1 - γ) * (kimura(F(γ + 1/l, l, n, l_0, gap, f0, fl)
- F(γ, l, n, l_0, gap, f0, fl)) + κ )
# (k, l) -> (k - 1, l)
um(γ, l, n, l_0, gap, f0, fl, κ) = γ * (1 - (n-2)/(n-1)) * (kimura(F(γ - 1/l, l, n, l_0, gap, f0, fl)
- F(γ, l, n, l_0, gap, f0, fl)) + κ )
# Free Fitness
function Ψ(l, n, l_0, gap, f0, fl, κ)
# Start at neutral γ
γ_i = floor(γ_0(n) * l) / l
return collect(0:1/l:γ_i), push!(Float64[0], cumsum([log(up(γ - 1/l, l, n, l_0, gap, f0, fl, κ) / um(γ, l, n, l_0, gap, f0, fl, κ)) for γ in 1/l:1/l:γ_i])...)
end
```
Ψ (generic function with 1 method)
Using the free fitness, we can also compute the resulting distribution.
```julia
# Probability distribution (from free Fitness)
function Q(l, n, l_0, gap, f0, fl, κ)
x, y = Ψ(l, n, l_0, gap, f0, fl, κ)
return x, exp.(y)
end
```
Q (generic function with 1 method)
Let's make some plots. Therefore we first determine some parameters.
```julia
gap = 10
l_0 = 20
f0 = 40l_0
fl = 1l_0
n = 4;
```
First we look at the fitness landscape and the free fitness as a sanity check, to determine that we actually implemented them correctly.
```julia
x, y = Ψ(100, n, l_0, gap, f0, fl, 0)
f = [F(γ, 100, n, l_0, gap, f0, fl) for γ in x]
println(x[argmax(y)])
plot(x, (y .- minimum(y))/(maximum(y)-minimum(y)), label="Free Fitness")
plot!(x, (f .- minimum(f))/(maximum(f)-minimum(f)), color="orange", label="Fitness")
```
0.44
Looking good so far. Let's have a look at a couple at a distribution and how it changes with some non-equilibrium.
```julia
x1, y1 = Q(100, n, l_0, gap, f0, 0, 0)
x2, y2 = Q(100, n, l_0, gap, f0, 0, 20)
plot(x1, y1/sum(y1), title="Distributions at l=100", titlefontsize=12, label="κ = 0")
plot!(x2, y2/sum(y2), label="κ = 20")
```
We can nicely see how the distribution moves to less specificity and already gets a little mass at the neutral peak, which we have to consider later. Let's have a look at how the mean fitness looks for increasing length.
```julia
function mean_fitness(l, n, l_0, gap, f0, fl, κ)
x, y = Q(l, n, l_0, gap, f0, fl, κ)
y = y/sum(y)
sum([F(γ, l, n, l_0, gap, f0, fl) for γ in x] .* y)
end
p_mf_0 = [mean_fitness(l, n, l_0, gap, f0, fl, 0) for l in 40:200]
p_mf_20 = [mean_fitness(l, n, l_0, gap, f0, fl, 20) for l in 40:200]
plot(40:200, p_mf_0, label="κ=0", title="Mean Fitness")
plot!(40:200, p_mf_20, label="κ=20")
```
As we expected, for increased non-equilibrium, it is not possible for some lengths to adapt. Now let's find the optimal length. Therefore we first compute some stuff. We start of with writing functions to compute the new intensive energy, where we have to consider the discreteness of the sequence, therefore the intensive energy changes for all types of length mutations (match/mismatch addition/removal). After computing the new intensive binding energy, we compute the selection coefficient of a length mutation at a given $\gamma$.
```julia
# (k,l) -> (k+1,l+1)
γ_pp(γ, l) = min((γ + 1/l)/(1 + 1/l), 1)
# (k,l) -> (k,l+1)
γ_pm(γ, l) = max(γ /(1 + 1/l), 0)
# (k,l) -> (k,l-1)
γ_mp(γ, l) = min(γ/(1 - 1/l), 1)
# (k,l) -> (k-1,l-1)
γ_mm(γ, l) = max((γ - 1/l)/(1 - 1/l), 0)
# selection coeffcients for length increase and decrease
# (k,l) -> (k+1,l+1)
s_pp(γ, l, n, l_0, gap, f0, fl) =
F(γ_pp(γ, l), l + 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
# (k,l) -> (k,l+1)
s_pm(γ, l, n, l_0, gap, f0, fl) =
F(γ_pm(γ, l), l + 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
# (k,l) -> (k-1,l-1)
s_mm(γ, l, n, l_0, gap, f0, fl) =
F(γ_mm(γ, l), l - 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
# (k,l) -> (k,l-1)
s_mp(γ, l, n, l_0, gap, f0, fl) =
F(γ_mp(γ, l), l - 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
```
s_mp (generic function with 1 method)
Now that we can compute the selection coefficient, we compute the length substitution rates. For that we have to consider the difference of mutations that increase the length, where we add a random position, hence a match with probability $1/n$, and mutations where we decrease length, where the probability of removing a match depends on the intensive energy of the site, $(1-\gamma)$. The total length increase and decrease mutation rates are both averaged over the steady state distribution at that length, which comes from the assumption of separation of time scales.
```julia
function p_l_th(κ, n, l_0, gap, f0, fl, l_range=collect(40:200))
vp = []
vm = []
for l in l_range
x, y = Q(l, n, l_0, gap, f0, fl, κ)
y = y/sum(y)
s_mm_p = [s_pp(γ, l, n, l_0, gap, f0, fl) for γ in x]
s_m_p = [s_pm(γ, l, n, l_0, gap, f0, fl) for γ in x]
s_mm_m = [s_mm(γ, l, n, l_0, gap, f0, fl) for γ in x]
s_m_m = [s_mp(γ, l, n, l_0, gap, f0, fl) for γ in x]
v_mm_p = [(n-1)/n * kimura(s) for s in s_mm_p]
v_m_p = [1/n * kimura(s) for s in s_m_p]
v_mm_m = [ γ * kimura(s) for (s, γ) in zip(s_mm_m, x)]
v_m_m = [(1 - γ) * kimura(s) for (s, γ) in zip(s_m_m, x)]
push!(vp, sum((v_m_p .+ v_mm_p) .* y))
push!(vm, sum((v_mm_m .+ v_m_m) .* y))
end
return vp, vm, push!([0.01], (cumsum([log(vp[i]/vm[i+1]) for i in 1:length(vp)-1]).+0.01)...)
end
```
p_l_th (generic function with 2 methods)
Now let's have a look at the length increase and decrease substitution rates and the resulting effective fitness.
```julia
l_arr = collect(30:150)
vp, vm, p_l = p_l_th(0, n, l_0, gap, f0, fl, l_arr)
p1 = plot(l_arr, vp, label="vp", legend=:topright, title="κ=0", xlabel="l")
plot!(p1, l_arr, vm, label="vm")
p2 = plot(l_arr, p_l ./ maximum(p_l), label="Effective Fitness", legend=:bottomleft, title="Fitnesses (both scaled)", xlabel="l")
mf = [mean_fitness(l, n, l_0, gap, f0, fl, 0) for l in l_arr]
plot!(p2, l_arr, (mf .- mf[1]) ./ (maximum(mf)-mf[1]) , label="Mean Fitness")
plot([p1, p2]..., size=(700, 300))
```
We can nicely see how the effective fitness is shifted to higher lengths due to the asymmetry of length mutations. When we consider optimal lengths in non-equilibrium, we have to consider that for smaller lengths sites will not able to adapt at all, and therefore be non-functional. We want to exclude these non-functional sites for now. Hence we use a function to find the minimal length of functional sites.
```julia
function min_func_length(n, l_0, gap, f0, fl, κ, l_arr=collect(20:200))
gamma_max_list = zeros(length(l_arr))
for (i, l) in enumerate(l_arr)
x, y = Q(l, n, l_0, gap, f0, fl, κ)
gamma_max_list[i] = x[argmax(y)]
end
iterator = [[x, y] for (x, y) in zip(gamma_max_list, l_arr)]
return findfirst( x -> x[1] < γ_1(x[2], n, l_0), iterator) + l_arr[1]-1
end
lmin_arr = [min_func_length(n, l_0, gap, f0, fl, κ) for κ in 0:20]
κ_arr = collect(0:20)
plot(
κ_arr .+ 1,
lmin_arr,
xlabel="κ",
ylabel="l_m",
title="Minimal Functional Length",
linewidth=2
)
```
This looks nicely linear. Now let's look at the length mutation rates and the effective fitness for high non-equilibrium.
```julia
lm = min_func_length(n, l_0, gap, f0, fl, 20)
l_arr = collect(lm:200)
vp, vm, p_l = p_l_th(20, n, l_0, gap, f0, fl, l_arr)
p1 = plot(l_arr, vp, label="vp", legend=:topright, title="κ=20", xlabel="l")
plot!(p1, l_arr, vm, label="vm")
p2 = plot(l_arr, p_l ./ maximum(p_l), label="Effective Fitness", legend=:bottomleft, title="Both scaled", xlabel="l")
mf = [mean_fitness(l, n, l_0, gap, f0, fl, 20) for l in l_arr]
plot!(p2, l_arr, (mf .- mf[1]) ./ (maximum(mf)-mf[1]) , label="Mean Fitness")
plot([p1, p2]..., size=(700, 300))
```
Now we have all the tools to compute the optimal length as a function of non-equilibrium.
```julia
function optimal_length_full(n, l_0, gap, f0, fl, κ_arr=collect(0:20))
l_opt_th = []
for κ in κ_arr
lm = min_func_length(n, l_0, gap, f0, fl, κ)
l_arr = collect(lm:250)
p_l = p_l_th(κ, n, l_0, gap, f0, fl, l_arr)[3]
l_opt = argmax(p_l) + lm - 1
push!(l_opt_th, l_opt)
end
return l_opt_th
end
function plot_opt_length_full(n, l_0, gap, f0, fl, κ_arr=collect(0:20))
l_opt_lin = optimal_length_full(n, l_0, gap, f0, fl, κ_arr)
p_lin = plot(
κ_arr,
l_opt_lin,
xlabel="κ",
ylabel="l_opt",
title="f0=$(f0/l_0)l_0, fl=$(fl/l_0)l_0"
)
#=
plot!(p_lin,
κ_arr[1:100],
(1 .+κ_arr[1:100]) .^(1/2) * l_opt_lin[1],
color="gray",
linestyle=:dash
)
=#
κ_arr_log = exp.(range(0, length=length(κ_arr), stop=log(κ_arr[end] + 1))) .- 1
l_opt_log = optimal_length_full(n, l_0, gap, f0, fl, κ_arr_log)
logx = log.(1 .+ κ_arr_log)
logy = log.(l_opt_log)
chn = sample(fit_exponent(logx[end-15:end], logy[end-15:end]), NUTS(0.75), 5_000)
p_log = scatter(
log.(κ_arr_log .+ 1),
log.(l_opt_log),
legend=:topleft,
xlabel="log(1+κ)",
ylabel="log(l_opt)"
)
plot!(
p_log,
log.(κ_arr_log .+ 1),
log.((κ_arr_log .+ 1).^(mean(chn[:a]))) .+ mean(chn[:b]),
linestyle=:dash,
title=@sprintf "LogLog with slope %.5f" mean(chn[:a])
)
return plot([p_lin, p_log]..., size=(700, 300))
end
```
plot_opt_length_full (generic function with 2 methods)
```julia
plot_opt_length_full(n, l_0, gap, 40l_0, l_0, collect(0:30))
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 6.103515625e-6
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:02[39m
```julia
plot_opt_length_full(n, l_0, gap, 200l_0, 1.6l_0, collect(0:40))
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 6.103515625e-6
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:04[39m
```julia
plot_opt_length_full(n, l_0, gap, 300l_0, 2l_0, collect(0:40))
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 3.814697265625e-7
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:03[39m
# Only use the maximum of the energy distribution
Now we want to compare using the full distribution as we did above to simply using the maximum point of the distribution to compare. Therefore we first compute the maximum of the distribution.
```julia
function γ_star(l, n, l_0, gap, f0, fl, κ)
x, y = Ψ(l, n, l_0, gap, f0, fl, κ)
max_ind = argmax(y)
if max_ind == length(x)
return x[end]
elseif max_ind == 1
return x[1]
else
f = Polynomials.fit(x[max_ind-1:max_ind+1], y[max_ind-1:max_ind+1], 2)
return -f[1]/2f[2]
end
end
```
γ_star (generic function with 1 method)
```julia
# (k,l) -> (k+1,l+1)
γ_pp(γ, l) = min((γ + 1/l)/(1 + 1/l), 1)
# (k,l) -> (k,l+1)
γ_pm(γ, l) = max(γ /(1 + 1/l), 0)
# (k,l) -> (k,l-1)
γ_mp(γ, l) = min(γ/(1 - 1/l), 1)
# (k,l) -> (k-1,l-1)
γ_mm(γ, l) = max((γ - 1/l)/(1 - 1/l), 0)
# selection coeffcients for length increase and decrease
# (k,l) -> (k+1,l+1)
s_pp(γ, l, n, l_0, gap, f0, fl) =
F(γ_pp(γ, l), l + 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
# (k,l) -> (k,l+1)
s_pm(γ, l, n, l_0, gap, f0, fl) =
F(γ_pm(γ, l), l + 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
# (k,l) -> (k-1,l-1)
s_mm(γ, l, n, l_0, gap, f0, fl) =
F(γ_mm(γ, l), l - 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
# (k,l) -> (k,l-1)
s_mp(γ, l, n, l_0, gap, f0, fl) =
F(γ_mp(γ, l), l - 1, n, l_0, gap, f0, fl) -
F(γ, l, n, l_0, gap, f0, fl)
# substitution rates of length increase and decrease
v_pp(γ, l, n, l_0, gap, f0, fl) = γ_0(n) * kimura(s_pp(γ, l, n, l_0, gap, f0, fl))
v_pm(γ, l, n, l_0, gap, f0, fl) = (1 - γ_0(n)) * kimura(s_pm(γ, l, n, l_0, gap, f0, fl))
v_plus(γ, l, n, l_0, gap, f0, fl) = v_pp(γ, l, n, l_0, gap, f0, fl) + v_pm(γ, l, n, l_0, gap, f0, fl)
v_plus_star(l, n, l_0, gap, f0, fl, κ) = v_plus(γ_star(l, n, l_0, gap, f0, fl, κ), l, n, l_0, gap, f0, fl)
v_mm(γ, l, n, l_0, gap, f0, fl) = γ * kimura(s_mm(γ, l, n, l_0, gap, f0, fl))
v_mp(γ, l, n, l_0, gap, f0, fl) = (1 - γ) * kimura(s_mp(γ, l, n, l_0, gap, f0, fl))
v_minus(γ, l, n, l_0, gap, f0, fl) = v_mm(γ, l, n, l_0, gap, f0, fl) + v_mp(γ, l, n, l_0, gap, f0, fl)
v_minus_star(l, n, l_0, gap, f0, fl, κ) = v_minus(γ_star(l, n, l_0, gap, f0, fl, κ), l, n, l_0, gap, f0, fl)
# Effective Fitness
F_eff(l, n, l_0, gap, f0, fl, κ, lmin) = sum([log(v_plus_star(l_, n, l_0, gap, f0, fl, κ)/
v_minus_star(l_+1, n, l_0, gap, f0, fl, κ)) for l_ in lmin:l])
γ_star_list(n, l_0, gap, f0, fl, κ, l_max=200) = [l > 10 ? γ_star(l, n, l_0, gap, f0, fl, κ) : missing for l in 1:l_max]
function l_star(n, l_0, gap, f0, fl, κ, l_max=200)
gstarlist = γ_star_list(n, l_0, gap, f0, fl, κ, l_max)
funct_pos_list = []
for l in 1:l_max
if (~ismissing(gstarlist[l])) && (gstarlist[l] < γ_1(l, n, l_0))
push!(funct_pos_list, l)
end
end
γ_fun_list = gstarlist[funct_pos_list]
lm = funct_pos_list[1]
f_list = zeros(l_max)
f_list[lm+1:l_max] = cumsum([log(v_plus(gstarlist[i-1], i-1, n, l_0, gap, f0, fl)/
v_minus(gstarlist[i], i, n, l_0, gap, f0, fl)) for i in lm+1:l_max])
max_ind = argmax(f_list)
fmax = f_list[max_ind]
lst = max_ind
return gstarlist[max_ind], γ_fun_list, f_list, fmax, lst, lm
end
```
l_star (generic function with 2 methods)
```julia
function p_l_star(κ, n, l_0, gap, f0, fl, l_range=collect(20:250))
vp = []
vm = []
for l in l_range
γ = γ_star(l, n, l_0, gap, f0, fl, κ)
s_mm_p = s_pp(γ, l, n, l_0, gap, f0, fl)
s_m_p = s_pm(γ, l, n, l_0, gap, f0, fl)
s_mm_m = s_mm(γ, l, n, l_0, gap, f0, fl)
s_m_m = s_mp(γ, l, n, l_0, gap, f0, fl)
v_mm_p = γ_0(n) * kimura(s_mm_p)
v_m_p = (1 - γ_0(n)) * kimura(s_m_p)
v_mm_m = γ * kimura(s_mm_m)
v_m_m = (1 - γ) * kimura(s_m_m)
push!(vp, v_m_p + v_mm_p)
push!(vm, v_mm_m .+ v_m_m)
end
return vp, vm, push!([0.01], (cumsum([log(vp[i-1]/vm[i]) for i in 2:length(vp)]).+0.01)...)
end
```
p_l_star (generic function with 2 methods)
```julia
κ = 40
lm = min_func_length(n, l_0, gap, f0, fl, κ)
plot(lm:200, p_l_star(κ, n, l_0, gap, f0, fl, collect(lm:200))[3])
plot!(twinx(), lm:200, [F_eff(l, n, l_0, gap, f0, fl, κ, lm) for l in lm:200], color = "orange")
```
```julia
function optimal_length_star(n, l_0, gap, f0, fl, κ_arr=collect(0:20))
l_opt_star = []
for κ in κ_arr
push!(l_opt_star, l_star(n, l_0, gap, f0, fl, κ)[end-1])
end
return l_opt_star
end
function plot_opt_length_star(n, l_0, gap, f0, fl, κ_arr=collect(0:20))
l_opt_lin = optimal_length_star(n, l_0, gap, f0, fl, κ_arr)
p_lin = plot(
κ_arr,
l_opt_lin,
xlabel="κ",
ylabel="l_opt",
title="f0=$(f0/l_0)l_0, fl=$(fl/l_0)l_0"
)
κ_arr_log = exp.(range(0, length=length(κ_arr), stop=log(κ_arr[end] + 1))) .- 1
l_opt_log = optimal_length_star(n, l_0, gap, f0, fl, κ_arr_log)
logx = log.(1 .+ κ_arr_log)
logy = log.(l_opt_log)
chn = sample(fit_exponent(logx[end-15:end], logy[end-15:end]), NUTS(0.75), 5_000)
p_log = scatter(
log.(κ_arr_log .+ 1),
log.(l_opt_log),
legend=:topleft,
xlabel="log(1+κ)",
ylabel="log(l_opt)"
)
plot!(
p_log,
log.(κ_arr_log .+ 1),
log.((κ_arr_log .+ 1).^(mean(chn[:a]))) .+ mean(chn[:b]),
linestyle=:dash,
title=@sprintf "LogLog with slope %.3f" mean(chn[:a])
)
return plot([p_lin, p_log]..., size=(900, 450))
end
```
plot_opt_length_star (generic function with 2 methods)
```julia
plot_opt_length_star(n, l_0, gap, 40l_0, l_0, collect(0:40))
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 3.0517578125e-6
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:02[39m
```julia
plot_opt_length_star(n, l_0, gap, 200l_0, 1.6fl, collect(0:40))
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 1.220703125e-5
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:02[39m
```julia
plot_opt_length_star(n, l_0, gap, 300l_0, 2fl, collect(0:40))
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 1.52587890625e-6
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:02[39m
## Gamma Star
```julia
f0=100l_0
fl=1.6l_0
κ_arr = collect(0:30)
κ_log = exp.(range(0, length=length(κ_arr), stop=log(κ_arr[end] + 1))) .- 1
p_lin = plot(
κ_arr,
[l_star(n, l_0, gap, f0, fl, κ)[1] for κ in κ_arr],
xlabel="κ",
ylabel="γ_star"
)
p_log = plot(
log.(1 .+ κ_log),
[log(l_star(n, l_0, gap, f0, fl, κ)[1]) for κ in κ_log],
xlabel="log(1+κ)",
ylabel="log(sstar)"
)
y = 0:0.01:1
x = 10:150
ff(x, y) = begin
F(y, x, n, l_0, gap, f0, fl)/l_0
end
X = repeat(reshape(x, 1, :), length(y), 1)
Y = repeat(y, 1, length(x))
Z = map(ff, X, Y)
p1 = contour(x, y, ff, fill = true, color=:viridis, levels=20)
gs = [l_star(n, l_0, gap, f0, fl, κ)[1] for κ in κ_arr]
ls = [l_star(n, l_0, gap, f0, fl, κ)[end-1] for κ in κ_arr]
scatter!(p1, ls, gs)
plot([p_lin, p_log, p1]..., size=(1100, 300), layout=(1, 3))
```
## S_Star
```julia
sstar(l, n, l_0, gap, f0, fl, κ) = F(γ_star(l, n, l_0, gap, f0, fl, κ) - 1/l, l, n, l_0, gap, f0, fl) - F(γ_star(l, n, l_0, gap, f0, fl, κ), l, n, l_0, gap, f0, fl)
```
sstar (generic function with 1 method)
```julia
function plot_sstar(n, l_0, gap, f0, fl, κ_arr=collect(0:30))
l_opt = optimal_length_star(n, l_0, gap, f0, fl, κ_arr)
star = [sstar(l, n, l_0, gap, f0, fl, κ) for (l, κ) in zip(l_opt, κ_arr)]
κ_log = exp.(range(0, length=length(κ_arr), stop=log(κ_arr[end] + 1))) .- 1
star_log = [sstar(l, n, l_0, gap, f0, fl, κ) for (l, κ) in zip(l_opt, κ_log)]
logx = log.(1 .+ κ_log)
logy = log.(star_log)
chn = sample(fit_exponent(logx[2:end], logy[2:end]),NUTS(0.65), 3_000)
p_lin = plot(
κ_arr,
star,
xlabel="κ",
ylabel="s_star",
title="f0=$(f0/l_0)l_0, fl=$(fl/l_0)l_0"
)
p_log = plot(log.(1 .+ κ_log), log.(star_log))
plot!(
p_log,
log.(1 .+ κ_log),
log.((1 .+ κ_log) .^(mean(chn[:a]))) .+ mean(chn[:b]),
linestyle=:dash,
xlabel="log(1+κ)",
ylabel="log(s_star)",
title=@sprintf "LogLog with slope %.3f" mean(chn[:a])
)
return plot([p_lin, p_log]..., size=(900, 450))
end
```
plot_sstar (generic function with 2 methods)
```julia
plot_sstar(n, l_0, gap, 40l_0, l_0)
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 7.62939453125e-7
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, true, true, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
```julia
plot_sstar(n, l_0, gap, 200l_0, 1.6l_0)
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 6.103515625e-6
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
```julia
plot_sstar(n, l_0, gap, 300l_0, 2l_0)
```
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/tomroschinger/.julia/packages/AdvancedHMC/MIxdK/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 3.0517578125e-6
└ @ Turing.Inference /Users/tomroschinger/.julia/packages/Turing/O1Pn0/src/inference/hmc.jl:195
[32mSampling: 100%|█████████████████████████████████████████| Time: 0:00:00[39m
```julia
```
| 8fd90ded242cab8b9747ad1ecb066b053b148064 | 779,053 | ipynb | Jupyter Notebook | notebooks/old_notebooks/3_l_opt_scaling.ipynb | tomroesch/complexity_evolution | 428905cf179b9fcfd2be3c1b5bb24e324241eb63 | [
"MIT"
] | null | null | null | notebooks/old_notebooks/3_l_opt_scaling.ipynb | tomroesch/complexity_evolution | 428905cf179b9fcfd2be3c1b5bb24e324241eb63 | [
"MIT"
] | null | null | null | notebooks/old_notebooks/3_l_opt_scaling.ipynb | tomroesch/complexity_evolution | 428905cf179b9fcfd2be3c1b5bb24e324241eb63 | [
"MIT"
] | null | null | null | 99.103549 | 8,840 | 0.627907 | true | 18,722 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.672332 | 0.536646 | __label__eng_Latn | 0.529027 | 0.085139 |
# 1-D Convection-Diffusion equation
In this tutorial, we consider the **1D** convection-diffusion equation
$$
\frac{\partial u}{\partial t} + c \partial_x u - \nu \frac{\partial^2 u}{\partial x^2} = 0
$$
```python
# needed imports
from numpy import zeros, ones, linspace, zeros_like
from matplotlib.pyplot import plot, show
%matplotlib inline
```
```python
# Initial condition
import numpy as np
u0 = lambda x: np.exp(-(x-.5)**2/.05**2)
grid = linspace(0., 1., 401)
u = u0(grid)
plot(grid, u) ; show()
```
### Time scheme
$$\frac{u^{n+1}-u^n}{\Delta t} + c \partial_x u^{n+1} - \nu \partial_{xx} u^{n+1} = 0 $$
$$ \left(I + c \Delta t \partial_x - \nu \Delta t \partial_{xx} \right) u^{n+1} = u^n $$
### Weak formulation
$$
\langle v, u^{n+1} \rangle - c \Delta t ~ \langle \partial_x v, u^{n+1} \rangle + \nu \Delta t ~ \langle \partial_x v, \partial_x u^{n+1} \rangle = \langle v, u^n \rangle
$$
expending $u^n$ over the fem basis, we get the linear system
$$A U^{n+1} = M U^n$$
where
$$
M_{ij} = \langle b_i, b_j \rangle
$$
$$
A_{ij} = \langle b_i, b_j \rangle - c \Delta t ~ \langle \partial_x b_i, b_j \rangle + \nu \Delta t ~ \langle \partial_x b_i, \partial_x b_j \rangle
$$
## Abstract Model using SymPDE
```python
from sympde.core import Constant
from sympde.expr import BilinearForm, LinearForm, integral
from sympde.topology import ScalarFunctionSpace, Line, element_of, dx
from sympde.topology import dx1 # TODO: this is a bug right now
```
```python
# ... abstract model
domain = Line()
V = ScalarFunctionSpace('V', domain)
x = domain.coordinates
u,v = [element_of(V, name=i) for i in ['u', 'v']]
c = Constant('c')
nu = Constant('nu')
dt = Constant('dt')
# bilinear form
# expr = v*u - c*dt*dx(v)*u # TODO BUG not working
expr = v*u - c*dt*dx1(v)*u + nu*dt*dx1(v)*dx1(u)
a = BilinearForm((u,v), integral(domain , expr))
# bilinear form for the mass matrix
expr = u*v
m = BilinearForm((u,v), integral(domain , expr))
# linear form for initial condition
from sympy import exp
expr = exp(-(x-.5)**2/.05**2)*v
l = LinearForm(v, integral(domain, expr))
```
## Discretization using Psydac
```python
from psydac.api.discretization import discretize
```
```python
c = 1 # wavespeed
nu = 0.01 # viscosity
T = 0.2 # T final time
dt = 0.001
niter = int(T / dt)
degree = [3] # spline degree
ncells = [64] # number of elements
```
```python
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=ncells, comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=degree)
# Discretize the bilinear forms
ah = discretize(a, domain_h, [Vh, Vh])
mh = discretize(m, domain_h, [Vh, Vh])
# Discretize the linear form for the initial condition
lh = discretize(l, domain_h, Vh)
```
```python
# assemble matrices and convert them to scipy
M = mh.assemble().tosparse()
A = ah.assemble(c=c, nu=nu, dt=dt).tosparse()
# assemble the rhs and convert it to numpy array
rhs = lh.assemble().toarray()
```
```python
from scipy.sparse.linalg import cg, gmres
```
```python
# L2 projection of the initial condition
un, status = cg(M, rhs, tol=1.e-8, maxiter=5000)
```
```python
from simplines import plot_field_1d
plot_field_1d(Vh.knots[0], Vh.degree[0], un, nx=401)
```
```python
for i in range(0, niter):
b = M.dot(un)
un, status = gmres(A, b, tol=1.e-8, maxiter=5000)
```
```python
plot_field_1d(Vh.knots[0], Vh.degree[0], un, nx=401)
```
```python
```
```python
```
| bb0f5620cefda6a003b7ddc4cfc2fe5d4e2a9d56 | 37,596 | ipynb | Jupyter Notebook | lessons/Chapter2/03_convection_diffusion_1d.ipynb | ratnania/IGA-Python | a9d7aa9bd14d4b3f1b12cdfbc2f9bf3c0a68fff4 | [
"MIT"
] | 6 | 2018-04-27T15:40:17.000Z | 2020-08-13T08:45:35.000Z | lessons/Chapter2/03_convection_diffusion_1d.ipynb | GabrielJie/IGA-Python | a9d7aa9bd14d4b3f1b12cdfbc2f9bf3c0a68fff4 | [
"MIT"
] | 4 | 2021-06-08T22:59:19.000Z | 2022-01-17T20:36:56.000Z | lessons/Chapter2/03_convection_diffusion_1d.ipynb | GabrielJie/IGA-Python | a9d7aa9bd14d4b3f1b12cdfbc2f9bf3c0a68fff4 | [
"MIT"
] | 4 | 2018-10-06T01:30:20.000Z | 2021-12-31T02:42:05.000Z | 116.757764 | 10,652 | 0.872992 | true | 1,156 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92944 | 0.828939 | 0.770449 | __label__eng_Latn | 0.642376 | 0.628344 |
```python
from resources.workspace import *
from IPython.display import display
from scipy.integrate import odeint
import copy
%matplotlib inline
```
# Lyapunov exponents and eigenvalues
A **Lypunov exponent** can be understood loosely as a kind of generalized eigenvalue for time-depenent linear transformations, or for the linearization of a nonlinear evolution.
What do eigenvalues tell us about a matrix and why might the above results seem intuitive?
Consider the equation for the <em>evolution</em> of the pertubations <span style='font-size:1.25em'>$\boldsymbol{\delta}^i_k$</span></a>. We can write,
<h3>
$$\begin{align}
& \boldsymbol{\delta}_k^i = \mathbf{x}_k^c - \mathbf{x}_k^i \\
\Rightarrow & \dot{\boldsymbol{\delta}}_k^i = f(\mathbf{x}_k^c) - f(\mathbf{x}_k^i).
\end{align}$$
</h3>
But for small perturbations, we can reasonably make an approximation with a Taylor expansion,
<h3>
$$\begin{align}
f(\mathbf{x}_k^c) - f(\mathbf{x}_k^i) \approx \nabla f\rvert_{\mathbf{x}c} \boldsymbol{\delta}^i_k, & &
\end{align}$$
</h3>
where the term,
<h2>
$$f\rvert_{\mathbf{x}c}$$
</h2>
is the gradient with respect to the state variables, i.e., the **[Jacobian matrix](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant)**, evaluated at the control trajectory.
This means that for small perturbations, the evolution is well approximated by the linear Jacobian equations, and we can think of these linear equations having some kind of generalized eigenvalues, describing the invariant (exponential) growth and decay rates for the system.
#### The power method
The method of breeding errors above is conceptually very similar to the classical [power method](https://en.wikipedia.org/wiki/Power_iteration) for finding the leading eigenvalue of a diagonalizable matrix:
* Suppose <span style='font-size:1.25em'>$\mathbf{M}\in\mathbb{R}^{n\times n}$</span> is a diagonalizable matrix, with eigenvalues,
<h3>
$$
\rvert \mu_1 \rvert > \rvert\mu_2\rvert \geq \cdots \geq \rvert\mu_n\rvert,
$$
</h3>
i.e., <span style='font-size:1.25em'>$\mathbf{M}$</span> has a single eigenvalue of magnitude greather than all its others.
* Let <span style='font-size:1.25em'>$\mathbf{v}_0 \in \mathbb{R}^n$</span> be a randomly selected vector, with respect to the Gaussian distribution on <span style='font-size:1.25em'>$\mathbb{R}^n$</span>.
* We define the algorithm,
<h3>
$$\begin{align}
\mathbf{v}_{k+1} \triangleq \frac{\mathbf{M} \mathbf{v}_k}{ \left\rvert \mathbf{M} \mathbf{v}_k\right\rvert} & &
\widehat{\mu}_{k+1} \triangleq \mathbf{v}_{k+1}^{\rm T} \mathbf{M} \mathbf{v}_{k+1}
\end{align}$$
</h3>
as the power method.
It is easy to verify that with probability one, the sequence <span style='font-size:1.25em'>$\widehat{\mu}_k$</span> converges to the dominant eigenvalue, <span style='font-size:1.25em'>$\mu_1$</span>, and <span style='font-size:1.25em'>$\mathbf{v}_k$</span> converges to an eigenvector for the dominant eigenvalue.
**Exc 4.20**: Fill in the code below to write an algorithm for the power method.
```python
def power_method(M, v, number_iterations):
"""takes a diagonalizable matrix M and returns approximations for the leading eigenvector/eigenvalue"""
for i in range(number_iterations):
### fill in missing lines here
return v, mu
```
```python
# Example solution
# show_answer('power_method')
```
**Exc 4.22**: Test your solution to **Exc 4.20**. Use the code and slider below to study the rate of convergence. In this case, the matrix will have eigenvalues
<h3>$$\begin{align}
\left\{r^i : \hspace{2mm} i =0, 1, 2, \hspace{2mm} \text{and} \hspace{2mm} r\in(1,2]\right\}
\end{align}$$</h3>
The parameter <span style='font-size:1.25em'>$k$</span> defines how many iterations of the power method are computed. How does the value <span style='font-size:1.25em'>$r$</span> affect the number of iterations necessary to reach convergence?
```python
def animate_power_convergence_rate(k=1, r=1.5):
# We define a well conditioned matrix M, depending on the ratio of the eigenvalues
M = array([r ** i for i in range(3)])
M = np.diag(M)
e_3 = array([0, 0, 1])
# define a random initial condition
np.random.seed(0)
v = randn(3)
v = v / sqrt(v.T @ v)
# and storage for the series of approximations
v_hist = zeros(k+1)
v_hist[0] = e_3.T @ v
mu_hist = zeros(k+1)
mu_hist[0] = v.T @ M @ v
# for the number of iterations k, return the power method approximation
for it in range(1,k+1):
np.random.seed(0)
v, mu = power_method(M, v, it)
v_hist[it] = np.arccos(e_3.T @ v)
mu_hist[it] = mu
# PLOTTING
fig = plt.figure(figsize=(16,8))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)
ax1.plot(range(0,k+1), v_hist)
ax2.plot(range(0,k+1), mu_hist)
ax1.set_ybound([0,1.05])
ax2.set_ybound([.9,4])
t_scl = np.floor_divide(k+1, 10)
ax1.set_xticks(range(0, k+1, t_scl + 1))
ax2.set_xticks(range(0, k+1, t_scl + 1))
ax1.text(0, 1.07, r'Angle between $\mathbf{v}_k$ and eigenvector', size=20)
ax2.text(0, 4.05, r'Value of $\mu_k$', size=20)
ax1.tick_params(
labelsize=20)
ax2.tick_params(
labelsize=20)
plt.show()
w = interactive(animate_power_convergence_rate, k=(1,15), r=(1.05,2, .05))
w
```
<b>Exc 4.24.a </b>: Suppose the power method is performed on a generic diagonalizable matrix <span style='font-size:1.25em'>$\mathbf{M}\in\mathbb{R}^{n\times n}$</span>, with eigenvalues
<h3>$$\begin{align}
\rvert \mu_1 \rvert > \rvert\mu_2 \rvert\geq \cdots \geq \rvert\mu_n \rvert,
\end{align}$$</h3>
with a randomly selected initial vector <span style='font-size:1.25em'>$\mathbf{v}_0$</span>, with respect to the Gaussian distribution on <span style='font-size:1.25em'>$\mathbb{R}^n$</span>.
Can you conjecture what is the order of convegence for the sequences <span style='font-size:1.25em'>$\mathbf{v}_k$</span> and <span style='font-size:1.25em'>$\widehat{\mu}_k$</span>?
**Hint**: the rate depends on the eigenvalues.
**Exc 4.42.b***: Prove the rate of convergence.
```python
# Answer
# show_answer('power_method_convergence_rate')
```
<b>Exc 4.28* </b>: We have brushed over why the algorithm described above converges with *probability one*, can you prove why this is the case?
```python
# Answer
# show_answer('probability_one')
```
<b>Exc 4.30.a </b>: Let <span style='font-size:1.25em'>$\widehat{\mu}_k$</span> be defined as in **Exc 4.24**. Suppose we define a sequence of values,
<h3>$$\begin{align}
\widehat{\lambda}_T = \frac{1}{T} \sum_{k=1}^T\log\left(\rvert \widehat{\mu}_k\right \rvert).
\end{align}$$</h3>
Answer the following:
<ol>
<li> Can you conjecture what <span style='font-size:1.25em'>$\widehat{\lambda}_T$</span> converges to as <span style='font-size:1.25em'>$T \rightarrow \infty$</span>?
**Hint**: Use the fact that <span style='font-size:1.25em'>$\widehat{\mu}_k \rightarrow \mu_1$</span> as <span style='font-size:1.25em'>$k \rightarrow \infty$</span></li>
<li> Suppose we define the Lyapunov exponents as the log-average growth rates of the matrix <span style='font-size:1.25em'>$\mathbf{M}$</span>. What can you guess about the relationship between the eigenvalues and the Lyapunov exponents of the matrix <span style='font-size:1.25em'>$\mathbf{M}$</span>?</li>
<ol>
<b>Exc 4.30.b*</b>: Prove that the limit
<h3>$$\begin{align}
\lim_{T \rightarrow \infty} \widehat{\lambda}_T
\end{align}$$</h3>
exists, and what quantity it converges to.
```python
# Answers
# show_answer('lyapunov_exp_power_method')
```
#### The QR algorithm
The power method is an intuitive method for finding the dominant eigenvalue for a special class of matrices. However, we generally want to find directions that may also be growing, though more slowly than the dominant direction.
Intuitively, if we are tracking a control trajectory with data assimilation and we corrected the forecast errors only in the direction of dominant error growth, we may still lose track of the control trajectory, only it would be more slowly than the dominant rate of growth.
There is a simple generalization of the power method for finding higher dimensional subspaces. We may consider *separating* perturbations into directions that grow at different rates. One easy way to perform this is to construct a *moving frame* in the span of the perturbations. If there is only one perturbation, then the power method constructs precisely a 1-dimensional moving frame, with a vector that is always of norm 1.
If there are two perturbations we can construct a moving frame in the span of the perturbations with a [Gram-Schmidt](https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) step. A visualization of the Gram-Schmidt process for three vectors is picuted in the visualization below.
<div style='width:900px'>
</div>
**By Lucas V. Barbosa [Public domain], <a href="https://commons.wikimedia.org/wiki/File:Gram-Schmidt_orthonormalization_process.gif">from Wikimedia Commons</a>**
In our case, suppose we have two initial, orthogonal vectors
<h3>$$
\mathbf{x}_0^1, \mathbf{x}_0^2
$$</h3>
which we will propagate forward. We define for each $j=1,2$,
<h3>$$
\widehat{\mathbf{x}}^j_1 \triangleq \mathbf{M} \mathbf{x}^j_0.
$$</h3>
The first vector will follow the usual power method, i.e.,
<h3>$$
\mathbf{x}^1_1 \triangleq \frac{\widehat{\mathbf{x}}_1^1}{\left\rvert \widehat{\mathbf{x}}_1^1\right\rvert},
$$</h3>
However, we want to separate the second vector <span style='font-size:1.25em'>$\widehat{\mathbf{x}}_1^2$</span> so the new perturbations don't align. We thus remove the components in the direction of <span style='font-size:1.25em'>$\mathbf{x}_1^1$</span>, before we normalize <span style='font-size:1.25em'>$\widehat{\mathbf{x}}_1^2$</span>.
<h3>$$\begin{align}
\mathbf{y}^2_1 &\triangleq \widehat{\mathbf{x}}_1^2- \langle \mathbf{x}_1^1, \widehat{\mathbf{x}}^2_1\rangle \mathbf{x}_1^1 \\
\mathbf{x}^2_1 & \triangleq \frac{\mathbf{y}_1^2}{\left\rvert \mathbf{y}_1^2 \right\rvert}
\end{align}$$</h3>
It is easy to see by definition that <span style='font-size:1.25em'>$\mathbf{x}_1^1, \mathbf{x}_1^2$</span> are orthogonal, but we can also show an important dynamical property with this transformation. Define the following coefficients,
<h3>$$
\begin{align}
U^{11}_1 &=\left\rvert \widehat{\mathbf{x}}_1^1\right\rvert \\
U^{22}_1 &=\left\rvert \mathbf{y}_1^2 \right\rvert \\
U^{12}_1 &= \langle \mathbf{x}^1_1, \widehat{\mathbf{x}}_1^2\rangle
\end{align}
$$<h3>
**Exc 4.32**: Can you write the recursion for the vectors <span style='font-size:1.25em'>$\mathbf{x}_0^1, \mathbf{x}_0^2$</span> transformed into <span style='font-size:1.25em'>$\mathbf{x}_1^1,\mathbf{x}_1^2$</span> with the coefficients <span style='font-size:1.25em'>$U^{ij}_1$</span> defined above in matrix form? Can you write the recursion for an arbitrary number of steps $k\in\{1,2,\cdots\}$?
```python
# Answer
# show_answer('gram-schmidt')
```
The above procedure defines the *naive* QR algorithm --- one should note that there are more computationally efficient versions of this algorithm utilized in standard linear algebra software libraries. However, this simple intuition forms the basis for many powerful theoretical results.
The QR algorithm (in its refined version) is the standard method for computing the <b>[Schur decomposition](https://en.wikipedia.org/wiki/Schur_decomposition)</b> for a matrix, which is used for many purposes as it is a numerically stable alternative to the <b>[Jordan Cannonical Form](https://en.wikipedia.org/wiki/Jordan_normal_form)</b>, pictued below:
<div style='width:900px'>
</div>
**By Jakob.scholbach [<a href="https://creativecommons.org/licenses/by-sa/3.0">CC BY-SA 3.0</a> or <a href="http://www.gnu.org/copyleft/fdl.html">GFDL</a>], <a href="https://commons.wikimedia.org/wiki/File:Jordan_blocks.svg">from Wikimedia Commons</a>**
The Jordan Canonical form is highly appealing as it is the diagonal or "almost-diagonal" form of a matrix. However, this is highly unstable to compute in most applications.
The Schur decomposition relaxes this further, from "almost-diagonal" to upper triangular, another useful form for a matrix. In particular, the Schur decomposition is one approach to find **all eigenvalues** for a matrix, separated into a **chain of descending growth and decay rates**.
<b>Exc 4.34</b>: Suppose a matrix <span style='font-size:1.25em'>$\mathbf{M}$</span> has a Schur decomposition, given as,
<h3> $$ \begin{align}
\mathbf{M} = \mathbf{Q} \mathbf{U} \mathbf{Q}^{\rm T},
\end{align}$$ </h3>
where <span style='font-size:1.25em'>$\mathbf{U}$</span> is strictly upper triangular, and <span style='font-size:1.25em'>$\mathbf{Q}$</span> is orthogonal such that <span style='font-size:1.25em'>$\mathbf{Q}^{\rm T} = \mathbf{Q}^{-1}$</span>. Can you prove that the eigenvalues of <span style='font-size:1.25em'>$\mathbf{M}$</span> are the diagonal elements of <span style='font-size:1.25em'>$\mathbf{U}$?</span>
If <span style='font-size:1.25em'>$\mathbf{Q}^j$</span> is the $j$-th column of <span style='font-size:1.25em'>$\mathbf{Q}^j$</span>, what does the product
<h3>$$\begin{align}
\left(\mathbf{Q}^j\right)^{\rm T} \mathbf{M} \mathbf{Q}^j
\end{align}$$</h3>
equal in terms of the earlier quantities? <b>Hint</b>: how does this relate to the power method?
```python
# Answer
# show_answer('schur_decomposition')
```
<b>Exc 4.36</b>: Can you conjecture what the Schur decomposition will take in the case that the matrix <span style='font-size:1.25em'>$\mathbf{M}$</span> has complex eigenvalues?
```python
# Answer
# show_answer('real_schur')
```
### Next: [Lyapunov vectors and ensemble based covariances](T4 - Lyapunov vectors and ensemble based covariances.ipynb)
| 4bd4dedef8b077e30ad9a4a07aaeed49497564bc | 19,820 | ipynb | Jupyter Notebook | tutorials/DA and the Dynamics of Ensemble Based Forecasting/T3 - Lyapunov exponents and eigenvalues.ipynb | brajard/DAPPER | 1a513b2f23041b15fb335aeb17906607bf2a5350 | [
"MIT"
] | 3 | 2021-07-31T10:13:11.000Z | 2022-01-14T16:52:04.000Z | tutorials/DA and the Dynamics of Ensemble Based Forecasting/T3 - Lyapunov exponents and eigenvalues.ipynb | franktoffel/dapper | 373a27273ea109f349e5edcdcef0cfe0b83b925e | [
"MIT"
] | null | null | null | tutorials/DA and the Dynamics of Ensemble Based Forecasting/T3 - Lyapunov exponents and eigenvalues.ipynb | franktoffel/dapper | 373a27273ea109f349e5edcdcef0cfe0b83b925e | [
"MIT"
] | 3 | 2020-01-25T16:35:00.000Z | 2021-04-08T03:20:48.000Z | 39.325397 | 437 | 0.590061 | true | 4,349 | Qwen/Qwen-72B | 1. YES
2. YES | 0.782662 | 0.824462 | 0.645275 | __label__eng_Latn | 0.937955 | 0.337522 |
# Introduction to sympy
A Python library for symbolic computations
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Python-set-up" data-toc-modified-id="Python-set-up-1"><span class="toc-item-num">1 </span>Python set-up</a></span></li><li><span><a href="#Numbers" data-toc-modified-id="Numbers-2"><span class="toc-item-num">2 </span>Numbers</a></span><ul class="toc-item"><li><span><a href="#Integers" data-toc-modified-id="Integers-2.1"><span class="toc-item-num">2.1 </span>Integers</a></span></li><li><span><a href="#Floats-or-Reals" data-toc-modified-id="Floats-or-Reals-2.2"><span class="toc-item-num">2.2 </span>Floats or Reals</a></span></li><li><span><a href="#Rationals-or-Fractions" data-toc-modified-id="Rationals-or-Fractions-2.3"><span class="toc-item-num">2.3 </span>Rationals or Fractions</a></span></li><li><span><a href="#Surds" data-toc-modified-id="Surds-2.4"><span class="toc-item-num">2.4 </span>Surds</a></span></li><li><span><a href="#Useful-constants" data-toc-modified-id="Useful-constants-2.5"><span class="toc-item-num">2.5 </span>Useful constants</a></span></li><li><span><a href="#Complex-numbers" data-toc-modified-id="Complex-numbers-2.6"><span class="toc-item-num">2.6 </span>Complex numbers</a></span></li><li><span><a href="#Miscellaneous" data-toc-modified-id="Miscellaneous-2.7"><span class="toc-item-num">2.7 </span>Miscellaneous</a></span></li><li><span><a href="#Be-a-little-careful-when-dividing-by-zero-and-of-arithmetic-with-infinities" data-toc-modified-id="Be-a-little-careful-when-dividing-by-zero-and-of-arithmetic-with-infinities-2.8"><span class="toc-item-num">2.8 </span>Be a little careful when dividing by zero and of arithmetic with infinities</a></span></li></ul></li><li><span><a href="#Symbols" data-toc-modified-id="Symbols-3"><span class="toc-item-num">3 </span>Symbols</a></span><ul class="toc-item"><li><span><a href="#Import-symbols-from-sympy.abc" data-toc-modified-id="Import-symbols-from-sympy.abc-3.1"><span class="toc-item-num">3.1 </span>Import symbols from sympy.abc</a></span></li><li><span><a href="#Define--one-symbol-at-a-time" data-toc-modified-id="Define--one-symbol-at-a-time-3.2"><span class="toc-item-num">3.2 </span>Define one symbol at a time</a></span></li><li><span><a href="#Define-multiple-symbols-in-one-line-of-code" data-toc-modified-id="Define-multiple-symbols-in-one-line-of-code-3.3"><span class="toc-item-num">3.3 </span>Define multiple symbols in one line of code</a></span></li><li><span><a href="#Set-the-attributes-of-a-symbol" data-toc-modified-id="Set-the-attributes-of-a-symbol-3.4"><span class="toc-item-num">3.4 </span>Set the attributes of a symbol</a></span></li><li><span><a href="#Check-the-assumptions/properties-of-a-symbol" data-toc-modified-id="Check-the-assumptions/properties-of-a-symbol-3.5"><span class="toc-item-num">3.5 </span>Check the assumptions/properties of a symbol</a></span></li><li><span><a href="#Symbolic-functions" data-toc-modified-id="Symbolic-functions-3.6"><span class="toc-item-num">3.6 </span>Symbolic functions</a></span></li></ul></li><li><span><a href="#Functions" data-toc-modified-id="Functions-4"><span class="toc-item-num">4 </span>Functions</a></span><ul class="toc-item"><li><span><a href="#In-line-arithmetic" data-toc-modified-id="In-line-arithmetic-4.1"><span class="toc-item-num">4.1 </span>In-line arithmetic</a></span></li><li><span><a href="#Absolute-Values" data-toc-modified-id="Absolute-Values-4.2"><span class="toc-item-num">4.2 </span>Absolute Values</a></span></li><li><span><a href="#Factorials" data-toc-modified-id="Factorials-4.3"><span class="toc-item-num">4.3 </span>Factorials</a></span></li><li><span><a href="#Trig-functions" data-toc-modified-id="Trig-functions-4.4"><span class="toc-item-num">4.4 </span>Trig functions</a></span></li><li><span><a href="#Exponential-and-logarithmic-functions" data-toc-modified-id="Exponential-and-logarithmic-functions-4.5"><span class="toc-item-num">4.5 </span>Exponential and logarithmic functions</a></span></li></ul></li><li><span><a href="#Expressions" data-toc-modified-id="Expressions-5"><span class="toc-item-num">5 </span>Expressions</a></span><ul class="toc-item"><li><span><a href="#Creating-an-expression" data-toc-modified-id="Creating-an-expression-5.1"><span class="toc-item-num">5.1 </span>Creating an expression</a></span></li><li><span><a href="#Creating-expressions-from-strings" data-toc-modified-id="Creating-expressions-from-strings-5.2"><span class="toc-item-num">5.2 </span>Creating expressions from strings</a></span></li><li><span><a href="#Substituting-values-into-an-expression" data-toc-modified-id="Substituting-values-into-an-expression-5.3"><span class="toc-item-num">5.3 </span>Substituting values into an expression</a></span></li><li><span><a href="#Simplifying-expressions" data-toc-modified-id="Simplifying-expressions-5.4"><span class="toc-item-num">5.4 </span>Simplifying expressions</a></span><ul class="toc-item"><li><span><a href="#Finding-factors" data-toc-modified-id="Finding-factors-5.4.1"><span class="toc-item-num">5.4.1 </span>Finding factors</a></span></li><li><span><a href="#Expanding-out" data-toc-modified-id="Expanding-out-5.4.2"><span class="toc-item-num">5.4.2 </span>Expanding out</a></span></li><li><span><a href="#Collecting-terms" data-toc-modified-id="Collecting-terms-5.4.3"><span class="toc-item-num">5.4.3 </span>Collecting terms</a></span></li><li><span><a href="#Canceling-common-factors,-expressing-as-$\frac{p}{q}$" data-toc-modified-id="Canceling-common-factors,-expressing-as-$\frac{p}{q}$-5.4.4"><span class="toc-item-num">5.4.4 </span>Canceling common factors, expressing as $\frac{p}{q}$</a></span></li><li><span><a href="#Trig-simplification" data-toc-modified-id="Trig-simplification-5.4.5"><span class="toc-item-num">5.4.5 </span>Trig simplification</a></span></li><li><span><a href="#Trig-expansions" data-toc-modified-id="Trig-expansions-5.4.6"><span class="toc-item-num">5.4.6 </span>Trig expansions</a></span></li><li><span><a href="#Power-simplifications" data-toc-modified-id="Power-simplifications-5.4.7"><span class="toc-item-num">5.4.7 </span>Power simplifications</a></span></li><li><span><a href="#Log-simplifications" data-toc-modified-id="Log-simplifications-5.4.8"><span class="toc-item-num">5.4.8 </span>Log simplifications</a></span></li><li><span><a href="#Rewriting-functions" data-toc-modified-id="Rewriting-functions-5.4.9"><span class="toc-item-num">5.4.9 </span>Rewriting functions</a></span></li></ul></li><li><span><a href="#Solving-expressions" data-toc-modified-id="Solving-expressions-5.5"><span class="toc-item-num">5.5 </span>Solving expressions</a></span><ul class="toc-item"><li><span><a href="#Example-quadratic-solution" data-toc-modified-id="Example-quadratic-solution-5.5.1"><span class="toc-item-num">5.5.1 </span>Example quadratic solution</a></span></li><li><span><a href="#Generalised-quadratic-solution" data-toc-modified-id="Generalised-quadratic-solution-5.5.2"><span class="toc-item-num">5.5.2 </span>Generalised quadratic solution</a></span></li><li><span><a href="#Quadratic-with-a-complex-solution" data-toc-modified-id="Quadratic-with-a-complex-solution-5.5.3"><span class="toc-item-num">5.5.3 </span>Quadratic with a complex solution</a></span></li><li><span><a href="#Manipulating-expressions-to-re-arrange-terms" data-toc-modified-id="Manipulating-expressions-to-re-arrange-terms-5.5.4"><span class="toc-item-num">5.5.4 </span>Manipulating expressions to re-arrange terms</a></span></li></ul></li><li><span><a href="#Plotting-expressions" data-toc-modified-id="Plotting-expressions-5.6"><span class="toc-item-num">5.6 </span>Plotting expressions</a></span></li></ul></li><li><span><a href="#Equations" data-toc-modified-id="Equations-6"><span class="toc-item-num">6 </span>Equations</a></span><ul class="toc-item"><li><span><a href="#Creating-equations" data-toc-modified-id="Creating-equations-6.1"><span class="toc-item-num">6.1 </span>Creating equations</a></span></li><li><span><a href="#Solving-equations" data-toc-modified-id="Solving-equations-6.2"><span class="toc-item-num">6.2 </span>Solving equations</a></span><ul class="toc-item"><li><span><a href="#Rearranging-terms" data-toc-modified-id="Rearranging-terms-6.2.1"><span class="toc-item-num">6.2.1 </span>Rearranging terms</a></span></li><li><span><a href="#Exponential-example" data-toc-modified-id="Exponential-example-6.2.2"><span class="toc-item-num">6.2.2 </span>Exponential example</a></span></li><li><span><a href="#Quadratic-example" data-toc-modified-id="Quadratic-example-6.2.3"><span class="toc-item-num">6.2.3 </span>Quadratic example</a></span></li><li><span><a href="#A-trigonometric-example" data-toc-modified-id="A-trigonometric-example-6.2.4"><span class="toc-item-num">6.2.4 </span>A trigonometric example</a></span></li></ul></li><li><span><a href="#Solving-systems-of-equations" data-toc-modified-id="Solving-systems-of-equations-6.3"><span class="toc-item-num">6.3 </span>Solving systems of equations</a></span><ul class="toc-item"><li><span><a href="#Two-linear-equations" data-toc-modified-id="Two-linear-equations-6.3.1"><span class="toc-item-num">6.3.1 </span>Two linear equations</a></span></li><li><span><a href="#A-linear-and-cubic-system-with-three-point-solutions" data-toc-modified-id="A-linear-and-cubic-system-with-three-point-solutions-6.3.2"><span class="toc-item-num">6.3.2 </span>A linear and cubic system with three point-solutions</a></span></li><li><span><a href="#A-system-of-equations-with-no-solutions" data-toc-modified-id="A-system-of-equations-with-no-solutions-6.3.3"><span class="toc-item-num">6.3.3 </span>A system of equations with no solutions</a></span></li></ul></li></ul></li><li><span><a href="#Limits" data-toc-modified-id="Limits-7"><span class="toc-item-num">7 </span>Limits</a></span><ul class="toc-item"><li><span><a href="#Simple-example" data-toc-modified-id="Simple-example-7.1"><span class="toc-item-num">7.1 </span>Simple example</a></span></li><li><span><a href="#More-complicated-examples" data-toc-modified-id="More-complicated-examples-7.2"><span class="toc-item-num">7.2 </span>More complicated examples</a></span><ul class="toc-item"><li><span><a href="#$f(x)-=-x^n$" data-toc-modified-id="$f(x)-=-x^n$-7.2.1"><span class="toc-item-num">7.2.1 </span>$f(x) = x^n$</a></span></li><li><span><a href="#$f(x)=a^x$" data-toc-modified-id="$f(x)=a^x$-7.2.2"><span class="toc-item-num">7.2.2 </span>$f(x)=a^x$</a></span></li><li><span><a href="#$f(x)=sin(x)$" data-toc-modified-id="$f(x)=sin(x)$-7.2.3"><span class="toc-item-num">7.2.3 </span>$f(x)=sin(x)$</a></span></li></ul></li><li><span><a href="#Limits,-where-the-direction-in-which-we-approach-the-limit-is-important" data-toc-modified-id="Limits,-where-the-direction-in-which-we-approach-the-limit-is-important-7.3"><span class="toc-item-num">7.3 </span>Limits, where the direction in which we approach the limit is important</a></span></li></ul></li><li><span><a href="#Derivatives" data-toc-modified-id="Derivatives-8"><span class="toc-item-num">8 </span>Derivatives</a></span><ul class="toc-item"><li><span><a href="#First,-second-and-subsequent-derivatives" data-toc-modified-id="First,-second-and-subsequent-derivatives-8.1"><span class="toc-item-num">8.1 </span>First, second and subsequent derivatives</a></span></li><li><span><a href="#Partial-derivatives" data-toc-modified-id="Partial-derivatives-8.2"><span class="toc-item-num">8.2 </span>Partial derivatives</a></span></li></ul></li><li><span><a href="#Integrals" data-toc-modified-id="Integrals-9"><span class="toc-item-num">9 </span>Integrals</a></span><ul class="toc-item"><li><span><a href="#Definite-Integrals" data-toc-modified-id="Definite-Integrals-9.1"><span class="toc-item-num">9.1 </span>Definite Integrals</a></span></li><li><span><a href="#Indefinite-integrals" data-toc-modified-id="Indefinite-integrals-9.2"><span class="toc-item-num">9.2 </span>Indefinite integrals</a></span></li><li><span><a href="#sympy-cannot-evaluate-some-integrals" data-toc-modified-id="sympy-cannot-evaluate-some-integrals-9.3"><span class="toc-item-num">9.3 </span>sympy cannot evaluate some integrals</a></span></li></ul></li><li><span><a href="#Sums" data-toc-modified-id="Sums-10"><span class="toc-item-num">10 </span>Sums</a></span><ul class="toc-item"><li><span><a href="#Infinite-sums" data-toc-modified-id="Infinite-sums-10.1"><span class="toc-item-num">10.1 </span>Infinite sums</a></span></li><li><span><a href="#Finite-sums" data-toc-modified-id="Finite-sums-10.2"><span class="toc-item-num">10.2 </span>Finite sums</a></span></li></ul></li><li><span><a href="#Taylor-series-expansion" data-toc-modified-id="Taylor-series-expansion-11"><span class="toc-item-num">11 </span>Taylor series expansion</a></span><ul class="toc-item"><li><span><a href="#A-finite-Taylor-series" data-toc-modified-id="A-finite-Taylor-series-11.1"><span class="toc-item-num">11.1 </span>A finite Taylor series</a></span></li><li><span><a href="#Infinite-Taylor-series" data-toc-modified-id="Infinite-Taylor-series-11.2"><span class="toc-item-num">11.2 </span>Infinite Taylor series</a></span></li></ul></li><li><span><a href="#Matrices-/-Linear-Algebra" data-toc-modified-id="Matrices-/-Linear-Algebra-12"><span class="toc-item-num">12 </span>Matrices / Linear Algebra</a></span></li><li><span><a href="#The-End" data-toc-modified-id="The-End-13"><span class="toc-item-num">13 </span>The End</a></span></li></ul></div>
## Python set-up
Install with pip or conda (as appropriate to your system)
```python
from platform import python_version
python_version() # version of python on my machine
```
'3.9.7'
```python
import sympy as sp
sp.__version__ # version of sympy on my machine
```
'1.9'
```python
# This makes the notebook easier to read ...
sp.init_printing(use_unicode=True)
```
## Numbers
### Integers
```python
sp.Integer(5) # this is a sympy integer
```
### Floats or Reals
```python
sp.Float(1 / 2) # this is a sympy float
```
### Rationals or Fractions
Hint: using Rationals in calculus should be preferred over using floats, as it will yield easire to understand symbolic answers.
```python
y = sp.Rational(1, 3) # this is a sympy Rational
y
```
```python
# get the numeric value for an expression to n decimal places
y.n(22)
```
```python
# we can do the usual maths with Rationals
sp.Rational(3, 4) + sp.Rational(1, 3)
```
```python
# Note: if we divide sympy Integers, we also get a Rational
sp.Integer(1) / sp.Integer(4)
```
```python
# We get a sympy Rational even when one of the numerator or denominator is a python integer.
sp.Integer(5) / 2
```
```python
# getting the numerator and denominator
numerator, denominator = sp.fraction(sp.Rational(-55, 10))
numerator, denominator
```
```python
# or ...
r = sp.Rational(-55, 10)
numerator = r.numerator
denominator = r.denominator
numerator, denominator
```
```python
# It is a little challenging to represent an improper Rational
# as a mixed fraction or mixed numer (whole number plus fraction)
def mixed_number(rational: sp.Rational):
numerator, denominator = sp.fraction(rational)
whole = sp.Abs(numerator) // sp.Abs(denominator)
part = (
sp.Rational(sp.Abs(numerator) % sp.Abs(denominator),
sp.Abs(denominator))
)
with sp.evaluate(False):
# Use the context manager to avoid simplification back to
# a Rational. And make sure we have the correct sign ...
mixed_number = whole + part if rational >= 0 else (- whole - part)
return mixed_number
mixed_number(sp.Rational(-55, 10))
```
```python
_.n()
```
### Surds
```python
sp.sqrt(8)
# Note, surds are automatically simplified if possible
```
```python
# if you don't want the simplification
sp.sqrt(8, evaluate=False)
```
```python
# or you can use this context manager to avoid evaluation
with sp.evaluate(False):
y = sp.sqrt(8)
y
```
```python
sp.N(_) # numberic value for the last calculation
```
```python
sp.cbrt(3) # cube roots
```
```python
sp.real_root(4, 6) # nth real root
```
```python
# Use a context manager to prevent auto-simplification
with sp.evaluate(False):
t = sp.Integer(-2) ** sp.Rational(-1, 2)
t
```
```python
# same as previous cell with simplification
1 / sp.sqrt(-2)
```
### Useful constants
Remember, these constants are symbols, not their approximate values
```python
sp.pi # use sp.pi for 𝜋
```
```python
sp.E # capital E for the base of the natural logarithm (Euler's number)
```
```python
sp.I # capital I for the square root of -1
```
```python
sp.I ** 2
```
```python
sp.oo # oo (two lower-case letters o) for infinity
```
```python
-sp.oo # negative infinity
```
```python
# This is the "not a number" construct for sympy
sp.nan # in a result, this typically means undefined ...
```
### Complex numbers
```python
z = 3 + 4 * sp.I # Construct complex numbers
z
```
```python
sp.re(z), sp.im(z) # get the real and imaginary parts of z
```
```python
sp.Abs(z) # the Absolute value of a complex number
# It's distance from the origin on the Argand plane
```
```python
# To obtain the complex conjugate
t = sp.conjugate(4 + 5j) # from a python complex number (ugly)
s = (4 + 5 * sp.I).conjugate() # from a sympy complex number (better)
display(t, s)
```
### Miscellaneous
```python
sp.prime(5) # get the nth prime number
```
```python
sp.pi.evalf(5) # evaluate to a python floating with n decimal places
```
### Be a little careful when dividing by zero and of arithmetic with infinities
The results may differ with your expectations
```python
# This is undefined ...
sp.oo - sp.oo
```
```python
# This is also undefined ...
sp.Integer(0) / sp.Integer(0)
```
```python
# I would have pegged this one as undefined
# or as complex infinity. But not as real infinity???
sp.oo / 0
```
```python
sp.Rational(1, 0) # yields complex infinity
```
```python
sp.Integer(1) / sp.Integer(0) # Also yeilds complex infinity
```
## Symbols
You must define symbols before using them with sympy.
To avoid confusion, match the name of the symbol to the python variable name.
There are a number of ways to create a sympy symbol ...
### Import symbols from sympy.abc
```python
# The quick and easy way to get English and Greek letter names
from sympy.abc import a, b, c, x, n, alpha, beta, gamma, delta
alpha
```
```python
delta
```
### Define one symbol at a time
```python
a = sp.Symbol('a') # defining a symbol, one at a time
```
### Define multiple symbols in one line of code
```python
x, y, z = sp.symbols('x y z') # define multiple symbols at a time
```
### Set the attributes of a symbol
```python
# you can set attributes for a symbol
i, j, k = sp.symbols('i, j, k', integer=True, positive=True)
```
### Check the assumptions/properties of a symbol
```python
i.assumptions0
```
{'integer': True,
'commutative': True,
'complex': True,
'extended_real': True,
'finite': True,
'infinite': False,
'rational': True,
'irrational': False,
'noninteger': False,
'algebraic': True,
'imaginary': False,
'hermitian': True,
'transcendental': False,
'real': True,
'positive': True,
'extended_negative': False,
'nonnegative': True,
'zero': False,
'extended_nonpositive': False,
'nonpositive': False,
'extended_nonzero': True,
'nonzero': True,
'negative': False,
'extended_nonnegative': True,
'extended_positive': True}
### Symbolic functions
```python
# We can also declare that symbols are [undefined] functions
x = sp.Symbol('x')
f = sp.Function('f')
f
```
f
```python
# Including as functions that take arguments
g = sp.Function('g')(x)
g
```
```python
x, y = sp.symbols('x y')
h = sp.Function('h')(x, y)
h
```
```python
# And we can do multiple functions at once
f, g = sp.symbols('f g', function=True)
f.assumptions0
```
{'function': True, 'commutative': True}
## Functions
### In-line arithmetic
sympy recognizes the usual python in-line operators, and applies the proper order of operations
```python
# python operators work as expected
x, y = sp.symbols('x y')
x + x - 2 * y * x / x ** 3
# Note: some simplification ocurred
```
```python
with sp.evaluate(False):
y = sp.E ** (sp.I * sp.pi) + 1
y
```
```python
# The .doit() method evaluates an expression ...
y.doit() # Thank you Euler
```
```python
# Note: While this might look like a Rational,
# This is not a Rational, rather, it is a division
p, q = sp.symbols('p q', Integer=True)
frac = p / q
frac
```
```python
# Use sp.numer() and sp.denom() to get the numerator, denominator
sp.numer(frac), sp.denom(frac)
```
### Absolute Values
```python
# Absolute value
x = sp.Symbol('x')
sp.Abs(x)
```
### Factorials
```python
sp.factorial(4, evaluate=False) # factorials
```
### Trig functions
```python
from sympy.abc import theta
sp.sin(theta) # also cos, tan,
```
```python
sp.asin(1) # also acos, atan
```
```python
# secant example (which is the reciprocol of cos(𝜃))
sp.sec(theta) # also csc and cot for cosecant and cotangent
```
### Exponential and logarithmic functions
```python
sp.exp(x)
```
```python
# Which is the same as ...
sp.E ** x
```
```python
sp.E ** x == sp.exp(x)
```
True
```python
sp.log(x) # log to the base of e
```
```python
sp.log(x, 10) # log to the base of 10
```
## Expressions
### Creating an expression
```python
# We start by defining the symbols used in the expression
x = sp.Symbol('x')
# Then we use those symbols to create an expression.
y = x + x + x * x # Note: This assigns a sympy expression
# to the python variable y.
# This does not create an equation.
# sympy will collect simple polynomial terms automatically
y
```
### Creating expressions from strings
Note: the string should be a valid python string
```python
y = sp.simplify('m ** 2 + 2 * m - 8')
y
```
### Substituting values into an expression
```python
x, y = sp.symbols('x y')
f = x ** 2 + y
f
```
```python
f.subs(x, 4)
```
```python
x, a, b, c = sp.symbols('x a b c')
quadratic = a * (x ** 2) + b * x + c
quadratic
```
```python
# multiple substitutions with a dictionary
quadratic.subs({a:1, b:2, c:1})
```
### Simplifying expressions
Note: Often the .simplify() method is all you need. However, simplifying with the .simplify() method is not well defined. You may need to call a specific simplification method that is well defined to achieve a desired result.
Note: beyond some minimal simplification, sympy does not automatically simplify your expressions.
Note: this is not a complete list of simplification functions
```python
# Often the .simplify() method is all you need ...
x = sp.Symbol('x')
y = ((2 * x) / (x - 1)) - ((x ** 2 - 1) / (x - 1) ** 2)
y
```
```python
# This expression can be simplified to the number 1
y.simplify()
```
```python
# Another example ...
x = sp.Symbol('x')
with sp.evaluate(False):
# this context manager prevents any
# automatic simplification, resulting
# in a very ugly expression
y = (2 / (x + 3)) / (1 + (3 / x))
y
```
```python
y.simplify()
```
#### Finding factors
```python
x = sp.Symbol('x')
y = x ** 2 - 1
y
```
```python
y.factor()
```
#### Expanding out
```python
x = sp.Symbol('x')
y = (x + 1) * (x - 1)
y
```
```python
y.expand() # for polynomials, this is the opposite to .factor()
```
#### Collecting terms
```python
x, y, z = sp.symbols('x y z')
expr = x * y + 3 * x ** 2 + 2 * x ** 3 + z * x ** 2
expr
```
```python
expr.collect(x)
```
#### Canceling common factors, expressing as $\frac{p}{q}$
```python
x = sp.Symbol('x')
y = (x**2 - 1) / (x - 1)
y
```
```python
y.cancel()
```
#### Trig simplification
```python
x = sp.Symbol('x')
y = (sp.tan(x) ** 2) / ( sp.sec(x) ** 2 )
y
```
```python
y.trigsimp()
```
#### Trig expansions
```python
x, y = sp.symbols('x y')
f = sp.sin(x + y)
f
```
```python
sp.expand_trig(f)
```
#### Power simplifications
```python
x, a, b = sp.symbols('x a b')
y = x ** a * x ** b
y
```
```python
y.powsimp()
```
#### Log simplifications
```python
# Note: positive constraint in the next line ...
a, b = sp.symbols('a b', positive=True)
y = sp.log(a * b)
y
```
```python
sp.expand_log(y)
```
```python
y = sp.log(a / b)
y
```
```python
sp.expand_log(y)
```
#### Rewriting functions
```python
# rewite an expression in terms of a particular function
x = sp.Symbol('x')
y = sp.tan(x)
y
```
```python
# express our tan(x) function in terms of sin
y.rewrite(sp.sin)
```
```python
# express our tan(x) function in terms of the exponetial function
y.rewrite(sp.exp)
```
### Solving expressions
Solving these expressions assumes they are equations that are equal to zero.
#### Example quadratic solution
```python
# solve a quadratic equation
x = sp.Symbol('x')
sp.solve(x ** 2 - 1, x) # solve expression in nrespect of x
# yields two possible solutions
```
#### Generalised quadratic solution
```python
# More generally ...
a, b, c, x = sp.symbols('a b c x')
y = a * x ** 2 + b * x + c # standard quadratic equation
sp.solve(y, x) # yields a list of two possible solutions
```
#### Quadratic with a complex solution
```python
# and if the only solutions are complex ...
sp.solve(x ** 2 + 2 * x + 10, x)
# yields a list of two possible solutions
```
#### Manipulating expressions to re-arrange terms
```python
# rearrange terms ...
x, y = sp.symbols('x y')
f = x ** 2 - 2 * x * y + 3
sp.solve(f, y) # solve for y = ...; yeilds one possible solution
```
### Plotting expressions
```python
x = sp.Symbol('x')
expr = sp.sin(x) ** 2 + sp.cos(x)
expr
```
```python
plot = sp.plot(expr, show=True)
print(type(k))
```
```python
# plot multiple lines at once
sp.plot(sp.sin(x), sp.cos(x), legend=True, show=True)
```
```python
from sympy.plotting import plot3d
x, y = sp.symbols('x y')
plot = plot3d(x**2 + y**2, show=True)
```
## Equations
### Creating equations
```python
# Note: we use sp.Eq(left_expr, right_expr)
# to create an equation in sympy
x, y = sp.symbols('x y')
sp.Eq(y, 3 * x + 2)
```
### Solving equations
#### Rearranging terms
```python
x, y = sp.symbols('x y')
eqn = sp.Eq(x ** 2 + 2 * x * y - 1, 0)
eqn # this is our equation
```
```python
solution = sp.solve(eqn, y)
solution # yields a list of solutions,
# in this case a list of 1 ...
```
```python
# Which we can turn back into an equation
sp.Eq(y, solution[0])
```
#### Exponential example
```python
a, b, x = sp.symbols('a b x')
eq = sp.Eq(a ** b, sp.E ** x)
eq # our equation
```
```python
sp.Eq(x, sp.solve(eq, x)[0])
```
#### Quadratic example
```python
y = sp.Symbol('y')
eq = sp.Eq(x ** 2 - 2 * x, 0)
sp.solve(eq)
```
```python
y = sp.Symbol('y')
eq = sp.Eq(x ** 2 - 2 * x, y)
sp.solve(eq)
```
```python
y = sp.Symbol('y')
eq = sp.Eq(x ** 2 - 2 * x, y)
sp.solve(eq, x) # solve for x
```
#### A trigonometric example
```python
x = sp.Symbol('x')
eq = sp.Eq(sp.sin(x) ** 2 + sp.cos(x), 0)
sp.solve(eq)
```
```python
# solveset allows us to capture tbe set of infinite solutions
sp.solveset(eq)
```
### Solving systems of equations
#### Two linear equations
```python
x, y = sp.symbols('x y')
eq1 = sp.Eq(y, 3 * x + 2) # an equation
eq1
```
```python
eq2 = sp.Eq(y, -2 * x - 1)
eq2
```
```python
sp.solve([eq1, eq2], [x, y])
```
#### A linear and cubic system with three point-solutions
```python
eq1 = sp.Eq(y, x)
eq2 = sp.Eq(y, sp.Rational(1, 10) * x ** 3)
sp.solve([eq1, eq2], [x, y])
```
#### A system of equations with no solutions
```python
eq1 = sp.Eq(y, x)
eq2 = sp.Eq(y, x + 1)
sp.solve([eq1, eq2], [x, y])
```
## Limits
### Simple example
```python
x = sp.Symbol('x')
expr = (x * x - 1)/(x - 1)
expr
```
```python
# the function of this expression is a straight line
sp.plot(expr, xlim=(-4,4), ylim=(-2,6))
```
```python
# But our expression is not defined when x is 1
# as it evaluates to zero divided by zero
expr.subs(x, 1)
```
```python
# The limit as x approaches 1
sp.limit(expr, x, 1) # using the global limit() function
```
```python
expr.limit(x, 1) # using the .limit() method
```
```python
# We can display the limit with the Limit() function
# Note: by default, this is the limit approached from
# the positive side.
lim = sp.Limit(expr, x, 1)
lim
```
```python
# And we can use the .doit() method to calculate the limit
lim.doit()
```
### More complicated examples
Calculate the derivative from first principles (using limits), for a selection of functions
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$
#### $f(x) = x^n$
```python
# This is our generic function x to the power of n
def f(x, n):
return x ** n
x, h, n = sp.symbols('x h n')
f(x, n) # what is our function f(x, n) ...
```
```python
# Apply the limit to our function ...
# Note: our arguments x, h and n are sympy symbols
lim = sp.Limit((f(x + h, n) - f(x, n))/h, h, 0)
lim
```
```python
# Calculate the limit ...
lim.doit()
```
#### $f(x)=a^x$
```python
# Let's change the function to an exponential: a ** x
def f(a, x):
return a ** x
x, h, a = sp.symbols('x h a')
f(a, x) # what is our function f(x) ...
```
```python
# Apply the limit to our function ...
lim = sp.Limit((f(a, x + h) - f(a, x))/h, h, 0)
lim
```
```python
# Calculate the limit ...
lim.doit()
```
#### $f(x)=sin(x)$
```python
# One last example, the derivative of sin(x)
def f(x):
return sp.sin(x)
x, h = sp.symbols('x h')
f(x) # Our function f(x) = sin(x)
```
```python
# Apply the limit to our function ...
lim = sp.Limit((f(x + h) - f(x))/h, h, 0)
lim
```
```python
# And evaluating the limit
lim.doit()
```
### Limits, where the direction in which we approach the limit is important
```python
x = sp.Symbol('x')
expr = 1 / x
expr
```
```python
sp.plot(expr, xlim=(-8, 8), ylim=(-8, 8))
```
```python
# Let's display the limit from the positve direction
lim = sp.Limit(expr, x, 0, '+')
lim
```
```python
# And calculate it ...
lim.doit()
```
```python
# And the limit from the negative direction
expr.limit(x, 0, '-')
```
```python
# We can also do the limit from both directions
expr.limit(x, 0, '+-') # which yields complex infinity
```
## Derivatives
### First, second and subsequent derivatives
For one-variable expressions ... sympy has multiple ways to get the ordinary derivative
* using the `.diff()` method on an expression
* using the `diff()` function on an expression
* using the combined `Derivative().doit()` function/method on an expression
```python
# Let's generate a polynomial ...
x = sp.Symbol('x')
y = 3 * x ** 4 + 2 * x ** 2 - x - 1
y
```
```python
# provide the symbolic forumala for the first derivative
y_dash = sp.Derivative(y, x)
y_dash
```
```python
# And calculate the differential ...
y_dash.doit()
```
```python
# Also ... using the .diff() method
y.diff(x) # differentiate with respect to x
```
```python
# Also using the diff() function
sp.diff(y, x) # differentiate with respect to x
```
```python
# provide the symbolic formula for the second derivative
y_2dash = sp.Derivative(y, x, 2)
y_2dash
```
```python
# second derivative can also be done like this
y_2dash.doit()
```
```python
# Also ...
y.diff(x, x) # differentiate twice in respect of x
```
```python
# Also ...
y.diff(x, 2)
```
```python
# And the formula for the third Derivative
y_3dash = sp.Derivative(y, x, 3)
y_3dash
```
```python
# third derivative (and so on ...)
y_3dash.doit()
```
```python
# Also ...
y.diff(x, 3)
```
```python
# Also ...
y.diff(x, x, x)
```
```python
# Also ...
sp.diff(y, x, 3)
```
```python
# Generalisations ...
a, x = sp.symbols('a x')
(x ** a).diff(x).simplify()
```
```python
# Generalisations ...
a, x = sp.symbols('a x')
(a ** x).diff(x)
```
### Partial derivatives
As with the above differentials, there are multiple ways to do this ...
```python
x, y = sp.symbols('x y')
g = x* y ** 2 + x ** 3
g
```
```python
# The first partial derivative of the expression with respect to x
partial_x = sp.Derivative(g, x, 1)
partial_x
```
```python
# Calculate ...
partial_x.doit()
```
```python
# And the first order partial derivative in respect of y
partial_y = sp.Derivative(g, y, 1)
partial_y.doit()
```
## Integrals
### Definite Integrals
```python
# Definite integral using the integrate() function
# The tuple contains (with_respect_to, lower_limit, upper_limit)
x = sp.Symbol('x')
f = sp.sin(x)
y = sp.Integral(f, (x, 0, sp.pi / 2))
y
```
```python
# we can then calculate it as follows
y.doit()
```
```python
# We can calculate the definite interval using the .integrate() method
x = sp.Symbol('x')
f.integrate((x, 0, sp.pi / 2))
```
```python
# We can calculate the definite interval using the integrate() function
sp.integrate(f, (x, 0, sp.pi / 2))
```
### Indefinite integrals
***Caution***: sympy does not yield the constant of integration (the "+ C") that arises from the indefinite integral. So technically, we are getting the anti-derivative, rather than the indefinite integral. Note: the constant of integration is netted out when the definite integral is calculated.
```python
x = sp.Symbol('x')
y = x ** 2 + 2 * x
y.integrate(x) # integreate in respect of x
```
```python
x = sp.Symbol('x')
sp.log(x).integrate(x)
```
```python
x = sp.Symbol('x')
sp.sin(x).integrate(x)
```
### sympy cannot evaluate some integrals
If `integrate()` is unable to compute an integral, it returns an unevaluated Integral object.
```python
# for example ... sysmpy cannot calculate this integral
sp.log(sp.sin(x)).integrate(x)
```
## Sums
Sums can be achieved with either the summation function or the Sum constructor
### Infinite sums
```python
# A sum from the sum constructor
# Note: the second term is the tuple: (index, lower_bound, upper_bound)
# Where: the range is from lower_bound to upper_bound inclusive
x, n = sp.symbols('x n')
s = sp.Sum(6 * (x ** -2), (x, 1, sp.oo)) # sum constructor
s # display the sum
```
```python
s.doit() # evaluate the sum
```
```python
s.n() # approximate value
```
```python
# A sum using the summation function
x = sp.symbols('x')
s = sp.summation(90 / x ** 4, (x, 1, sp.oo))
s
```
```python
# A sum using the summation function
# with a defined python function
n = sp.symbols('n')
def f(n): return 945 / (n ** 6) # a defined function
s = sp.summation(f(n), (n, 1, sp.oo))
s
```
```python
# And another example that sums to one
x = sp.symbols('x')
with sp.evaluate(False):
expr = 1 / (2 ** x)
expr
```
```python
s = sp.Sum(expr, (x, 1, sp.oo))
s
```
```python
s.doit()
```
### Finite sums
```python
x, a, b, c, n = sp.symbols('x a b c n')
quad = a * (x ** 2) + b * x + c
quad
```
```python
quad_sum = sp.summation(1 / quad, (x, 0, n))
quad_sum
```
```python
quad_sum.subs(n, 10)
```
```python
quad_sum.subs({a:1, b:2, c:1, n:10})
```
```python
_.n() # previous value approximated ...
```
```python
quad_sum = sp.summation(1 / quad, (x, 0, 10))
quad_sum
```
```python
quad_sum.subs({a:1, b:2, c:1})
```
```python
_.n() # previous value approximated ...
```
## Taylor series expansion
Taylor series is a technique to approximate a function at/near a point using polynomials. The technique requires that multiple n-order derivatives can be found for the function. It works well with exponential and trigonometric functions. The series used to approximate the function over a range, using a specified number of polynomial terms, or it can be expressed as an infinite series.
Why do it? Mathematically, polynomials are sometimes easier to work with. These approximations are remarkably accurate with just a small number of terms.
### A finite Taylor series
```python
x = sp.Symbol('x')
s = sp.series(sp.cos(x), x, x0=0, n=6) # at x=0, for six polynomial terms
# noting that the odd powers for our
# polynomial are zero.
s # note the donominators are 0!, 2!, 4!, 6! ... (where 0! equals 1)
s
```
```python
# We can remove the Big-O notation
s.removeO()
```
```python
# We can compare cos(0.5) with our taylor-approximation of cos(0.5)
# and see for this point-value it is accurate to three decimal places.
point = 0.5
print(f'cos={sp.cos(point)} Taylor={s.removeO().subs(x, point).n()}')
```
cos=0.877582561890373 Taylor=0.877604166666667
```python
# This Taylor series looks like it provides
# a workable approximation for cos(x) between
# at least x=-π/4 and +π/4
sp.plot(sp.cos(x), s.removeO(),
legend=True,
show=True, ylim=(-1,1.5))
```
```python
# Let's try plotting a few more terms, which expands the
# range for which our approximation provides good results.
x = sp.Symbol('x')
s_6 = sp.series(sp.cos(x), x, x0=0, n=6).removeO()
s_16 = sp.series(sp.cos(x), x, x0=0, n=16).removeO()
sp.plot(sp.cos(x), s_6, s_16,
#legend=True,
show=True, ylim=(-1.3,1.3))
```
```python
# Let's compare the two plotted approximations at π/4
p = sp.pi / 4
print(f'cos({p})={sp.cos(p.evalf())}; ')
print(f'n=6--> {s_6.subs(x, p).n()}, ')
print(f'n=16--> {s_16.subs(x, p).n()}')
```
cos(pi/4)=0.707106781186548;
n=6--> 0.707429206709773,
n=16--> 0.707106781186547
### Infinite Taylor series
```python
# Instead of specifying n for the order of the polynomial, we set n=None
t_log = sp.series(sp.log(x), x=x, x0=1, n=None)
t_log # this returns a python generator for the series.
```
<generator object Expr.series.<locals>.<genexpr> at 0x1880e66d0>
```python
# Let's display the first 8 terms of this series
# Note: generators can only be used once in Python ...
lst = []
for i in range(8):
lst.append(next(t_log))
display(lst)
```
```python
sum(lst) # which we can sum in python ...
```
## Matrices / Linear Algebra
```python
```
```python
```
```python
```
## The End
```python
```
| 6c10dd658eb51a794c7dc029ec3cf318805b61c5 | 704,626 | ipynb | Jupyter Notebook | notebooks/Introduction to sympy.ipynb | bpalmer4/code-snippets | e87f7baade69ff25e062f54d58b3f6f611c4c283 | [
"MIT"
] | null | null | null | notebooks/Introduction to sympy.ipynb | bpalmer4/code-snippets | e87f7baade69ff25e062f54d58b3f6f611c4c283 | [
"MIT"
] | null | null | null | notebooks/Introduction to sympy.ipynb | bpalmer4/code-snippets | e87f7baade69ff25e062f54d58b3f6f611c4c283 | [
"MIT"
] | null | null | null | 113.81457 | 101,124 | 0.865276 | true | 12,022 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.887205 | 0.80261 | __label__eng_Latn | 0.713386 | 0.703065 |
# Lecture 4 - SciPy
What we have seen so far
- How to setup a python environment and jupyter notebooks
- Basic python language features
- Introduction to NumPy
- Plotting using matplotlib
Scipy is a collection of packages that provide useful mathematical functions commonly used for scientific computing.
List of subpackages
- cluster : Clustering algorithms
- constants : Physical and mathematical constants
- fftpack : Fast Fourier Transform routines
- integrate : Integration and ordinary differential equation solvers
- interpolate : Interpolation and smoothing splines
- io : Input and Output
- linalg : Linear algebra
- ndimage : N-dimensional image processing
- odr : Orthogonal distance regression
- optimize : Optimization and root-finding routines
- signal : Signal processing
- sparse : Sparse matrices and associated routines
- spatial : Spatial data structures and algorithms
- special : Special functions
- stats : Statistical distributions and functions
We cannot cover all of them in detail but we will go through some of the packages and their capabilities today
- interpolate
- optimize
- stats
- integrate
We will also briefly look at some other useful packages
- networkx
- sympy
```python
import numpy as np
import matplotlib.pyplot as plt
```
## Interpolation : `scipy.interpolate`
```python
import scipy.interpolate as interp
```
```python
x = np.linspace(-1,2,5);
y = x**2
plt.plot(x,y,'ro')
```
```python
f = interp.interp1d(x,y,kind="linear")
```
```python
type(f)
```
scipy.interpolate.interpolate.interp1d
```python
x_fine = np.linspace(-1,2,100)
plt.plot(x_fine,f(x_fine))
plt.plot(x,y,'ro')
```
```python
plt.plot(x_fine,interp.interp1d(x,y,kind="zero")(x_fine))
plt.plot(x_fine,interp.interp1d(x,y,kind="linear")(x_fine))
plt.plot(x_fine,interp.interp1d(x,y,kind="cubic")(x_fine))
plt.plot(x,y,'ro')
```
```python
interp.interp1d?
```
```python
interp.interp2d?
```
## Optimization : `scipy.optimize`
Contains functions to find minima, roots and fit parameters
```python
from scipy import optimize
```
```python
def f(x):
return x**2 + np.sin(2*x)
```
```python
x = np.linspace(-5,5,100)
plt.plot(x,f(x));
```
```python
results = optimize.minimize(f, -4)
results
```
fun: -0.5920740012779424
hess_inv: array([[0.18432423]])
jac: array([1.49011612e-07])
message: 'Optimization terminated successfully.'
nfev: 33
nit: 6
njev: 11
status: 0
success: True
x: array([-0.51493324])
```python
x_opt = results.x
```
```python
plt.plot(x,f(x));
plt.plot(x_opt,f(x_opt),'ro');
```
```python
optimize.minimize?
```
```python
def f(x):
return x[0]*x[0] + x[1]*x[1] + 5*(np.sin(2*x[0]) + np.sin(2*x[1]) )
```
```python
x=np.linspace(-5,5,100)
y=np.linspace(-5,5,100)
X,Y = np.meshgrid(x,y)
```
```python
plt.imshow(f((X,Y)))
```
```python
optimize.minimize(f,x0=[2,2])
```
fun: 0.07912876341589659
hess_inv: array([[ 0.52488677, -0.47511323],
[-0.47511323, 0.52488677]])
jac: array([1.1920929e-07, 1.1920929e-07])
message: 'Optimization terminated successfully.'
nfev: 24
nit: 4
njev: 6
status: 0
success: True
x: array([2.13554766, 2.13554766])
You can use the function `basinhopping` to find the global minima
```python
optimize.basinhopping(f,[1,4])
```
```python
optimize.basinhopping?
```
## Curve Fitting
```python
x = np.linspace(-2,2,30)
y = x+np.sin(5.2*x)+0.3*np.random.randn(30)
plt.plot(x,y,'ro')
```
```python
def f(x,a,b,c):
return a*x + b*np.sin(c*x)
```
```python
((a,b,c),cov) = optimize.curve_fit(f,x,y,(0,0,4))
a,b,c
```
(1.0112575930259058, 1.0024366873075972, 5.1915165536399055)
```python
cov
```
array([[ 0.001428 , -0.00016983, 0.00077266],
[-0.00016983, 0.00355949, -0.00031009],
[ 0.00077266, -0.00031009, 0.00268351]])
```python
x_fine = np.linspace(-2,2,200)
plt.plot(x_fine,f(x_fine,a,b,c))
plt.plot(x,y,'ro')
```
### Root Finding
```python
def f(x):
return (x+2)*(x-1)*(x-5)
```
```python
optimize.root(f,0)
```
fjac: array([[-1.]])
fun: array([7.99360578e-15])
message: 'The solution converged.'
nfev: 8
qtf: array([-1.11436793e-08])
r: array([12.00000791])
status: 1
success: True
x: array([1.])
## Statistics : `scipy.stats`
```python
from scipy import stats
```
Find the maximum likelihood estimate for parameters
```python
samples = 3*np.random.randn(1000)+2
plt.hist(samples);
```
```python
stats.norm.fit(samples)
```
(1.9458130358594379, 2.9667022271670134)
```python
np.mean(samples),np.median(samples)
```
(1.9458130358594379, 1.996030772212961)
```python
stats.scoreatpercentile(samples,20)
```
-0.5886852239653766
```python
a = np.random.randn(30)
b = np.random.randn(30) + 0.1
```
```python
stats.ttest_ind(a,b)
```
Ttest_indResult(statistic=-1.1920408671206213, pvalue=0.23809979134147563)
You can also perform kernel density estimation
```python
x = np.concatenate(( 2*np.random.randn(1000)+5, 0.6*np.random.randn(1000)-1) )
```
```python
plt.hist(x);
```
```python
pdf = stats.kde.gaussian_kde(x)
```
```python
counts,bins,_ = plt.hist(x)
x_fine=np.linspace(-2,10,100)
plt.plot(x_fine,np.sum(counts)*pdf(x_fine))
```
```python
bins
```
## Numerical Integration : `scipy.integrate`
```python
import scipy.integrate as integ
```
You can compute integral using the `quad` funtion
```python
def f(x):
return x**2 + 5*x + np.sin(x)
```
```python
integ.quad(f,-1,1)
```
```python
integ.quad?
```
You can also solve ODEs of the form
$$ \frac{dy}{dt} = f(y,t) $$
```python
def f(y,t):
return (y[1], -y[1]-9*y[0])
```
```python
t = np.linspace(0,10,100)
Y = integ.odeint(f,[1,1],t)
```
```python
plt.plot(t,Y[:,1])
```
# Other useful packages
## `networkx`
Useful Package to handle graphs.
Install by running `conda install networkx`
```python
import networkx as nx
```
```python
G = nx.Graph()
G.add_nodes_from([1,2,3,4])
G.add_edge(1,2)
G.add_edge(2,3)
G.add_edge(3,1)
G.add_edge(3,4)
```
```python
nx.draw(G)
```
```python
G = nx.complete_graph(10)
nx.draw(G)
```
## `sympy`
Package for performing symbolic computation and manipulation.
Install it in your environment by running `conda install sympy`
```python
from sympy import *
```
```python
x,y = symbols("x y")
```
```python
expr = x+y**2
```
```python
x*expr
```
```python
expand(x*expr)
```
```python
factor(x**2 -2*x*y + y**2)
```
```python
latex(expr)
```
```python
init_printing()
```
```python
simplify( (x-y)**2 + (x+y)**2)
```
```python
x**2/(y**3+y)
```
```python
(x**2/(y**3+y)).subs(y,1/(1+x))
```
```python
(x**2/(y**3+y)).evalf(subs={'x':2, 'y':4})
```
```python
Integral(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))
```
```python
I = Integral(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))
```
```python
I.doit()
```
```python
(sin(x)/(1+cos(x)))
```
```python
(sin(x)/(1+cos(x))).series(x,0,10)
```
```python
```
## Exercises
The following exercises requires the combined usage of the packages we learnt today.
1. Generate 10 random polynomials of order 5
- Numerically and analytically integrate them from 0 to 1 and compare the answers.
- Compute one minima for each polynomial and show that the analytically computed derivative is 0 at the minima
- Randomly sample the polynomials in the range from 0 to 1, and see if you can recover the original coefficents by trying to fit a 5th order polynomial to the samples.
2. Read and learn about [Erdos-Renyi Random Graphs](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model). See if you can numerically verify some of the properties mentioned in the wiki, such as for what parameter values is the graph most likely connected.
```python
```
| f82878b6e9f70c96a8b6382506586465f50e3f08 | 140,109 | ipynb | Jupyter Notebook | nb/2019_winter/Lecture_4.ipynb | samuelcheang0419/cme193 | 609e4655544292a28dbb9ca0301637b006970af2 | [
"MIT"
] | null | null | null | nb/2019_winter/Lecture_4.ipynb | samuelcheang0419/cme193 | 609e4655544292a28dbb9ca0301637b006970af2 | [
"MIT"
] | null | null | null | nb/2019_winter/Lecture_4.ipynb | samuelcheang0419/cme193 | 609e4655544292a28dbb9ca0301637b006970af2 | [
"MIT"
] | null | null | null | 119.445013 | 23,060 | 0.888644 | true | 2,511 | Qwen/Qwen-72B | 1. YES
2. YES | 0.932453 | 0.83762 | 0.781042 | __label__eng_Latn | 0.724922 | 0.652954 |
<a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/deeplearning.ai/nlp/c2_w4_model_architecture_relu_sigmoid.ipynb" target="_parent"></a>
# Word Embeddings: Intro to CBOW model, activation functions and working with Numpy
In this lecture notebook you will be given an introduction to the continuous bag-of-words model, its activation functions and some considerations when working with Numpy.
Let's dive into it!
```python
import numpy as np
```
# The continuous bag-of-words model
The CBOW model is based on a neural network, the architecture of which looks like the figure below, as you'll recall from the lecture.
## Activation functions
Let's start by implementing the activation functions, ReLU and softmax.
### ReLU
ReLU is used to calculate the values of the hidden layer, in the following formulas:
\begin{align}
\mathbf{z_1} &= \mathbf{W_1}\mathbf{x} + \mathbf{b_1} \tag{1} \\
\mathbf{h} &= \mathrm{ReLU}(\mathbf{z_1}) \tag{2} \\
\end{align}
Let's fix a value for $\mathbf{z_1}$ as a working example.
```python
np.random.seed(10)
z_1 = 10 * np.random.rand(5, 1) - 5
z_1
```
array([[ 2.71320643],
[-4.79248051],
[ 1.33648235],
[ 2.48803883],
[-0.01492988]])
Notice that using numpy's `random.rand` function returns a numpy array filled with values taken from a uniform distribution over [0, 1). Numpy allows vectorization so each value is multiplied by 10 and then substracted 5.
To get the ReLU of this vector, you want all the negative values to become zeros.
First create a copy of this vector.
```python
h = z_1.copy()
```
Now determine which of its values are negative.
```python
# Determine which values met the criteria (this is possible because of vectorization)
h < 0
```
array([[False],
[ True],
[False],
[False],
[ True]])
You can now simply set all of the values which are negative to 0.
```python
# Slice the array or vector. This is the same as applying ReLU to it
h[h < 0] = 0
```
And that's it: you have the ReLU of $\mathbf{z_1}$!
```python
# Print the vector after ReLU
h
```
array([[2.71320643],
[0. ],
[1.33648235],
[2.48803883],
[0. ]])
**Now implement ReLU as a function.**
```python
# Define the 'relu' function that will include the steps previously seen
def relu(z):
result = z.copy()
result[result < 0] = 0
return result
```
**And check that it's working.**
```python
# Define a new vector and save it in the 'z' variable
z = np.array([[-1.25459881], [ 4.50714306], [ 2.31993942], [ 0.98658484], [-3.4398136 ]])
# Apply ReLU to it
relu(z)
```
array([[0. ],
[4.50714306],
[2.31993942],
[0.98658484],
[0. ]])
Expected output:
array([[0. ],
[4.50714306],
[2.31993942],
[0.98658484],
[0. ]])
### Softmax
The second activation function that you need is softmax. This function is used to calculate the values of the output layer of the neural network, using the following formulas:
\begin{align}
\mathbf{z_2} &= \mathbf{W_2}\mathbf{h} + \mathbf{b_2} \tag{3} \\
\mathbf{\hat y} &= \mathrm{softmax}(\mathbf{z_2}) \tag{4} \\
\end{align}
To calculate softmax of a vector $\mathbf{z}$, the $i$-th component of the resulting vector is given by:
$$ \textrm{softmax}(\textbf{z})_i = \frac{e^{z_i} }{\sum\limits_{j=1}^{V} e^{z_j} } \tag{5} $$
Let's work through an example.
```python
# Define a new vector and save it in the 'z' variable
z = np.array([9, 8, 11, 10, 8.5])
# Print the vector
z
```
array([ 9. , 8. , 11. , 10. , 8.5])
You'll need to calculate the exponentials of each element, both for the numerator and for the denominator.
```python
# Save exponentials of the values in a new vector
e_z = np.exp(z)
# Print the vector with the exponential values
e_z
```
array([ 8103.08392758, 2980.95798704, 59874.1417152 , 22026.46579481,
4914.7688403 ])
The denominator is equal to the sum of these exponentials.
```python
# Save the sum of the exponentials
sum_e_z = np.sum(e_z)
# Print sum of exponentials
sum_e_z
```
97899.41826492078
And the value of the first element of $\textrm{softmax}(\textbf{z})$ is given by:
```python
# Print softmax value of the first element in the original vector
e_z[0] / sum_e_z
```
0.08276947985173956
This is for one element. You can use numpy's vectorized operations to calculate the values of all the elements of the $\textrm{softmax}(\textbf{z})$ vector in one go.
**Implement the softmax function.**
```python
# Define the 'softmax' function that will include the steps previously seen
def softmax(z):
e_z = np.exp(z)
sum_e_z = np.sum(e_z)
return e_z / sum_e_z
```
**Now check that it works.**
```python
# Print softmax values for original vector
softmax([9, 8, 11, 10, 8.5])
```
array([0.08276948, 0.03044919, 0.61158833, 0.22499077, 0.05020223])
Expected output:
array([0.08276948, 0.03044919, 0.61158833, 0.22499077, 0.05020223])
Notice that the sum of all these values is equal to 1.
```python
# Assert that the sum of the softmax values is equal to 1
sum(softmax([9, 8, 11, 10, 8.5]))
```
1.0
## Dimensions: 1-D arrays vs 2-D column vectors
Before moving on to implement forward propagation, backpropagation, and gradient descent in the next lecture notebook, let's have a look at the dimensions of the vectors you've been handling until now.
Create a vector of length $V$ filled with zeros.
```python
# Define V. Remember this was the size of the vocabulary in the previous lecture notebook
V = 5
# Define vector of length V filled with zeros
x_array = np.zeros(V)
# Print vector
x_array
```
array([0., 0., 0., 0., 0.])
This is a 1-dimensional array, as revealed by the `.shape` property of the array.
```python
# Print vector's shape
x_array.shape
```
(5,)
To perform matrix multiplication in the next steps, you actually need your column vectors to be represented as a matrix with one column. In numpy, this matrix is represented as a 2-dimensional array.
The easiest way to convert a 1D vector to a 2D column matrix is to set its `.shape` property to the number of rows and one column, as shown in the next cell.
```python
# Copy vector
x_column_vector = x_array.copy()
# Reshape copy of vector
x_column_vector.shape = (V, 1)
# Print vector
x_column_vector
```
array([[0.],
[0.],
[0.],
[0.],
[0.]])
The shape of the resulting "vector" is:
```python
# Print vector's shape
x_column_vector.shape
```
(5, 1)
So you now have a 5x1 matrix that you can use to perform standard matrix multiplication.
```python
x_expand_dim_column_vec = np.expand_dims(x_array, axis=1)
print(x_expand_dim_column_vec)
print(x_expand_dim_column_vec.shape)
```
[[0.]
[0.]
[0.]
[0.]
[0.]]
(5, 1)
```python
x_array.shape
```
(5,)
**Congratulations on finishing this lecture notebook!** Hopefully you now have a better understanding of the activation functions used in the continuous bag-of-words model, as well as a clearer idea of how to leverage Numpy's power for these types of mathematical computations.
In the next lecture notebook you will get a comprehensive dive into:
- Forward propagation.
- Cross-entropy loss.
- Backpropagation.
- Gradient descent.
**See you next time!**
| b7a3fe70edf7e4a63a0c4a98f2b2b6d4cc32fc11 | 24,317 | ipynb | Jupyter Notebook | deeplearning.ai/nlp/c2_w4_model_architecture_relu_sigmoid.ipynb | martin-fabbri/colab-notebooks | 03658a7772fbe71612e584bbc767009f78246b6b | [
"Apache-2.0"
] | 8 | 2020-01-18T18:39:49.000Z | 2022-02-17T19:32:26.000Z | deeplearning.ai/nlp/c2_w4_model_architecture_relu_sigmoid.ipynb | martin-fabbri/colab-notebooks | 03658a7772fbe71612e584bbc767009f78246b6b | [
"Apache-2.0"
] | null | null | null | deeplearning.ai/nlp/c2_w4_model_architecture_relu_sigmoid.ipynb | martin-fabbri/colab-notebooks | 03658a7772fbe71612e584bbc767009f78246b6b | [
"Apache-2.0"
] | 6 | 2020-01-18T18:40:02.000Z | 2020-09-27T09:26:38.000Z | 25.650844 | 290 | 0.436361 | true | 2,186 | Qwen/Qwen-72B | 1. YES
2. YES | 0.845942 | 0.812867 | 0.687639 | __label__eng_Latn | 0.986346 | 0.435947 |
# Tutorial rápido de Python para Matemáticos
© Ricardo Miranda Martins, 2022 - http://www.ime.unicamp.br/~rmiranda/
## Índice
1. [Introdução](1-intro.html)
2. [Python é uma boa calculadora!](2-calculadora.html) [(código fonte)](2-calculadora.ipynb)
3. [Resolvendo equações](3-resolvendo-eqs.html) [(código fonte)](3-resolvendo-eqs.ipynb)
4. [Gráficos](4-graficos.html) [(código fonte)](4-graficos.ipynb)
5. [Sistemas lineares e matrizes](5-lineares-e-matrizes.html) [(código fonte)](5-lineares-e-matrizes.ipynb)
6. [Limites, derivadas e integrais](6-limites-derivadas-integrais.html) [(código fonte)](6-limites-derivadas-integrais.ipynb)
7. **[Equações diferenciais](7-equacoes-diferenciais.html)** [(código fonte)](7-equacoes-diferenciais.ipynb)
# Equações diferenciais
Chegamos no capítulo que motivou essas notas: vamos usar o Pythonn para resolver equações diferenciais!
Começaremos resolvendo algumas equações usando o SymPy. Ele não é a melhor ferramenta para isso, pois só procura soluções "algébricas" (no sentido de poderem ser explicitadas), mas para começar a brincar, já funciona bem.
## Equações lineares de primeira e segunda ordens
Primeiro vamos resolver uma equação simples: encontrar $f(x)$ tal que $$f'(x)+f(x)=0.$$ A notação para $f'(x)$ é um pouco estranha: para representar $f(t)$, devemos digitar ```f(t).diff(t)```. Se quiser a derivada de segunda ordem, $f''(t)$, entre ```f(t).diff(t,t)``` e por aí vai.
Depois disso, é só usar o comando ```dsolve```.
```python
import sympy as sp
# definindo a variavel independente
t = sp.symbols('t')
# definindo símbolos para as funcoes que estarão envolvidas
# nas equações diferenciais.
# note que precisamos já declarar esses símbolos como sendo
# da classe "função" com a adição cls=sp.Function
f = sp.symbols('f', cls=sp.Function)
# defina a equacao diferencial. para representar f'(t)
# deve-se usar f(t).diff(t).
# para indicar f''(t) usamos f(t).diff(t,t).
# a equacao sera na forma F=0. defina eq=F.
eq=f(t).diff(t) + f(t)
# resolvendo a equacao
sp.dsolve(eq, f(t))
```
Pronto! Está aí a solução da equação diferencial, já com a constante de integração. Vamos resolver agora uma equação de segunda ordem, $$g''(t)-2g'(t)+g(t)=0.$$
```python
import sympy as sp
t = sp.symbols('t')
g = sp.symbols('g', cls=sp.Function)
eq=g(t).diff(t,t) -2*g(t).diff(t) +g(t)
sp.dsolve(eq, g(t))
```
Vamos piorar um pouco a equação diferencial e tentar resolver $$q''(t)-6q'(t)+2q(t)-t\cos(t)=0.$$
```python
import sympy as sp
t = sp.symbols('t')
q = sp.symbols('q', cls=sp.Function)
eq=q(t).diff(t,t) -6*q(t).diff(t) +2*q(t)-t*sp.cos(t)
sp.dsolve(eq, q(t))
```
Para resolver adicionando condições iniciais, podemos complementar o comando. Vamos fazer isso, só para poder plotar um gráfico - adoramos gráficos. A condição inicial $q(0)=1$, $q'(0)=1$ é entrada como a opção ```ics={q(0): 1, q(t).diff(t).subs(t, 0): 1}``` dentro do ```dsolve```.
```python
import sympy as sp
t = sp.symbols('t')
q = sp.symbols('q', cls=sp.Function)
eq=q(t).diff(t,t) -6*q(t).diff(t) +2*q(t)-t*sp.cos(t)
# encontrando a solucao e armazenando no nome "solucao"
solucao=sp.dsolve(eq, q(t), ics={q(0): 1, q(t).diff(t).subs(t, 0): 1})
sp.plot(solucao.rhs,(t,-2,2),ylim=[-5,5])
```
## Sistemas de equações diferenciais
Para resolver/plotar sistemas de equações diferenciais, até podemos usar o SymPy, mas é muito mais eficiente usar o NumPy (ou uma mistura dele com o SciPy). Parte do código abaixo foi inspirado [desse site](https://danielmuellerkomorowska.com/2021/02/11/differential-equations-in-python-with-scipy/). Vamos usar como exemplo o que é, talvez, o sistema de equações diferenciais cujo retrato de fase ilustra mais livros em todo o mundo!
O sistema de Lorenz é definido como
$$\left\{
\begin{array}{lcl}
\dot x&=&\sigma (y-x),\\
\dot y&=& x(\rho-z)-y,\\
\dot z&=&xy-\beta z,
\end{array}
\right.
$$
onde $\sigma,\rho,\beta$ são parâmetros. O sistema de Lorenz é uma simplificação de um modelo atmosférico. Os valores mais utilizados desses parâmetros, como feito pelo próprio Lorenz, são $\sigma=10$, $\beta=8/3$ e $\rho=28$.
O comando base para resolver sistemas de equações é o ```odeint```, mas ele tem alguns detalhes.
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def lorenz(state, t, sigma, beta, rho):
x, y, z = state
dx = sigma * (y - x)
dy = x * (rho - z) - y
dz = x * y - beta * z
return [dx, dy, dz]
# valores dos parametros
sigma = 10.0
beta = 8.0 / 3.0
rho = 28.0
# parametros
p = (sigma, beta, rho)
# condicao inicial
ic = [1.0, 1.0, 1.0]
# tempo inicial, tempo final, tamanho da partição.
# se você colocar uma partição muito grande, então serão
# usados menos pontos para avaliar a função, e com isso
# terá a impressão de que são vários segmentos de reta.
# experimente um pouco com isso.
t = np.arange(0.0, 60.0, 0.001)
# integrando
result = odeint(lorenz, ic, t, p)
# plotando
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot(result[:, 0], result[:, 1], result[:, 2])
fig2 = plt.figure()
plt.plot(t,result[:, 0])
plt.plot(t,result[:, 1])
plt.plot(t,result[:, 2])
```
Soluções para sistemas bidimensionais também podem ser plotadas de forma muito similar. Como antes, vamos plotar a curva solução $(x(t),y(t))$ e depois, num mesmo gráfico, $x(t)$ e $y(t)$. Nosso eleito para o exemplo bidimensional é o oscilador de van der Pol.
A equação de segunda ordem $$x''-\mu(1-x^2)x'+x=0$$ é conhecida como oscilador de van der Pol, em homenagem ao matemático holandês Balthasar van der Pol (ok, ele era engenheiro e físico, mas acho que não ficaria triste em ser chamado de matemático) que a deduziu na década de 20, enquanto trabalhava na Philips com circuitos elétricos (veja detalhes [aqui](https://en.wikipedia.org/wiki/Van_der_Pol_oscillator) na Wikipedia). Essa equação de segunda ordem pode ser escrita como um sistema de equações de primeira ordem:
$$\left\{
\begin{array}{lcl}
\dot x&=&\mu(1-y^2)x-y,\\
\dot y&=& x.\\
\end{array}
\right.
$$
É fato conhecido que o oscilador de van der Pol admite um ciclo limite (uma órbita periódica atratora) para certos valores do parâmetro $\mu$. Vejamos como isso acontece ao fazer o gráfico: a solução começa em um certo ponto inicial e depois tende a uma curva fechada. Essa curva não pode ser expressada em termos algébricos, ela é uma curva transcendente. Vamos ao código:
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# define o sistema de van der pol
def sistema(variaveis, t):
x, y = variaveis
dx = 3*(1-y**2)*x-y
dy = x
return [dx, dy]
# define a condicao inicial, onde
# ic=[ x(0), y(0)]
ic = [0.1,0.2]
# define o tempo - será usado pelo integrador e também
# pelo plot
t = np.arange(0, 40, 0.0001)
# integrando
result = odeint(sistema, ic, t)
# primeiro plot
fig = plt.figure()
ax = plt.axes()
# plotando a curva-solucao
ax.plot(result[:, 0], result[:, 1])
# agora plotando x(t) e y(t) separadamente num segundo plot
fig2=plt.figure()
plt.plot(t,result[:, 0])
plt.plot(t,result[:, 1])
```
Note como no sistema de van der Pol, o plot das soluções $x(t),y(t)$ a partir de um certo $t$ parece o de uma função periódica, o que não acontece no sistema de Lorenz. Isso é justificado pela presença de um ciclo limite atrator no sistema de van der Pol, enquanto o sistema de Lorenz apresenta comportamento caótico.
Acima usamos o ```odeint``` para encontrar a solução de forma numérica antes de plotá-la, e o plot era sempre com $t\in[0,t_0]$, com $t_0>0$. O motivo para isso é que o ```odeint``` não funciona bem com tempos negativos. Uma possibilidade para resolver isso é usar o ```solve_ivp```. A página com manual dele [pode ser vista nesse link](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html), e é bem detalhada. Vamos resolver novamente o oscilador de van der Pol, agora com o ```solve_ivp```:
```python
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
# note que agora que queremos usar o solve_ivp precisamos
# colocar o t como primeira variável, ao contrário de quando
# queríamos usar o odeint.
def sistema(t, variaveis):
x, y = variaveis
dx = 3*(1-y**2)*x-y
dy = x
return [dx, dy]
# condicao inicial
ic=[0.1,0.2]
# basicamente a sintaxe eh solve_ivp(sistema [t0,t1], ic). adicionamos
# a opcao dense_output para ficar mais facil recuperar as solucoes para
# que sejam plotadas.
solucao = solve_ivp(sistema, [-40, 40], ic,dense_output=True)
# agora discretizamos o tempo para plotar. é preciso ser compatível com
# o intervalo do tempo que foi passado para o comando solve_ivp
t = np.arange(-40, 40, 0.0001)
# a propriedade sol(t) da solucao carrega em suas colunas os valores
# de x(t) e y(t). para usar no plot, precisamos que isso esteja nas linhas,
# por isso usamos o .T no final, para calcular a transposta. isso vai plotar
# as solucoes x(t), y(t)
plt.plot(t,solucao.sol(t).T)
# agora criamos outra janela gráfica e plotamos a curva parametrizada.
# usamos [:,0] para acessar a primeira linha da matriz solucao.sol(t)
# e [:,1] para acessar a segunda linha. isso vai produzir um plot de
# curva parametrizada, como já sabemos fazer.
fig=plt.figure()
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
```
Compare os gráficos e as curvas: são bem parecidas! (Claro!!)
## Retratos de fase e campos de direções
Quando estamos estudando o comportamento de uma função $y=f(x)$, analisar o gráfico da função é um bom ponto de partida - e vimos como fazer gráficos no Python lá no começo desse tutorial.
No caso de equações diferenciais e PVIs, podemos encontrar e plotar a solução, seja na forma de curva parametrizada, seja em termos das coordenadas.
Já para sistemas de equações diferenciais, duas representações gráficas são muito eficientes:
1. o retrato de fase do sistema, que basicamente é o conjunto de todas as soluções, e pode ser representado graficamente/computacionalmente fazendo o esboço de "algumas" curvas-solução (como curvas parametrizadas), de modo a representar o comportamento gloal do sistema e
2. o campo de direções, que é uma representação gráfica do sistema de equações diferenciais (ordinárias e autônomas, claro) como o campo vetorial adjacente, ou seja, se o sistema tem a forma
$$\left\{
\begin{array}{lcl}
\dot x&=&P(x,y),\\
\dot y&=&Q(x,y)
\end{array}
\right.$$
então construímos o campo vetorial $X(x,y)=(P(x,y),Q(x,y))$ e em cada ponto $(a,b)$ do plano cartesiano colocamos o vetor $X(a,b)$. Como o campo vetorial $X$ é tangente às soluções da equação diferencial, isso nos dará uma boa ideia sobre o comportamento qualitativo das soluções da equação.
Para o retrato de fase, já sabemos o que fazer: basta colocar, num mesmo sistema de eixos, várias soluções (ou seja, soluções passando por várias condições iniciais). Isso pode ser feito com um ```for``` para pegar várias condições iniciais e ir plotando, como fazemos no comando abaixo. Confira [esse site aqui](http://www.doc.mmu.ac.uk/STAFF/S.Lynch/DSAP_Jupyter_Notebook.html) para mais alguns exemplos.
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
fig = plt.figure(num=1)
ax=fig.add_subplot(111)
def sistema(t, variaveis):
x, y = variaveis
dx = (1-y**2)*x-y
dy = x
return [dx, dy]
# trajetórias em tempo positivo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,10], [P,Q],dense_output=True)
t = np.linspace(0, 10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# trajetórias em tempo negativo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,-10], [P,Q],dense_output=True)
t = np.linspace(0, -10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# limita a janela de visualização e mostra o plot
plt.xlim(-5,5)
plt.ylim(-5,5)
plt.show()
```
Vamos a mais dois exemplo, agora considerando um sistema de equações lineares: primeiro o sistema
$$\left\{
\begin{array}{lcl}
\dot x&=&-y,\\
\dot y&=&x,
\end{array}
\right.
$$ que só tem soluções periódicas, e depois o sistema
e depois o sistema
$$\left\{
\begin{array}{lcl}
\dot x&=&y,\\
\dot y&=&x,
\end{array}
\right.
$$ que tem um ponto de sela.
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
fig = plt.figure(num=1)
ax=fig.add_subplot(111)
# um sistema que só tem órbitas periódicas
def sistema(t, variaveis):
x, y = variaveis
dx = -y
dy = x
return [dx, dy]
# trajetórias em tempo positivo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,10], [P,Q],dense_output=True)
t = np.linspace(0, 10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# trajetórias em tempo negativo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,-10], [P,Q],dense_output=True)
t = np.linspace(0, -10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# limita a janela de visualização e mostra o plot
plt.xlim(-5,5)
plt.ylim(-5,5)
plt.show()
```
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from scipy.integrate import solve_ivp
fig = plt.figure(num=1)
ax=fig.add_subplot(111)
# um sistema que só tem órbitas periódicas
def sistema(t, variaveis):
x, y = variaveis
dx = y
dy = x
return [dx, dy]
# trajetórias em tempo positivo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,10], [P,Q],dense_output=True)
t = np.linspace(0, 10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# trajetórias em tempo negativo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,-10], [P,Q],dense_output=True)
t = np.linspace(0, -10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# limita a janela de visualização e mostra o plot
plt.xlim(-5,5)
plt.ylim(-5,5)
plt.show()
```
Por fim, um exemplo com MUITAS órbitas traçadas, e que possui tanto uma órbita homoclínica quanto algumas órbitas periódicas.
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
fig = plt.figure(num=1)
ax=fig.add_subplot(111)
# um sistema hamiltoniano com órbitas periódicas e uma órbita homoclínica
def sistema(t, variaveis):
x, y = variaveis
dx = -y
dy = -x-x**2
return [dx, dy]
# trajetórias em tempo positivo
for P in np.linspace(-2, 2, 20):
for Q in np.linspace(-2, 2, 20):
solucao = solve_ivp(sistema, [0,10], [P,Q],dense_output=True)
t = np.linspace(0, 10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# trajetórias em tempo negativo
for P in np.linspace(-2, 2, 20):
for Q in np.linspace(-2, 2, 20):
solucao = solve_ivp(sistema, [0,-10], [P,Q],dense_output=True)
t = np.linspace(0, -10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# limita a janela de visualização e mostra o plot
plt.xlim(-2,2)
plt.ylim(-2,2)
plt.show()
```
Agora que já sabemos plotar retratos de fase, vamos plotar campos de direções, e depois os dois juntos. Vamos usar como base a sela linear. O comando para plotar o campo vetorial é o ```quiver```, dentro do matplotlib. A sintaxe é bem simples.
```python
import matplotlib.pyplot as plt
import numpy as np
# criamos uma malha em R2, com x e y indo de -5 a 5, contendo
# um total de 10 pontos em cada coordenada
x,y = np.meshgrid( np.linspace(-5,5,10),np.linspace(-5,5,10) )
# calculamos o campo vetorial X(x,y)=(y,x) na malha
u = y
v = x
#N = np.sqrt(u**2+v**2)
#U2, V2 = u/N, v/N
# plotando
plt.quiver( x,y,u, v)
```
Em alguns casos, pode ser uma boa estratégia normalizar o campo vetorial para que a figura fique melhor. Isso obviamente deforma o campo vetorial, mas para visualização inicial, pode ser útil:
```python
import matplotlib.pyplot as plt
import numpy as np
# criamos uma malha em R2, com x e y indo de -5 a 5, contendo
# um total de 10 pontos em cada coordenada
x,y = np.meshgrid( np.linspace(-5,5,10),np.linspace(-5,5,10) )
# calculamos o campo vetorial X(x,y)=(y,x) na malha
u = y
v = x
# normalizando o campo
N = np.sqrt(u**2+v**2)
U, V = u/N, v/N
# plotando
plt.quiver( x,y,U, V)
```
Agora nosso *gran finale*: vamos plotar o campo de direções e o retrato de fase num só gráfico. Isso nos dá muita informação sobre o sistema dinâmico.
O código abaixo é o melhor que eu sei fazer, mas provavelmente deve existir uma forma mais eficiente: em particular, eu preciso definir separadamente tanto o sistema de equações diferenciais quanto o campo vetorial e isso não é a forma mais inteligente de fazer isso.
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
fig = plt.figure(num=1)
ax=fig.add_subplot(111)
# um sistema que só tem órbitas periódicas
def sistema(t, variaveis):
x, y = variaveis
dx = y
dy = x
return [dx, dy]
# calculando e plotando trajetórias em tempo positivo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,10], [P,Q],dense_output=True)
t = np.linspace(0, 10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# calculando e plotando trajetórias em tempo negativo
for P in range(-5,5,1):
for Q in range(-5,5,1):
solucao = solve_ivp(sistema, [0,-10], [P,Q],dense_output=True)
t = np.linspace(0, -10, 500)
plt.plot(solucao.sol(t).T[:, 0],solucao.sol(t).T[:, 1])
# grid para os vetores
x,y = np.meshgrid( np.linspace(-5,5,20),np.linspace(-5,5,20) )
# calculando o campo vetorial X(x,y)=(y,x) na malha
u = y
v = x
# normalizando o campo (opcional)
N = np.sqrt(u**2+v**2)
u, v = u/N, v/N
# plotando o campo de direcoes
plt.quiver( x,y,u, v)
# limita a janela de visualização e mostra o plot
plt.xlim(-5,5)
plt.ylim(-5,5)
plt.show()
```
## EDPs
Se obter de forma explícita soluções para EDOs não é uma coisa simples, para EDPs se torna uma tarefa impossível. Então já partiremos para soluções numéricas desde o começo. Vamos estudar como obter a solução da equação do calor, e nosso objetivo será obter uma representação gráfica animada da evolução da equação (ou seja, como o "calor" se espalha numa região).
Os códigos abaixo foram adaptados dos códigos originais [daqui](https://levelup.gitconnected.com/solving-2d-heat-equation-numerically-using-python-3334004aa01a) e [daqui](http://firsttimeprogrammer.blogspot.com/2015/07/the-heat-equation-python-implementation.html).
Vamos começar com a equação do calor em dimensão 1. A ideia é simples: considere uma barra de ferro (ou outro material) que está fixada pelos dois extremos. Suponha que existe uma distribuição inicial de temperatura. Com o passar do tempo, como a temperatura evolui?
A solução do código abaixo é um pouco fake: nós já pegamos uma animação da solução com o passar do tempo e plotamos.
```python
%matplotlib inline
%matplotlib tk
import numpy as np
from numpy import pi
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig = plt.figure()
fig.set_dpi(100)
ax1 = fig.add_subplot(1,1,1)
#Diffusion constant
k = 2
#Scaling factor (for visualisation purposes)
scale = 5
#Length of the rod (0,L) on the x axis
L = pi
#Initial contitions u(0,t) = u(L,t) = 0. Temperature at x=0 and x=L is fixed
x0 = np.linspace(0,L+1,10000)
t0 = 0
temp0 = 5 #Temperature of the rod at rest (before heating)
#Increment
dt = 0.01
# Solução da equação do calor
def u(x,t):
return temp0 + scale*np.exp(-k*t)*np.sin(x)
#Gradient of u
def grad_u(x,t):
#du/dx #du/dt
return scale*np.array([np.exp(-k*t)*np.cos(x),-k*np.exp(-k*t)*np.sin(x)])
a = []
t = []
for i in range(500):
value = u(x0,t0) + grad_u(x0,t0)[1]*dt
t.append(t0)
t0 = t0 + dt
a.append(value)
k = 0
def animate(i): #The plot shows the temperature evolving with time
global k #at each point x in the rod
x = a[k] #The ends of the rod are kept at temperature temp0
k += 1 #The rod is heated in one spot, then it cools down
ax1.clear()
plt.plot(x0,x,color='red',label='Temperatura em cada ponto x')
plt.plot(0,0,color='red',label='Tempo decorrido '+str(round(t[k],2)))
plt.grid(True)
plt.ylim([temp0-2,2.5*scale])
plt.xlim([0,L])
plt.title('Evolução da equação do calor')
plt.legend()
anim = animation.FuncAnimation(fig,animate,frames=360,interval=10)
plt.show()
```
É muito mais complicado resolver a equação do calor em umd domínio bidimensional. O código abaixo resolve numericamente a equação e plota o resultado numa animação.
```python
%matplotlib inline
%matplotlib tk
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib.animation import FuncAnimation
print("2D heat equation solver")
plate_length = 50
max_iter_time = 750
alpha = 2
delta_x = 1
delta_t = (delta_x ** 2)/(4 * alpha)
gamma = (alpha * delta_t) / (delta_x ** 2)
# Initialize solution: the grid of u(k, i, j)
u = np.empty((max_iter_time, plate_length, plate_length))
# Initial condition everywhere inside the grid
u_initial = 0
# Boundary conditions
u_top = 100.0
u_left = 0.0
u_bottom = 0.0
u_right = 0.0
# Set the initial condition
u.fill(u_initial)
# Set the boundary conditions
u[:, (plate_length-1):, :] = u_top
u[:, :, :1] = u_left
u[:, :1, 1:] = u_bottom
u[:, :, (plate_length-1):] = u_right
def calculate(u):
for k in range(0, max_iter_time-1, 1):
for i in range(1, plate_length-1, delta_x):
for j in range(1, plate_length-1, delta_x):
u[k + 1, i, j] = gamma * (u[k][i+1][j] + u[k][i-1][j] + u[k][i][j+1] + u[k][i][j-1] - 4*u[k][i][j]) + u[k][i][j]
return u
def plotheatmap(u_k, k):
plt.clf()
plt.title(f"Gráfico da temperatura no instante t = {k*delta_t:.3f} unit time")
plt.xlabel("x")
plt.ylabel("y")
# This is to plot u_k (u at time-step k)
plt.pcolormesh(u_k, cmap=plt.cm.jet, vmin=0, vmax=100)
plt.colorbar()
return plt
# Do the calculation here
u = calculate(u)
def animate(k):
plotheatmap(u[k], k)
anim = animation.FuncAnimation(plt.figure(), animate, interval=1, frames=max_iter_time, repeat=False)
```
| 63ffdb699bf6081f001ccf137f991eb90f4f756c | 32,190 | ipynb | Jupyter Notebook | 7-equacoes-diferenciais.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
] | null | null | null | 7-equacoes-diferenciais.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
] | null | null | null | 7-equacoes-diferenciais.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
] | null | null | null | 35.687361 | 530 | 0.567443 | true | 7,270 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.890294 | 0.768673 | __label__por_Latn | 0.988519 | 0.624216 |
# An Introduction to Bayesian Statistical Analysis
Before we jump in to model-building and using MCMC to do wonderful things, it is useful to understand a few of the theoretical underpinnings of the Bayesian statistical paradigm. A little theory (and I do mean a *little*) goes a long way towards being able to apply the methods correctly and effectively.
## What *is* Bayesian Statistical Analysis?
Though many of you will have taken a statistics course or two during your undergraduate (or graduate) education, most of those who have will likely not have had a course in *Bayesian* statistics. Most introductory courses, particularly for non-statisticians, still do not cover Bayesian methods at all, except perhaps to derive Bayes' formula as a trivial rearrangement of the definition of conditional probability. Even today, Bayesian courses are typically tacked onto the curriculum, rather than being integrated into the program.
In fact, Bayesian statistics is not just a particular method, or even a class of methods; it is an entirely different paradigm for doing statistical analysis.
> Practical methods for making inferences from data using probability models for quantities we observe and about which we wish to learn.
*-- Gelman et al. 2013*
A Bayesian model is described by parameters, uncertainty in those parameters is described using probability distributions.
All conclusions from Bayesian statistical procedures are stated in terms of *probability statements*
This confers several benefits to the analyst, including:
- ease of interpretation, summarization of uncertainty
- can incorporate uncertainty in parent parameters
- easy to calculate summary statistics
## Bayesian vs Frequentist Statistics: What's the difference?
Any statistical paradigm, Bayesian or otherwise, involves at least the following:
1. Some **unknown quantities** about which we are interested in learning or testing. We call these *parameters*.
2. Some **data** which have been observed, and hopefully contain information about (1).
3. One or more **models** that relate the data to the parameters, and is the instrument that is used to learn.
### The Frequentist World View
- The data that have been observed are considered **random**, because they are realizations of random processes, and hence will vary each time one goes to observe the system.
- Model parameters are considered **fixed**. The parameters' values are unknown, but they are fixed, and so we *condition* on them.
In mathematical notation, this implies a (very) general model of the following form:
<div style="font-size:35px">
\\[f(y | \theta)\\]
</div>
Here, the model \\(f\\) accepts data values \\(y\\) as an argument, conditional on particular values of \\(\theta\\).
Frequentist inference typically involves deriving **estimators** for the unknown parameters. Estimators are formulae that return estimates for particular estimands, as a function of data. They are selected based on some chosen optimality criterion, such as *unbiasedness*, *variance minimization*, or *efficiency*.
> For example, lets say that we have collected some data on the prevalence of autism spectrum disorder (ASD) in some defined population. Our sample includes \\(n\\) sampled children, \\(y\\) of them having been diagnosed with autism. A frequentist estimator of the prevalence \\(p\\) is:
> <div style="font-size:25px">
> \\[\hat{p} = \frac{y}{n}\\]
> </div>
> Why this particular function? Because it can be shown to be unbiased and minimum-variance.
It is important to note that new estimators need to be derived for every estimand that is introduced.
### The Bayesian World View
- Data are considered **fixed**. They used to be random, but once they were written into your lab notebook/spreadsheet/IPython notebook they do not change.
- Model parameters themselves may not be random, but Bayesians use probability distribtutions to describe their uncertainty in parameter values, and are therefore treated as **random**. In some cases, it is useful to consider parameters as having been sampled from probability distributions.
This implies the following form:
<div style="font-size:35px">
\\[p(\theta | y)\\]
</div>
This formulation used to be referred to as ***inverse probability***, because it infers from observations to parameters, or from effects to causes.
Bayesians do not seek new estimators for every estimation problem they encounter. There is only one estimator for Bayesian inference: **Bayes' Formula**.
# Computational Methods in Bayesian Analysis
The process of conducting Bayesian inference can be broken down into three general steps (Gelman *et al.* 2013):
### Step 1: Specify a probability model
As was noted above, Bayesian statistics involves using probability models to solve problems. So, the first task is to *completely specify* the model in terms of probability distributions. This includes everything: unknown parameters, data, covariates, missing data, predictions. All must be assigned some probability density.
This step involves making choices.
- what is the form of the sampling distribution of the data?
- what form best describes our uncertainty in the unknown parameters?
### Step 2: Calculate a posterior distribution
The mathematical form \\(p(\theta | y)\\) that we associated with the Bayesian approach is referred to as a **posterior distribution**.
> posterior /pos·ter·i·or/ (pos-tēr´e-er) later in time; subsequent.
Why posterior? Because it tells us what we know about the unknown \\(\theta\\) *after* having observed \\(y\\).
This posterior distribution is formulated as a function of the probability model that was specified in Step 1. Usually, we can write it down but we cannot calculate it analytically. In fact, the difficulty inherent in calculating the posterior distribution for most models of interest is perhaps the major contributing factor for the lack of widespread adoption of Bayesian methods for data analysis. Various strategies for doing so comprise this tutorial.
**But**, once the posterior distribution is calculated, you get a lot for free:
- point estimates
- credible intervals
- quantiles
- predictions
### Step 3: Check your model
Though frequently ignored in practice, it is critical that the model and its outputs be assessed before using the outputs for inference. Models are specified based on assumptions that are largely unverifiable, so the least we can do is examine the output in detail, relative to the specified model and the data that were used to fit the model.
Specifically, we must ask:
- does the model fit data?
- are the conclusions reasonable?
- are the outputs sensitive to changes in model structure?
## Example: binomial calculation
Binomial model is suitable for data that are generated from a sequence of exchangeable Bernoulli trials. These data can be summarized by $y$, the number of times the event of interest occurs, and $n$, the total number of trials. The model parameter is the expected proportion of trials that an event occurs.
$$p(Y|\theta) = \frac{n!}{y! (n-y)!} \theta^{y} (1-\theta)^{n-y}$$
where $y \in \{0, 1, \ldots, n\}$ and $p \in [0, 1]$.
To perform Bayesian inference, we require the specification of a prior distribution. A reasonable choice is a uniform prior on [0,1] which has two implications:
1. makes all probability values equally probable *a priori*
2. makes calculation of the posterior easy
The second task in performing Bayesian inference is, given a fully-specified model, to calculate a posterior distribution. As we have specified the model, we can calculate a posterior distribution up to a proportionality constant (that is, a probability distribution that is **unnormalized**):
$$P(\theta | n, y) \propto P(y | n, \theta) P(\theta) = \theta^y (1-\theta)^{n-y}$$
We can present different posterior distributions as a function of different realized data.
We can also calculate posterior estimates for $\theta$ by maximizing the unnormalized posterior using optimization.
### Exercise: posterior estimation
Write a function that returns posterior estimates of a binomial sampling model using a uniform prior on the unknown probability. Plot the posterior densities for each of the following datasets:
1. n=5, y=3
2. n=20, y=12
3. n=100, y=60
4. n=750, y=450
what type of distribution do these plots look like?
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
n = [5, 20, 100, 750]
y = [3, 12, 60, 450]
```
```python
# Write your answer here
```
# Evaluating Hypotheses with Bayes
Statistical inference is a process of learning from incomplete or imperfect (error-contaminated) data. Can account for this "imperfection" using either a sampling model or a measurement error model.
### Statistical hypothesis testing
The *de facto* standard for statistical inference is statistical hypothesis testing. The goal of hypothesis testing is to evaluate a **null hypothesis**. There are two possible outcomes:
- reject the null hypothesis
- fail to reject the null hypothesis
Rejection occurs when a chosen test statistic is higher than some pre-specified threshold valuel; non-rejection occurs otherwise.
Notice that neither outcome says anything about the quantity of interest, the **research hypothesis**.
Setting up a statistical test involves several subjective choices by the user that are rarely justified based on the problem or decision at hand:
- statistical test to use
- null hypothesis to test
- significance level
Choices are often based on arbitrary criteria, including "statistical tradition" (Johnson 1999). The resulting evidence is indirect, incomplete, and typically overstates the evidence against the null hypothesis (Goodman 1999).
Most importantly to applied users, the results of statistical hypothesis tests are very easy to misinterpret.
### Estimation
Instead of testing, a more informative and effective approach for inference is based on **estimation** (be it frequentist or Bayesian). That is, rather than testing whether two groups are different, we instead pursue an estimate of *how different* they are, which is fundamentally more informative.
Additionally, we include an estimate of **uncertainty** associated with that difference which includes uncertainty due to our lack of knowledge of the model parameters (*epistemic uncertainty*) and uncertainty due to the inherent stochasticity of the system (*aleatory uncertainty*).
## One Group
Before we compare two groups using Bayesian analysis, let's start with an even simpler scenario: statistical inference for one group.
For this we will use Gelman et al.'s (2007) radon dataset. In this dataset the amount of the radioactive gas radon has been measured among different households in all counties of several states. Radon gas is known to be the highest cause of lung cancer in non-smokers. It is believed to be more strongly present in households containing a basement and to differ in amount present among types of soil.
> the US EPA has set an action level of 4 pCi/L. At or above this level of radon, the EPA recommends you take corrective measures to reduce your exposure to radon gas.
Let's import the dataset:
```python
import pandas as pd
import seaborn as sns
sns.set_context('notebook')
RANDOM_SEED = 20090425
```
```python
radon = pd.read_csv('../data/radon.csv', index_col=0)
radon.head()
```
Let's focus on the (log) radon levels measured in a single county (Hennepin).
Suppose we are interested in:
- whether the mean log-radon value is greater than 4 pCi/L in Hennepin county
- the probability that any randomly-chosen household in Hennepin county has a reading of greater than 4
```python
hennepin_radon = radon.query('county=="HENNEPIN"').log_radon
sns.distplot(hennepin_radon);
```
### The model
Recall that the first step in Bayesian inference is specifying a **full probability model** for the problem.
This consists of:
- a likelihood function(s) for the observations
- priors for all unknown quantities
The measurements look approximately normal, so let's start by assuming a normal distribution as the sampling distribution (likelihood) for the data.
$$y_i \sim N(\mu, \sigma^2)$$
(don't worry, we can evaluate this assumption)
This implies that we have 2 unknowns in the model; the mean and standard deviation of the distribution.
#### Prior choice
How do we choose distributions to use as priors for these parameters?
There are several considerations:
- discrete vs continuous values
- the support of the variable
- the available prior information
While there may likely be prior information about the distribution of radon values, we will assume no prior knowledge, and specify a **diffuse** prior for each parameter.
Since the mean can take any real value (since it is on the log scale), we will use another normal distribution here, and specify a large variance to allow the possibility of very large or very small values:
$$\mu \sim N(0, 10^2)$$
For the standard deviation, we know that the true value must be positive (no negative variances!). I will choose a uniform prior bounded from below at zero and from above at a value that is sure to be higher than any plausible value the true standard deviation (on the log scale) could take.
$$\sigma \sim U(0, 10)$$
We can encode these in a Python model, using the PyMC3 package, as follows:
```python
from pymc3 import Model, Normal, Uniform
with Model() as radon_model:
μ = Normal('μ', mu=0, sd=5)
σ = Uniform('σ', 0, 5)
```
> ## Software
> Today there is an array of software choices for Bayesians, including both open source software (*e.g.*, Stan, PyMC, JAGS, emcee) and commercial (*e.g.*, SAS, Stata). These examples can be replicated in any of these environments.
All that remains is to add the likelihood, which takes $\mu$ and $\sigma$ as parameters, and the log-radon values as the set of observations:
```python
with radon_model:
dist = Normal('dist', mu=μ, sd=σ, observed=hennepin_radon)
```
Before we go ahead and estimate the model paramters from the data, its a good idea to perform a **prior predictive check**. This involves sampling from the model before data are incorporated, and gives you an idea of the range of observations that would be considered reasonable within the scope of the modeling assumptions (including choice of priors). If the simnulations generate too many extreme observations relative to our expectations based on domain knowledge, then it can be an indication of problems with model formulation.
```python
from pymc3 import sample_prior_predictive
with radon_model:
prior_sample = sample_prior_predictive(1000)
```
```python
plt.hist(prior_sample['dist'].ravel(), bins=30);
```
```python
plt.hist(radon.log_radon, bins=30);
```
Now, we will fit the model using **Markov chain Monte Carlo (MCMC)**, which will be covered in detail in an upcoming section. This will draw samples from the posterior distribution (which cannot be calculated exactly).
```python
from pymc3 import sample
with radon_model:
samples = sample(1000, tune=1000, cores=2, random_seed=RANDOM_SEED)
```
```python
from arviz import plot_posterior
plot_posterior(samples, var_names=['μ'], ref_val=np.log(4), credible_interval=0.95, kind='kde');
```
The plot shows the posterior distribution of $\mu$, along with an estimate of the 95% posterior **credible interval**.
The output
85% < 1.38629 < 15%
informs us that the probability of $\mu$ being less than log(4) is 85% and the corresponding probability of being greater than log(4) is 15%.
> The posterior probability that the mean level of household radon in Henneprin County is greater than 4 pCi/L is 0.15.
### Prediction
What is the probability that a given household has a log-radon measurement larger than one? To answer this, we make use of the **posterior predictive distribution**.
$$p(z |y) = \int_{\theta} p(z |\theta) p(\theta | y) d\theta$$
where here $z$ is the predicted value and y is the data used to fit the model.
We can estimate this from the posterior samples of the parameters in the model.
```python
mus = samples['μ']
sigmas = samples['σ']
```
```python
radon_samples = Normal.dist(mus, sigmas).random()
```
```python
(radon_samples > np.log(4)).mean()
```
> The posterior probability that a randomly-selected household in Henneprin County contains radon levels in excess of 4 pCi/L is about 0.46.
### Model checking
But, ***how do we know this model is any good?***
Its important to check the fit of the model, to see if its assumptions are reasonable. One way to do this is to perform **posterior predictive checks**. This involves generating simulated data using the model that you built, and comparing that data to the observed data.
One can choose a particular statistic to compare, such as tail probabilities or quartiles, but here it is useful to compare them graphically.
We already have these simulations from the previous exercise!
```python
sns.distplot(radon_samples, label='simulated')
sns.distplot(hennepin_radon, label='observed')
plt.legend()
```
### Prior sensitivity
Its also important to check the sensitivity of your choice of priors to the resulting inference.
Here is the same model, but with drastically different (though still uninformative) priors specified:
```python
from pymc3 import Flat, HalfCauchy
with Model() as prior_sensitivity:
μ = Flat('μ')
σ = HalfCauchy('σ', 5)
dist = Normal('dist', mu=μ, sd=σ, observed=hennepin_radon)
sensitivity_samples = sample(1000, tune=1000)
```
```python
plot_posterior(sensitivity_samples, var_names=['μ'], ref_val=np.log(4));
```
Here is the original model for comparison:
```python
plot_posterior(samples, var_names=['μ'], ref_val=np.log(4));
```
## Two Groups with Continiuous Outcome
To illustrate how this Bayesian estimation approach works in practice, we will use a fictitious example from Kruschke (2012) concerning the evaluation of a clinical trial for drug evaluation. The trial aims to evaluate the efficacy of a "smart drug" that is supposed to increase intelligence by comparing IQ scores of individuals in a treatment arm (those receiving the drug) to those in a control arm (those recieving a placebo). There are 47 individuals and 42 individuals in the treatment and control arms, respectively.
```python
drug = pd.DataFrame(dict(iq=(101,100,102,104,102,97,105,105,98,101,100,123,105,103,100,95,102,106,
109,102,82,102,100,102,102,101,102,102,103,103,97,97,103,101,97,104,
96,103,124,101,101,100,101,101,104,100,101),
group='drug'))
placebo = pd.DataFrame(dict(iq=(99,101,100,101,102,100,97,101,104,101,102,102,100,105,88,101,100,
104,100,100,100,101,102,103,97,101,101,100,101,99,101,100,100,
101,100,99,101,100,102,99,100,99),
group='placebo'))
trial_data = pd.concat([drug, placebo], ignore_index=True)
trial_data.hist('iq', by='group');
```
Since there appear to be extreme ("outlier") values in the data, we will choose a Student-t distribution to describe the distributions of the scores in each group. This sampling distribution adds **robustness** to the analysis, as a T distribution is less sensitive to outlier observations, relative to a normal distribution.
The three-parameter Student-t distribution allows for the specification of a mean $\mu$, a precision (inverse-variance) $\lambda$ and a degrees-of-freedom parameter $\nu$:
$$f(x|\mu,\lambda,\nu) = \frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})} \left(\frac{\lambda}{\pi\nu}\right)^{\frac{1}{2}} \left[1+\frac{\lambda(x-\mu)^2}{\nu}\right]^{-\frac{\nu+1}{2}}$$
the degrees-of-freedom parameter essentially specifies the "normality" of the data, since larger values of $\nu$ make the distribution converge to a normal distribution, while small values (close to zero) result in heavier tails.
Thus, the likelihood functions of our model are specified as follows:
$$\begin{align}
y^{(drug)}_i &\sim T(\nu, \mu_1, \sigma_1) \\
y^{(placebo)}_i &\sim T(\nu, \mu_2, \sigma_2)
\end{align}$$
As a simplifying assumption, we will assume that the degree of normality $\nu$ is the same for both groups.
### Prior choice
Since the means are real-valued, we will apply normal priors. Since we know something about the population distribution of IQ values, we will center the priors at 100, and use a standard deviation that is more than wide enough to account for plausible deviations from this population mean:
$$\mu_k \sim N(100, 10^2)$$
```python
with Model() as drug_model:
μ_0 = Normal('μ_0', 100, sd=10)
μ_1 = Normal('μ_1', 100, sd=10)
```
Similarly, we will use a uniform prior for the standard deviations, with an upper bound of 20.
```python
with drug_model:
σ_0 = Uniform('σ_0', lower=0, upper=20)
σ_1 = Uniform('σ_1', lower=0, upper=20)
```
For the degrees-of-freedom parameter $\nu$, we will use an **exponential** distribution with a mean of 30; this allocates high prior probability over the regions of the parameter that describe the range from normal to heavy-tailed data under the Student-T distribution.
```python
from pymc3 import Exponential
with drug_model:
ν = Exponential('ν_minus_one', 1/29.) + 1
```
```python
sns.distplot(Exponential.dist(1/29).random(size=10000), kde=False);
```
```python
from pymc3 import StudentT
with drug_model:
drug_like = StudentT('drug_like', nu=ν, mu=μ_1, lam=σ_1**-2, observed=drug.iq)
placebo_like = StudentT('placebo_like', nu=ν, mu=μ_0, lam=σ_0**-2, observed=placebo.iq)
```
Now that the model is fully specified, we can turn our attention to tracking the posterior quantities of interest. Namely, we can calculate the difference in means between the drug and placebo groups.
As a joint measure of the groups, we will also estimate the "effect size", which is the difference in means scaled by the pooled estimates of standard deviation. This quantity can be harder to interpret, since it is no longer in the same units as our data, but it is a function of all four estimated parameters.
```python
from pymc3 import Deterministic
with drug_model:
diff_of_means = Deterministic('difference of means', μ_1 - μ_0)
effect_size = Deterministic('effect size',
diff_of_means / np.sqrt((σ_1**2 + σ_0**2) / 2))
```
```python
with drug_model:
drug_trace = sample(1000, random_seed=RANDOM_SEED)
```
```python
plot_posterior(drug_trace[100:], var_names=['μ_0', 'μ_1', 'σ_0', 'σ_1', 'ν_minus_one']);
```
```python
plot_posterior(drug_trace[100:],
var_names=['difference of means', 'effect size'],
ref_val=0);
```
> The posterior probability that the mean IQ of subjects in the treatment group is greater than that of the control group is 0.99.
## Exercise: Two Groups with Binary Outcome
Now that we have seen how to generalize normally-distributed data to another distribution, try our hand with another data type. Binary outcomes are common in clinical research:
- survival/death
- true/false
- presence/absence
- positive/negative
In practice, binary outcomes are encoded as ones (for event occurrences) and zeros (for non-occurrence). A single binary variable is distributed as a **Bernoulli** random variable:
$$f(x \mid p) = p^{x} (1-p)^{1-x}$$
In terms of inference, we are typically interested in whether $p$ is larger or smaller in one group relative to another.
To demonstrate the comparison of two groups with binary outcomes using Bayesian inference, we will use a sample pediatric dataset. Data on 671 infants with very low (<1600 grams) birth weight from 1981-87 were collected at Duke University Medical Center. Of interest is the relationship between the outcome intra-ventricular hemorrhage (IVH) and predictor such as birth weight, gestational age, presence of pneumothorax and mode of delivery.
```python
vlbw = pd.read_csv('../data/vlbw.csv', index_col=0).dropna(axis=0, subset=['ivh', 'pneumo'])
vlbw.head()
```
To demonstrate binary data analysis, we will try to estimate the difference between the probability of an intra-ventricular hemorrhage for infants with and without a pneumothorax.
```python
pd.crosstab(vlbw.ivh, vlbw.pneumo)
```
We will create a binary outcome by combining `definite` and `possible` into a single outcome.
```python
ivh = vlbw.ivh.isin(['definite', 'possible']).astype(int).values
x = vlbw.pneumo.astype(int).values
```
Fit a model that evaluates the association of a pneumothorax with the probability of IVH.
```python
# Write your answer here
```
---
# References
Gelman, Andrew, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. 2013. Bayesian Data Analysis, Third Edition. CRC Press.
Pilon, Cam-Davidson. [Probabilistic Programming and Bayesian Methods for Hackers](http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/)
| 11612d57cb023c1cad234e817257943816dba2a6 | 39,388 | ipynb | Jupyter Notebook | notebooks/Section1_1-Basic_Bayes.ipynb | AllenDowney/Bayes_Computing_Course | 9008ac3d0c25fb84a78bbc385eb73a680080c49c | [
"MIT"
] | 3 | 2020-08-24T16:26:02.000Z | 2020-10-16T21:43:45.000Z | notebooks/Section1_1-Basic_Bayes.ipynb | volpatto/Bayes_Computing_Course | f58f7655b366979ead4f15d73096025ee4e4ef70 | [
"MIT"
] | null | null | null | notebooks/Section1_1-Basic_Bayes.ipynb | volpatto/Bayes_Computing_Course | f58f7655b366979ead4f15d73096025ee4e4ef70 | [
"MIT"
] | 2 | 2020-10-11T08:53:45.000Z | 2022-01-03T08:49:00.000Z | 34.250435 | 542 | 0.617244 | true | 5,948 | Qwen/Qwen-72B | 1. YES
2. YES | 0.800692 | 0.880797 | 0.705247 | __label__eng_Latn | 0.997339 | 0.476857 |
<b>Construir o gráfico e encontrar o foco e uma equação da diretriz.</b>
<b>3. $y^2 = -8x$</b>
$2p = -8$,<b>logo</b><br><br>
$p = -4$<br><br><br>
<b>Calculando o foco</b><br><br>
$F = \frac{p}{2}$<br><br>
$F = \frac{-4}{2}$<br><br>
$F = -2$<br><br>
$F(-2,0)$<br><br><br>
<b>Calculando a diretriz</b><br><br>
$d = -\frac{p}{2}$<br><br>
$d = -(-2)$<br><br>
$d : x = 2$<br><br>
$V(0,0)$<br><br>
$F(-2,0)$
<b>Gráfico da parábola</b>
```python
from sympy import *
from sympy.plotting import plot_implicit
x, y = symbols("x y")
plot_implicit(Eq((y-0)**2, -8*(x+0)), (x,-5,10), (y,-10,10),
title=u'Gráfico da parábola', xlabel='x', ylabel='y');
```
| adada994be2375c2b4933c04f9d39674f8df1366 | 14,302 | ipynb | Jupyter Notebook | Problemas Propostos. Pag. 172 - 175/03.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | 1 | 2020-02-03T16:40:45.000Z | 2020-02-03T16:40:45.000Z | Problemas Propostos. Pag. 172 - 175/03.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | null | null | null | Problemas Propostos. Pag. 172 - 175/03.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | null | null | null | 158.911111 | 12,456 | 0.896378 | true | 303 | Qwen/Qwen-72B | 1. YES
2. YES | 0.908618 | 0.774583 | 0.7038 | __label__por_Latn | 0.495126 | 0.473495 |
## Markov Networks
author: Jacob Schreiber <br>
contact: jmschreiber91@gmail.com
Markov networks are probabilistic models that are usually represented as an undirected graph, where the nodes represent variables and the edges represent associations. Markov networks are similar to Bayesian networks with the primary difference being that Bayesian networks can be represented as directed graphs with known parental relations. Generally, Bayesian networks are easier to interpret and can be used to calculate probabilities faster but, naturally, require that causality is known. However, in many settings, one can only know the associations between variables and not necessarily the direction of causality.
The underlying implementation of inference in pomegranate for both Markov networks and Bayesian networks is the same, because both get converted to their factor graph representations. However, there are many important differences between the two models that should be considered before choosing one.
In this tutorial we will go over how to use Markov networks in pomegranate and what some of the current limitations of the implementation are.
```python
%matplotlib inline
import numpy
import itertools
from pomegranate import *
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
```
Mon Dec 02 2019
numpy 1.17.2
scipy 1.3.1
pomegranate 0.11.3
compiler : GCC 7.3.0
system : Linux
release : 4.15.0-66-generic
machine : x86_64
processor : x86_64
CPU cores : 8
interpreter: 64bit
### Defining a Markov Network
A Markov network is defined by passing in a list of the joint probability tables associated with a series of cliques rather than an explicit graph structure. This is because the probability distributions for a particular variable are not defined by themselves, but rather through associations with other variables. While a Bayesian network has root variables that do not hav e parents, the undirected nature of the edges in a Markov networks means that variables are generally grouped together.
Let's define a simple Markov network where the cliques are A-B, B-C-D, and C-D-E. B-C-D-E is almost a clique but are missing the connection between B and E.
```python
d1 = JointProbabilityTable([
[0, 0, 0.1],
[0, 1, 0.2],
[1, 0, 0.4],
[1, 1, 0.3]], [0, 1])
d2 = JointProbabilityTable([
[0, 0, 0, 0.05],
[0, 0, 1, 0.15],
[0, 1, 0, 0.07],
[0, 1, 1, 0.03],
[1, 0, 0, 0.12],
[1, 0, 1, 0.18],
[1, 1, 0, 0.10],
[1, 1, 1, 0.30]], [1, 2, 3])
d3 = JointProbabilityTable([
[0, 0, 0, 0.08],
[0, 0, 1, 0.12],
[0, 1, 0, 0.11],
[0, 1, 1, 0.19],
[1, 0, 0, 0.04],
[1, 0, 1, 0.06],
[1, 1, 0, 0.23],
[1, 1, 1, 0.17]], [2, 3, 4])
model = MarkovNetwork([d1, d2, d3])
model.bake()
```
We can see that the initialization is fairly straightforward. An important note is that the JointProbabilityTable object requires as the second argument a list of variables that are included in that clique in the order that they appear in the table, from left to right.
### Calculating the probability of examples
Similar to the other probabilistic models in pomegranate, Markov networks can be used to calculate the probability or log probability of examples. However, unlike the other models, calculating the log probability for Markov networks is generally computationally intractable for data with even a modest number of variables (~30).
The process for calculating the log probability begins by calculating the "unnormalized" log probability $\hat{P}$, which is just the product of the probabilities for the variables $c$ in each clique $c \in C$ under their joint probability table $JPT(c)$. This step is easy because it just involves, for each clique, taking the columns corresponding to that clique and performing a table lookup.
\begin{equation}
\hat{P}(X=x) = \prod\limits_{c \in C} JPT(c)
\end{equation}
The reason this is called the unnormalized probability is because the sum of all combinations of variables that $X$ can take $\sum\limits_{x \in X} \hat{P}(X=x)$ does not sum to 1; thus, it is not a true probability distribution.
We can calculate the normalized probability $P(X)$ by dividing by the sum of probabilities under all combinations of variables, frequently referred to as the "partition function". Calculating the partition function $Z$ is as simple as summing the unnormalized probabilities over all possible combinations of variables $x \in X$.
\begin{equation}
Z = \sum\limits_{x \in X} \hat{P}(X=x)
\end{equation}
Finally, we can divide any unnormalized probability calculation by the partition function to get the correct probability.
\begin{equation}
P(X = x) = \frac{1}{Z} \hat{P}(X = x)
\end{equation}
The `probability` method returns the normalized probability value. We can check this by seeing that it is different than simply passing the columns of data in to the distributions for their respective cliques.
```python
model.probability([0, 1, 0, 0, 1])
```
0.020425530968183916
And the probability if we simply passed the columns into the corresponding cliques:
```python
d1.probability([0, 1]) * d2.probability([1, 0, 0]) * d3.probability([0, 0, 1])
```
0.0028800000000000006
However, by passing the `unnormalized=True` parameter in to the `probability` method we can return the unnormalized probability values.
```python
model.probability([0, 1, 0, 0, 1], unnormalized=True)
```
0.0028799999999999997
We can see that the two are identical, subject to machine precision.
### Calculating the partition function
Calculating the partition function involves summing the unnormalized probabilities of all combinations of variables that an example can take. Unfortunately, the time it takes to calculate the partition function grows exponentially with the number of dimensions. This means that it may not be able to calculate the partition function exactly for more than ~25 variables, depending on your machine. While pomegranate does not currently support any methods for calculating the partition function other than the exact method, it is flexible enough to allow users to get around this limitation.
The partition function itself is calculated in the `bake` method because, at that point, all combinations of variables are known to the model. This value is then cached so that calls to `probability` or `log_probability` are just as fast regardless of if the normalized or unnormalized probabilities are calculated. However, if the user passes in `calculate_partition=False` the model will not spend time calculating the partition function. We can see the difference in time here:
```python
X = numpy.random.randint(2, size=(100, 14))
model2 = MarkovNetwork.from_samples(X)
%timeit model2.bake()
%timeit model2.bake(calculate_partition=False)
```
2.67 s ± 50.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.06 ms ± 31.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
There are two main reasons that one might not want to calculate the partition function when creating the model. The first is when the user will only be inspecting the model, such as after structure learning, or only doing inference, which uses the approximate loopy belief propogation. The second is if the user wants to estimate the partition function themselves using an approximate algorithm.
Let's look at how one would manually calculate the exact partition function to see how an approximate algorithm could be substituted in. First, what happens if we don't calculate the partition but try to calculate probabilities?
```python
model.bake(calculate_partition=False)
model.probability([0, 1, 0, 0, 1])
```
Looks like we get an error. We can still calculate unnormalized probabilities though.
```python
model.probability([0, 1, 0, 0, 1], unnormalized=True)
```
0.0028799999999999997
Now we can calculate the partition function by calculating the unnormalized probability of all combinations.
```python
Z = model.probability(list(itertools.product(*model.keys_)), unnormalized=True).sum()
Z
```
0.141
We can set the stored partition function in the model to be this value (or specifically the log partition function) and then calculate normalized probabilities as before.
```python
model.partition = numpy.log(Z)
model.probability([0, 1, 0, 0, 1])
```
0.020425530968183916
Now you can calculate $Z$ however you'd like and simply plug it in.
### Inference
Similar to Bayesian networks, Markov networks can be used to infer missing values in data sets. In pomegranate, inference is done for Markov networks by converting them to their factor graph representations and then using loopy belief propagation. This results in fast, but approximate, inference.
```python
model.predict([[None, 1, 0, None, None], [None, None, 1, None, 0]])
```
[array([1, 1, 0, 1, 1], dtype=object), array([1, 1, 1, 1, 0], dtype=object)]
If we go back and inspect the joint probability tables we can easily see that the inferred values are the correct ones.
If we want the full probability distribution rather than just the most likely value we can use the `predict_proba` method.
```python
model.predict_proba([[None, None, 1, None, 0]])
```
[array([{
"class" :"Distribution",
"dtype" :"int",
"name" :"DiscreteDistribution",
"parameters" :[
{
"0" :0.37654171704957684,
"1" :0.623458282950423
}
],
"frozen" :false
},
{
"class" :"Distribution",
"dtype" :"int",
"name" :"DiscreteDistribution",
"parameters" :[
{
"0" :0.11729141475211632,
"1" :0.8827085852478838
}
],
"frozen" :false
},
1,
{
"class" :"Distribution",
"dtype" :"int",
"name" :"DiscreteDistribution",
"parameters" :[
{
"0" :0.08222490931076197,
"1" :0.917775090689238
}
],
"frozen" :false
},
0], dtype=object)]
### Structure Learning
Markov networks can be learned from data just as Bayesian networks can. While there are many algorithms that have been proposed for Bayesian network structure learning, there are fewer for Markov networks. Currently, pomegranate only supports the Chow-Liu tree-building algorithm for learning a tree-like network. Let's see a simple example where we learn a Markov network over a chunk of features from the digits data set.
```python
from sklearn.datasets import load_digits
X = load_digits().data[:, 22:40]
X = (X > numpy.median(X)).astype(int)
model = MarkovNetwork.from_samples(X)
icd = IndependentComponentsDistribution.from_samples(X, distributions=DiscreteDistribution)
model.log_probability(X).sum(), icd.log_probability(X).sum()
```
(-12699.71644426486, -13141.625038482453)
It looks like the Markov network is somewhat better than simply modeling each pixel individually for this set of features.
| b0b52b79e94cdcd687b0c6100398f66e501fcdca | 19,162 | ipynb | Jupyter Notebook | tutorials/B_Model_Tutorial_7_Markov_Networks.ipynb | manishgit138/pomegranate | 3457dcefdd623483b8efec7e9d87fd1bf4c115b0 | [
"MIT"
] | 3,019 | 2015-01-04T23:19:03.000Z | 2022-03-31T12:55:46.000Z | tutorials/B_Model_Tutorial_7_Markov_Networks.ipynb | manishgit138/pomegranate | 3457dcefdd623483b8efec7e9d87fd1bf4c115b0 | [
"MIT"
] | 818 | 2015-01-05T10:15:57.000Z | 2022-03-07T19:30:28.000Z | tutorials/B_Model_Tutorial_7_Markov_Networks.ipynb | manishgit138/pomegranate | 3457dcefdd623483b8efec7e9d87fd1bf4c115b0 | [
"MIT"
] | 639 | 2015-01-05T04:16:42.000Z | 2022-03-29T11:08:00.000Z | 35.031079 | 843 | 0.593362 | true | 2,849 | Qwen/Qwen-72B | 1. YES
2. YES | 0.740174 | 0.805632 | 0.596308 | __label__eng_Latn | 0.995947 | 0.223754 |
```python
import holoviews as hv
hv.extension('bokeh')
hv.opts.defaults(hv.opts.Curve(width=500),
hv.opts.Scatter(width=500, size=4),
hv.opts.Histogram(width=500),
hv.opts.Slope(color='k', alpha=0.5, line_dash='dashed'),
hv.opts.HLine(color='k', alpha=0.5, line_dash='dashed'))
```
```python
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
```
# Multivariate linear regression
In the previous lesson we introduce the topic of linear regression and studied the most simple linear model: the line.
In this lesson we will generalize this model to the multivariate case, i.e. when we want to predict an unidimensional (and continuous) variable $Y$ from a multidimensional (and continuous) variable $X$. You can interpret $X$ as a table where each column represents a particular attribute.
:::{admonition} Example
:class: tip
We want to predict a car's $Y=[\text{fuel consumption}]$ using its $X=[\text{weight}; \text{number of cylinders}; \text{average speed}; \ldots]$
:::
In what follows we will learn the mathematical formalism of the Ordinary Least Squares (OLS) method and how to implement it to fit regression models using Python
## Ordinary Least Squares (OLS)
### Mathematical derivation
Consider a dataset $\{x_i, y_i\}_{i=1,\ldots,N}$ of *i.i.d.* observations with $y_i \in \mathbb{R}$ and $x_i \in \mathbb{R}^D$, with $D>1$. We want to find $\theta$ such that
$$
y_i \approx \theta_0 + \sum_{j=1}^D \theta_j x_{ij}, \quad \forall i
$$
As before we start by writing the sum of squared errors (residuals)
$$
\min_\theta L = \sum_{i=1}^N (y_i - \theta_0 - \sum_{j=1}^D \theta_j x_{ij})^2
$$
but in this case we will express it in matrix form
$$
\min_\theta L = \| Y - X \theta \|^2 = (Y - X \theta)^T (Y - X \theta)
$$
where
$$
X = \begin{pmatrix} 1 & x_{11} & x_{12} & \ldots & x_{1D} \\
1 & x_{21} & x_{22} & \ldots & x_{2D} \\
1 & \vdots & \vdots & \ddots & \vdots \\
1 & x_{N1} & x_{N2} & \ldots & x_{ND} \end{pmatrix}, Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}, \theta = \begin{pmatrix} \theta_0 \\ \theta_1 \\ \vdots \\ \theta_D \end{pmatrix}
$$
From here we can do
$$
\frac{dL}{d\theta} = -(Y - X \theta)^T X = -X^T (Y - X \theta) = 0
$$
to obtain the **normal equations**
$$
X^T X \theta = X^T Y
$$
whose solution is
$$
\hat \theta = (X^T X)^{-1} X^T Y
$$
which is known as the **least squares (LS) estimator** of $\theta$
:::{dropdown} Relation with the Moore-Penrose inverse
Matrix $X^{\dagger} = (X^T X)^{-1} X^T $ is known as the left [*Moore-Penrose*](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) pseudo-inverse. There is also the right pseudo inverse $X^T (X X^T)^{-1}$. Together they act as a generalization of the inverse for non-squared matrices. Further note that if $X$ is squared and invertible then $X^{\dagger} = (X^T X)^{-1} X^T = X^{-1} (X^T)^{-1} X^T = X^{-1}$
:::
:::{warning}
The OLS solution is only valid if $A=X^T X$ is invertible (non-singular). By construction $A \in \mathbb{R}^{D\times D}$ is a squared symmetric matrix. For $A$ to be invertible we require that its determinant is not zero or equivalently
- The rank of $A$, i.e. the number of linearly independent rows or columns, is equal to $D$
- The eigenvalues/singular values of $A$ are positive
:::
:::{note}
The solution we found for the univariate case in the previous lesson is a particular case of the OLS solution
:::
:::{dropdown} Proof
The solution for the univariate case was
$$
\begin{pmatrix} N & \sum_i x_i \\ \sum_i x_i & \sum_i x_i^2\\\end{pmatrix} \begin{pmatrix} \theta_0 \\ \theta_1 \end{pmatrix} = \begin{pmatrix} \sum_i y_i \\ \sum_i x_i y_i \end{pmatrix}
$$
which can be rewritten as
$$
\begin{align}
\begin{pmatrix} 1 & 1 & \ldots & 1 \\ x_1 & x_2 & \ldots & x_N \end{pmatrix}
\begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{pmatrix}
\begin{pmatrix} \theta_0 \\ \theta_1 \end{pmatrix} &=
\begin{pmatrix} 1 & 1 & \ldots & 1 \\ x_1 & x_2 & \ldots & x_N \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} \nonumber \\
X^T X \theta &= X^T Y \nonumber
\end{align}
$$
:::
### Fitting an hyperplane using `numpy`
The [`linalg`](https://numpy.org/doc/stable/reference/routines.linalg.html) submodule of the `numpy` library provides
```python
np.linalg.lstsq(X, # a (N, D) shaped ndarray
Y, # a (N, ) shaped ndarray
rcond='warn' # See note below
)
```
which returns
- The OLS solution: $\hat \theta = (X^T X)^{-1} X^T Y$
- The sum of squared residuals
- The rank of matrix $X$
- The singular values of matrix $X$
:::{note}
For a near-singular $A=X^T X$ we might not be able to obtain the solution using numerical methods. Conditioning can help stabilize the solution. Singular values smaller than $\epsilon$ can be cut-off by setting `rcond=epsilon` when calling `lstsq`
:::
Let's test `lstsq` on the following database of ice-cream consumption from
```python
df = pd.read_csv('data/ice_cream.csv', header=0, index_col=0)
df.columns = ['Consumption', 'Income', 'Price', 'Temperature']
display(df.head())
```
The `corr` attribute of the `pandas` dataframe returns the pairwise correlations between the variables
```python
display(df.corr())
```
Observations:
- Temperature has a high positive correlation with consumption
- Price has a low negative correlation with consumption
- Income has an almost null correlation with consumption
Let's train a multivariate linear regressor for ice-cream consumption as a function of the other variables
```python
Y = df["Consumption"].values
X = df[["Income", "Price", "Temperature"]].values
```
- We will standardize the independent variables so that their scale is the same
- We will incorporate a column with ones to model the intercept ($\theta_0$) of the hyperplane
```python
X = (X - np.mean(X, axis=0, keepdims=True))/np.std(X, axis=0, keepdims=True)
X = np.concatenate((np.ones(shape=(X.shape[0], 1)), X), axis=1)
theta, mse, rank, singvals = np.linalg.lstsq(X, Y, rcond=None)
hatY = np.dot(X, theta) # Predicted Y
```
To assess the quality of the fitted model we can visualize the predicted consumption versus actual (real) consumption or the residuals as a function of the latter and/or the independent variables
```python
p1 = hv.Scatter((Y, hatY), 'Real', 'Predicted').opts(width=330) * hv.Slope(slope=1, y_intercept=0)
p2 = hv.Scatter((Y, Y - hatY), 'Real', 'Residuals').opts(width=330) * hv.HLine(0)
hv.Layout([p1, p2]).cols(2)
```
```python
p = []
for var_name in ["Income", "Price", "Temperature"]:
p.append(hv.Scatter((df[var_name].values, Y - hatY), var_name, 'Residuals').opts(width=330) * hv.HLine(0))
hv.Layout(p).cols(3).opts(hv.opts.Scatter(width=280, height=250))
```
The predicted consumption follows the real consumption closely. There is also no apparent correlation in the residuals.
But some important questions remain
:::{important}
- How significant is the contribution of each of the independent variables to the prediction?
- How to measure in a quantitative way the quality of the fitted model?
:::
For this we need to view OLS from an statistical perspective
## Statistical perspective of OLS
Up to now we have viewed regression from a deterministic (optimization) perspective. To understand its properties and perform inference we seek an statistical interpretation.
Let's say that we have $\{x_i, y_i\}_{i=1,\ldots,N}$ *i.i.d.* observations from an unidimensional target variable $Y$ and a **D-dimensional** independent variable $X$. We will assume that our measurements of $Y$ consists of the **true model** plus **white Gaussian noise**, *i.e.*
$$
\begin{align}
y_i &= f_\theta(x_i) + \varepsilon_i \nonumber \\
&= \theta_0 + \sum_{j=1}^D \theta_j x_{ij} + \varepsilon_i
\end{align}
$$
where $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. Then the log likelihood of $\theta$ is
$$
\begin{align}
\log L(\theta) &= \log \prod_{i=1}^N \mathcal{N}(y_i | f_\theta(x_i), \sigma^2) \nonumber \\
&= \sum_{i=1}^N \log \mathcal{N}(y_i | f_\theta(x_i), \sigma^2) \nonumber \\
&= -\frac{N}{2} \log(2\pi \sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^N (y_i - f_\theta(x_i))^2\nonumber \\
&= -\frac{N}{2} \log(2\pi \sigma^2) - \frac{1}{2\sigma^2} (Y-X\theta)^T (Y - X\theta), \nonumber
\end{align}
$$
and the maximum likelihood solution for $\theta$ can by obtained from
$$
\max_\theta \log L(\theta) = - \frac{1}{2\sigma^2} (Y-X\theta)^T (Y - X\theta).
$$
Note that this is equivalent to
$$
\min_\theta \log L(\theta) = \frac{1}{2\sigma^2} (Y-X\theta)^T (Y - X\theta),
$$
which yields
$$
\hat \theta = (X^T X)^{-1} X^T Y
$$
:::{important}
The least squares solution is equivalent to the maximum likelihood solution under iid samples and gaussian noise
:::
### Statistical properties of the OLS solution
Let $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_N)$, where $\varepsilon \sim \mathcal{N}(0, I \sigma^2) \quad \forall i$
Is the OLS estimator unbiased?
$$
\begin{align}
\mathbb{E}[\hat \theta] &= \mathbb{E}[(X^T X)^{-1} X^T Y] \nonumber \\
&= \mathbb{E}[(X^T X)^{-1} X^T (X \theta + \varepsilon)] \nonumber \\
&= \mathbb{E}[\theta] + (X^T X)^{-1} X^T \mathbb{E}[\varepsilon] \\
& = \mathbb{E}[\theta]
\end{align}
$$
> YES!
What is the variance of the estimator?
$$
\begin{align}
\mathbb{E}[(\hat \theta - \mathbb{E}[\hat\theta])(\hat \theta - \mathbb{E}[\hat\theta])^T] &= \mathbb{E}[((X^T X)^{-1} X^T \varepsilon) ((X^T X)^{-1} X^T \varepsilon)^T] \nonumber \\
&= (X^T X)^{-1} X^T \mathbb{E}[\varepsilon \varepsilon^T] X ((X^T X)^{-1})^T \nonumber \\
&= (X^T X)^{-1} X^T \mathbb{E}[(\varepsilon-0) (\varepsilon-0)^T] X (X^T X)^{-1} \nonumber \\
& =\sigma^2 (X^T X)^{-1}
\end{align}
$$
and typically we estimate the variance of the noise using the unbiased estimator
$$
\begin{align}
\hat \sigma^2 &= \frac{1}{N-D-1} \sum_{i=1}^N (y_i - \theta_0 - \sum_{j=1}^D \theta_j x_{ij})^2 \nonumber \\
& = \frac{1}{N-D-1} (Y-X\theta)^T (Y-X\theta)
\end{align}
$$
**The Gauss-Markov Theorem:** The least squares estimate of $\theta$ have the smallest variance among all unbiased estimators (Hastie, 3.2.2)
### Inference and hypothesis tests for OLS
We found the expected value and the variance of $\theta$. From the properties of MLE we know that
$$
\hat \theta \sim \mathcal{N}(\theta, \sigma^2 (X^T X)^{-1})
$$
and the estimator of the variance will be proportional to
$$
\hat \sigma^2 \sim \frac{1}{(N-M)}\sigma^2 \chi_{N-M}^2
$$
With this we have all the ingredients to find confidence intervals and do hypothesis test on $\hat \theta$
To assess the significance of our model we might try to reject the following *hypotheses*
- One of the parameters (slopes) is zero (t-test)
$\mathcal{H}_0: \theta_i = 0$
$\mathcal{H}_A: \theta_i \neq 0$
- All parameters are zero (f-test)
$\mathcal{H}_0: \theta_1 = \theta_2 = \ldots = \theta_D = 0$
$\mathcal{H}_A:$ At least one parameter is not zero
- A subset of the parameters are zero (ANOVA)
$\mathcal{H}_0: \theta_i = \theta_j =0 $
$\mathcal{H}_A:$ $\theta_i \neq 0 $ or $\theta_j \neq 0 $
We can use the [`OLS`](https://www.statsmodels.org/stable/regression.html) function of the `statsmodels` Python library to perform all these tests
First we create the model by giving the target and independent variables. In `statsmodels` jargon these are called endogenous and exogenous, respectively. Then we call the `fit` attribute
The coefficients obtained are equivalent to those we found with `numpy`
```python
mod = sm.OLS(Y, X, hasconst=True)
res = mod.fit()
display(theta,
res.params)
```
The `summary` attribute gives as
- the `R-squared` statistic of the model
- the `F-statistic` and its p-value
- A table with the values of `theta` their standard errors, `t-statistics`, p-values and confidence interval
```python
display(res.summary(yname="Consumption",
xname=["Intercept", "Income", "Price", "Temperature"],
alpha=0.05))
```
Observations from the results table:
- The f-test tells that we can reject the hypothesis that all coefficients are null
- The t-test tells us that we cannot reject the null hypothesis that the price coefficient is null
The $r^2$ statistic for the multivariate case is defined as
$$
\begin{align}
r^2 &= 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y_i)^2} \nonumber \\
&= 1 - \frac{Y^T(I-X(X^TX)^{-1}X^T)Y}{Y^T (I - \frac{1}{N} \mathbb{1}^T \mathbb{1} ) Y} \nonumber \\
&= 1 - \frac{SS_{res}}{SS_{total}} \nonumber
\end{align}
$$
where $\mathbb{1} = (1, 1, \ldots, 1)$. And it has the same interpretation that was given in the previous lecture
:::{important}
We can trust the test only if our assumptions are true. The assumptions in this case are
- Relation between X and Y is linear
- Errors/noise follows a multivariate normal with covariance $I\sigma^2$
:::
Verify this assumptions by
1. Checking the residuals for normality. Are there outliers that we should remove?
1. Checking for absence of correlation in the residuals
1. Do the errors have different variance?
If the variance of the error is not constant (heteroscedastic) we can use the **Weighted Least Squares** estimator
## Extra: Weighted Least Squares (WLS)
Before we assumed that the noise was homoscedastic (constant variance). We will generalize to the heteroscedastic case.
We can write the multivariate linear regression model with observations subject to Gaussian noise with changing variance as
$$
y_i = \theta_0 + \sum_{j=1}^D \theta_j x_{ij} + \varepsilon_i, \forall i \quad \text{and} \quad \varepsilon_i \sim \mathcal{N}(0, \sigma_i^2)
$$
With respect to OLS the only difference is that $\sigma_i \neq \sigma$
In this case the maximum likelihood solution is
$$
\hat \theta = (X^T \Sigma^{-1}X)^{-1} X^T \Sigma^{-1} Y
$$
where
$$
\Sigma = \begin{pmatrix}
\sigma_1^2 & 0 &\ldots & 0 \\
0 & \sigma_2^2 &\ldots & 0 \\
\vdots & \vdots &\ddots & \vdots \\
0 & 0 &\ldots & \sigma_N^2 \\
\end{pmatrix}
$$
An the distribution of $\theta$ is
$$
\hat \theta \sim \mathcal{N}( \theta, (X^T X)^{-1} X^T \Sigma X (X^T X)^{-1} )
$$
| 58b56443705523301d7026a5694c562b7ce6c275 | 22,109 | ipynb | Jupyter Notebook | lectures/5_linear_regression/part2.ipynb | magister-informatica-uach/INFO337 | 45d7faabbd4ed5b25a575ee065551b87b097f92e | [
"Unlicense"
] | 4 | 2021-06-12T04:07:26.000Z | 2022-03-27T23:22:59.000Z | lectures/5_linear_regression/part2.ipynb | magister-informatica-uach/INFO337 | 45d7faabbd4ed5b25a575ee065551b87b097f92e | [
"Unlicense"
] | null | null | null | lectures/5_linear_regression/part2.ipynb | magister-informatica-uach/INFO337 | 45d7faabbd4ed5b25a575ee065551b87b097f92e | [
"Unlicense"
] | 1 | 2019-11-07T14:49:09.000Z | 2019-11-07T14:49:09.000Z | 32.802671 | 429 | 0.531639 | true | 4,600 | Qwen/Qwen-72B | 1. YES
2. YES | 0.712232 | 0.73412 | 0.522864 | __label__eng_Latn | 0.935712 | 0.053117 |
# LASSO and Ridge Regression
This function shows how to use TensorFlow to solve lasso or ridge regression for $\boldsymbol{y} = \boldsymbol{Ax} + \boldsymbol{b}$
We will use the iris data, specifically: $\boldsymbol{y}$ = Sepal Length, $\boldsymbol{x}$ = Petal Width
```python
# import required libraries
import matplotlib.pyplot as plt
import sys
import numpy as np
import tensorflow as tf
from sklearn import datasets
from tensorflow.python.framework import ops
```
```python
# Specify 'Ridge' or 'LASSO'
regression_type = 'LASSO'
# regression_type = 'Ridge'
```
```python
# clear out old graph
ops.reset_default_graph()
# Create graph
sess = tf.Session()
```
## Load iris data
```python
# iris.data = [(Sepal Length, Sepal Width, Petal Length, Petal Width)]
iris = datasets.load_iris()
x_vals = np.array([x[3] for x in iris.data])
y_vals = np.array([y[0] for y in iris.data])
```
## Model Parameters
```python
# Declare batch size
batch_size = 50
# Initialize placeholders
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# make results reproducible
seed = 13
np.random.seed(seed)
tf.set_random_seed(seed)
# Create variables for linear regression
A = tf.Variable(tf.random_normal(shape=[1,1]))
b = tf.Variable(tf.random_normal(shape=[1,1]))
# Declare model operations
model_output = tf.add(tf.matmul(x_data, A), b)
```
## Loss Functions
**ヘビサイド関数を以下の様にして近似している**
\begin{equation}
h = \frac{1}{1 + \exp\Big(-50\times(A-\text{constant})\Big)}
\end{equation}
```python
# Select appropriate loss function based on regression type
if regression_type == 'LASSO':
# Declare Lasso loss function
# Lasso Loss = L2_Loss + heavyside_step,
# Where heavyside_step ~ 0 if A < constant, otherwise ~ 99
lasso_param = tf.constant(0.9)
heavyside_step = tf.truediv(1.,
tf.add(1.,
tf.exp(
tf.multiply(-50.,
tf.subtract(
A, lasso_param)))))
regularization_param = tf.multiply(heavyside_step, 99.)
loss = tf.add(
tf.reduce_mean(tf.square(y_target - model_output)),
regularization_param)
print("Lasso loss:", loss.shape)
elif regression_type == 'Ridge':
# Declare the Ridge loss function
# Ridge loss = L2_loss + L2 norm of slope
ridge_param = tf.constant(1.)
ridge_loss = tf.reduce_mean(tf.square(A))
loss = tf.expand_dims(
tf.add(
tf.reduce_mean(tf.square(y_target - model_output)),
tf.multiply(ridge_param, ridge_loss)),
0) # expandするのはLassoとlossの形を同様にするため
else:
print('Invalid regression_type parameter value', file=sys.stderr)
```
Lasso loss: (1, 1)
## Optimizer
```python
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.001)
train_step = my_opt.minimize(loss)
```
## Run regression
```python
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec = []
for i in range(1500):
rand_index = np.random.choice(len(x_vals), size=batch_size)
rand_x = np.transpose([x_vals[rand_index]])
rand_y = np.transpose([y_vals[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
loss_vec.append(temp_loss[0])
if (i+1)%300==0:
print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)) + ' b = ' + str(sess.run(b)))
print('Loss = ' + str(temp_loss))
print('\n')
```
Step #300 A = [[0.77170753]] b = [[1.8249986]]
Loss = [[10.26473]]
Step #600 A = [[0.7590854]] b = [[3.2220633]]
Loss = [[3.0629203]]
Step #900 A = [[0.74843585]] b = [[3.9975822]]
Loss = [[1.2322046]]
Step #1200 A = [[0.73752165]] b = [[4.429741]]
Loss = [[0.57872057]]
Step #1500 A = [[0.7294267]] b = [[4.672531]]
Loss = [[0.40874982]]
## Extract regression results
```python
# Get the optimal coefficients
[slope] = sess.run(A)
[y_intercept] = sess.run(b)
# Get best fit line
best_fit = []
for i in x_vals:
best_fit.append(slope*i+y_intercept)
```
## Plot results
```python
%matplotlib inline
plt.style.use("ggplot")
# Plot the result
plt.plot(x_vals, y_vals, 'o', label='Data Points')
plt.plot(x_vals, best_fit, 'b-', label='Best fit line', linewidth=3)
plt.legend(loc='upper left')
plt.title('Sepal Length vs Pedal Width')
plt.xlabel('Pedal Width')
plt.ylabel('Sepal Length')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title(regression_type + ' Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
```
```python
```
| 8c6a18d5d7ff10a771fb835ee989ea944035f14a | 53,264 | ipynb | Jupyter Notebook | 03_Linear_Regression/06_Implementing_Lasso_and_Ridge_Regression/06_lasso_and_ridge_regression.ipynb | haru-256/tensorflow_cookbook | 18923111eaccb57b47d07160ae5c202c945da750 | [
"MIT"
] | 1 | 2021-02-27T16:16:02.000Z | 2021-02-27T16:16:02.000Z | 03_Linear_Regression/06_Implementing_Lasso_and_Ridge_Regression/06_lasso_and_ridge_regression.ipynb | haru-256/tensorflow_cookbook | 18923111eaccb57b47d07160ae5c202c945da750 | [
"MIT"
] | 2 | 2018-03-07T14:31:22.000Z | 2018-03-07T15:04:17.000Z | 03_Linear_Regression/06_Implementing_Lasso_and_Ridge_Regression/06_lasso_and_ridge_regression.ipynb | haru-256/tensorflow_cookbook | 18923111eaccb57b47d07160ae5c202c945da750 | [
"MIT"
] | null | null | null | 138.348052 | 24,392 | 0.885307 | true | 1,342 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.752013 | 0.675078 | __label__eng_Latn | 0.310633 | 0.406764 |
<div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">The Chebyshev Pseudospectral Method - Elastic Waves in 1D</div>
</div>
</div>
</div>
<p style="width:20%;float:right;padding-left:50px">
<span style="font-size:smaller">
</span>
</p>
---
This notebook is part of the supplementary material
to [Computational Seismology: A Practical Introduction](https://global.oup.com/academic/product/computational-seismology-9780198717416?cc=de&lang=en&#),
Oxford University Press, 2016.
##### Authors:
* David Vargas ([@dvargas](https://github.com/davofis))
* Heiner Igel ([@heinerigel](https://github.com/heinerigel))
---
## Basic Equations
This notebook presents the numerical solution for the 1D elastic wave equation using the Chebyshev Pseudospectral Method. We depart from the equation
\begin{equation}
\rho(x) \partial_t^2 u(x,t) = \partial_x (\mu(x) \partial_x u(x,t)) + f(x,t),
\end{equation}
and use a standard 3-point finite-difference operator to approximate the time derivatives. Then, the displacement field is extrapolated as
\begin{equation}
\rho_i\frac{u_{i}^{j+1} - 2u_{i}^{j} + u_{i}^{j-1}}{dt^2}= \partial_x (\mu(x) \partial_x u(x,t))_{i}^{j} + f_{i}^{j}
\end{equation}
An alternative way of performing space derivatives of a function defined on the Chebyshev collocation points is to define a derivative matrix $D_{ij}$
\begin{equation}
D_{ij} =
\begin{cases}
-\frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = N}\\
-\frac{1}{2} \frac{x_i}{1-x_i^2} \hspace{1.5cm} \text{for i = j = 1,2,...,N-1}\\
\frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j} \hspace{1.5cm} \text{for i $\neq$ j = 0,1,...,N}
\end{cases}
\end{equation}
where $N+1$ is the number of Chebyshev collocation points $ \ x_i = cos(i\pi / N)$, $ \ i=0,...,N$ and the $c_i$ are given as
$$ c_i = 2 \hspace{1.5cm} \text{for i = 0 or N} $$
$$ c_i = 1 \hspace{1.5cm} \text{otherwise} $$
This differentiation matrix allows us to write the derivative of the function $f_i = f(x_i)$ (possibly depending on time) simply as
$$\partial_x u_i = D_{ij} \ u_j$$
where the right-hand side is a matrix-vector product, and the Einstein summation convention applies.
```
# This is a configuration step for the exercise. Please run it before calculating the derivative!
import numpy as np
import matplotlib
# Show Plot in The Notebook
matplotlib.use("nbagg")
import matplotlib.pyplot as plt
from ricker import ricker
```
### 1. Chebyshev derivative method
#### Exercise
Define a python function call "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$, call this function and display the Chebyshev derivative matrix.
```
#################################################################
# IMPLEMENT THE CHEBYSHEV DERIVATIVE MATRIX METHOD HERE!
#################################################################
# Call the chebyshev differentiation matrix
# ---------------------------------------------------------------
#D_ij =
# ---------------------------------------------------------------
# Display Differentiation Matrix
# ---------------------------------------------------------------
```
```
```
### 2. Initialization of setup
```
# Basic parameters
# ---------------------------------------------------------------
#nt = 5000 # number of time steps
tmax = 0.0006
eps = 1.4 # stability limit
isx = 100
lw = 0.7
ft = 10
f0 = 100000 # dominant frequency
iplot = 20 # Snapshot frequency
# material parameters
rho = 2500.
c = 3000.
mu = rho*c**2
# space domain
nx = 100 # number of grid points in x 199
xs = np.floor(nx/2) # source location
xr = np.floor(nx*0.8)
x = np.zeros(nx+1)
# initialization of pressure fields
p = np.zeros(nx+1)
pnew = np.zeros(nx+1)
pold = np.zeros(nx+1)
d2p = np.zeros(nx+1)
for ix in range(0,nx+1):
x[ix] = np.cos(ix * np.pi / nx)
dxmin = min(abs(np.diff(x)))
dxmax = max(abs(np.diff(x)))
dt = eps*dxmin/c # calculate time step from stability criterion
nt = int(round(tmax/dt))
```
### 3. Source Initialization
```
# source time function
# ---------------------------------------------------------------
t = np.arange(1, nt+1)*dt # initialize time axis
T0 = 1./f0
tmp = ricker(dt, T0)
isrc = tmp
tmp = np.diff(tmp)
src = np.zeros(nt)
src[0:np.size(tmp)] = tmp
#spatial source function
# ---------------------------------------------------------------
sigma = 1.5*dxmax
x0 = x[int(xs)]
sg = np.exp(-1/sigma**2*(x-x0)**2)
sg = sg/max(sg)
```
### 4. Time Extrapolation
Now we time extrapolate using the previously defined get_cheby_matrix(nx) method to call the differentiation matrix. The discrete values of the numerical simulation are indicated by dots in the animation, they represent the Chebyshev collocation points. Observe how the wavefield near the domain center is less dense than towards the boundaries.
```
# Initialize animated plot
# ---------------------------------------------------------------
plt.figure(figsize=(10,6))
line = plt.plot(x, p, 'k.', lw=2)
plt.title('Chebyshev Method - 1D Elastic wave', size=16)
plt.xlabel(' x(m)', size=14)
plt.ylabel(' Amplitude ', size=14)
plt.ion() # set interective mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
# Differentiation matrix
D = get_cheby_matrix(nx)
for it in range(nt):
# Space derivatives
dp = np.dot(D, p.T)
dp = mu/rho * dp
dp = D @ dp
# Time extrapolation
pnew = 2*p - pold + np.transpose(dp) * dt**2
# Source injection
pnew = pnew + sg*src[it]*dt**2/rho
# Remapping
pold, p = p, pnew
p[0] = 0; p[nx] = 0 # set boundaries pressure free
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in line:
l.remove()
del l
# --------------------------------------
# Display lines
line = plt.plot(x, p, 'k.', lw=1.5)
plt.gcf().canvas.draw()
```
| e665286258e008e45064bf1ca26110b0a82086b1 | 8,825 | ipynb | Jupyter Notebook | notebooks/Computational Seismology/The Pseudospectral Method/ps_cheby_elastic_1d.ipynb | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | 3 | 2020-07-11T10:01:39.000Z | 2020-12-16T14:26:03.000Z | notebooks/Computational Seismology/The Pseudospectral Method/ps_cheby_elastic_1d.ipynb | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | null | null | null | notebooks/Computational Seismology/The Pseudospectral Method/ps_cheby_elastic_1d.ipynb | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | 3 | 2020-11-11T05:05:41.000Z | 2022-03-12T09:36:24.000Z | 8,825 | 8,825 | 0.541076 | true | 1,887 | Qwen/Qwen-72B | 1. YES
2. YES | 0.771844 | 0.651355 | 0.502744 | __label__eng_Latn | 0.693717 | 0.006372 |
## Linear Algebra
Linear algebra refers to the study of linear relationships. In this class, we will cover some basic concepts of linear algebra that are needed to understand some more advanced and *practical* concepts and definitions. If you are interested in the concepts related to linear algebra and application, there is an excellent online series that covers these topics in detail
https://github.com/fastai/numerical-linear-algebra
Linear algebra is a fundamental component of machine learning, so if you are interested in using machine learning in the future go and check that class.
### Vectors
A vector is a collection of numbers. Vectors can be **row vectors** or **column vectors** depending on their orientation. In general, you can assume that a vector is a **column vector** unless otherwise stated.
```python
import numpy as np
vector_row = np.array([[1, -5, 3, 2, 4]])
vector_column = np.array([[1],
[2],
[3],
[4]])
print(vector_row.shape)
print(vector_column.shape)
```
(1, 5)
(4, 1)
The transpose ($T$) of a vector is an operation that transform a column vector into a row vector and a row vector into a column vector. If $v$ is a vector, then $v^{T}$ is the transpose.
```python
vector_row, vector_row.T
```
(array([[ 1, -5, 3, 2, 4]]),
array([[ 1],
[-5],
[ 3],
[ 2],
[ 4]]))
The norm of a vector is a measure of its lenght. There are many ways to measure lenght and you can use different definitions depending on the application. The most common norm is the $L_2$ norm, if $v$ is a vector, then the $L_2$ norm ($\Vert v \Vert_{2}$) is
$$
\Vert v \Vert_{2} = \sqrt{\sum_i v_i^2}
$$
This is also known as the Euclidian norm.
Others well known norms are the $L_1$ norm (or Manhattan Distance), and the $L_\infty$ norm (or infinity norm) equal to the maximum absolut value of the vector
```python
from numpy.linalg import norm
new_vector = vector_row.T
norm_1 = norm(new_vector, 1)
norm_2 = norm(new_vector, 2)
norm_inf = norm(new_vector, np.inf)
print('L_1 is: %.1f'%norm_1)
print('L_2 is: %.1f'%norm_2)
print('L_inf is: %.1f'%norm_inf)
```
L_1 is: 15.0
L_2 is: 7.4
L_inf is: 5.0
The **dot product** of two vectors is the sum of the product of the respective elements in each vector and is denoted by $\cdot$. If $v$ and $w$ are vectors, then the dot product is defined as
$$
d = v \cdot w= \sum_{i = 1}^{n} v_iw_i
$$
alternatively, the dot product can be computed as
$$
v \cdot w = \Vert v \Vert_{2} \Vert w \Vert_{2} \cos{\theta}
$$
where $\theta$ is the angle between the vectors. In the same way, the angle between two vector can be computed as
$$
\theta = cos^{-1}\left[\frac{v \cdot w }{\Vert v \Vert_{2} \Vert w \Vert_{2}}\right]
$$
```python
#lets take two vectors that are on the same direction but have different lenghts
from numpy import arccos, dot
v = np.array([[1,2]])
w = np.array([[5,10]])
theta = arccos(v.dot(w.T)/(norm(v)*norm(w)))
theta*(180/pi) #arcos return gradients, we are convering to degrees
```
array([[8.53773646e-07]])
```python
#lets take two vectors that are on opposite directions
from numpy import arccos, dot, pi
v = np.array([[1,2]])
w = np.array([[-1,-2]])
theta = arccos(v.dot(w.T)/(norm(v)*norm(w)))
theta*(180/pi) #arcos return gradients, we are convering to degrees
```
array([[179.99999879]])
```python
#lets take two vectors that are on orthogonal to eachother
from numpy import arccos, dot, pi
v = np.array([[1,1]])
w = np.array([[-1,1]])
theta = arccos(v.dot(w.T)/(norm(v)*norm(w)))
theta*(180/pi) #arcos return gradients, we are convering to degrees
```
array([[90.]])
The **cross product** between two vectors, $v$ and $w$, is written $v\times w$. It is defined by
$$
v \times w = \Vert v \Vert_{2}\Vert w \Vert_{2}\sin{(\theta)}
$$
where $θ$ is the angle between the $v$ and $w$.
The geometric interpretation of the cross product is a vector perpendicular to both $v$ and $w$ with length (as measured by $L_2$) equal to the area enclosed by the parallelogram created by the two vectors.
```python
v = np.array([[0, 2, 0]])
w = np.array([[3, 0, 0]])
cross = np.cross(v, w)
print(cross)
```
[[ 0 0 -6]]
```python
arccos(v.dot(cross.T)/(norm(v)*norm(cross)))*(180/pi)
```
array([[90.]])
```python
arccos(w.dot(cross.T)/(norm(w)*norm(cross)))*(180/pi)
```
array([[90.]])
### Matrices
An $n \times m $ matrix is a rectangular table of numbers consisting of $m$ rows and $n$ columns.
The norm of a matrix can be consider as a kind of vector norm by alingming the $n * m$ elements of the matrix into a single vector
$$
\Vert M \Vert_{p} = \sqrt[p]{(\sum_i^m \sum_j^n |a_{ij}|^p)}
$$
where $p$ defines the norm order ($p=0, 1, 2,...$)
**Matrix multiplication** between two matrices, $P$ and $Q$, is defined when $P$ is an $m \timed p$ matrix and $Q$ is a $p \times n$ matrix. The result of $M=PQ$ is a matrix $M$ that is $m \times n$. The dimension with size $p$ is called the inner matrix dimension, and the inner matrix dimensions must match (i.e., the number of columns in $P$ and the number of rows in $Q$ must be the same) for matrix multiplication. The dimensions $m$ and $n$ are called the outer matrix dimensions. Formally, $M=PQ$ is defined as
$$
M_{ij} = \sum_{k=1}^p P_{ik}Q_{kj}
$$
```python
P = np.array([[1, 7], [2, 3], [5, 0]])
Q = np.array([[2, 6, 3, 1], [1, 2, 3, 4]])
print(P)
print(f'The dimensions of P are: {P.shape}')
print(Q, Q.shape)
print(f'The dimensions of Q are: {Q.shape}')
print(np.dot(P, Q))
print(f'The dimensions of PxQ are: {np.dot(P, Q).shape}')
```
[[1 7]
[2 3]
[5 0]]
The dimensions of P are: (3, 2)
[[2 6 3 1]
[1 2 3 4]] (2, 4)
The dimensions of Q are: (2, 4)
[[ 9 20 24 29]
[ 7 18 15 14]
[10 30 15 5]]
The dimensions of PxQ are: (3, 4)
```python
#what will happend here?
np.dot(P, Q)
```
The **determinant** is an important property of square matrices (same number of rows and columns). The determinant is denoted by $\det(M)$ or $|M|$.
In the case of $2 \times 2$ matrices, the determinant is
$$
\begin{split}
|M| = \begin{bmatrix}
a & b \\
c & d\\
\end{bmatrix} = ad - bc\end{split}
$$
In the case of $3 \times 3$ matrices, the determinant is
$$
\begin{split}
\begin{eqnarray*}
|M| = \begin{bmatrix}
a & b & c \\
d & e & f \\
g & h & i \\
\end{bmatrix} & = & a\begin{bmatrix}
\Box &\Box &\Box \\
\Box & e & f \\
\Box & h & i \\
\end{bmatrix} - b\begin{bmatrix}
\Box &\Box &\Box \\
d & \Box & f \\
g & \Box & i \\
\end{bmatrix}+c\begin{bmatrix}
\Box &\Box &\Box \\
d & e & \Box \\
g & h & \Box \\
\end{bmatrix} \\
&&\\
& = & a\begin{bmatrix}
e & f \\
h & i \\
\end{bmatrix} - b\begin{bmatrix}
d & f \\
g & i \\
\end{bmatrix}+c\begin{bmatrix}
d & e \\
g & h \\
\end{bmatrix} \\
&&\\
& = & aei + bfg + cdh - ceg - bdi - afh
\end{eqnarray*}\end{split}
$$
Computing the determinant or larger matrices is cumbersome. However, the process can be easily automated and always reduced to computing the determinant of $2 \time 2$ matrices. Numpy includes an efficient method to compute the determinant of a matrix
```python
from numpy.linalg import det
M = np.array([[0,2,1,3],
[3,2,8,1],
[1,0,0,3],
[0,3,2,1]])
print(f'M: {M}')
print(f'Determinant: {det(M):0.2f}') #note that the :0.2f limits the number of decimals printed!
```
M: [[0 2 1 3]
[3 2 8 1]
[1 0 0 3]
[0 3 2 1]]
Determinant: -38.00
The inverse of a square matrix $M$ is a matrix of the same size, $N$, such that $M \bullet N=I$, Where $I$ is a matrix with only ones in its diagonal (unity matrix). The inverse of a matrix $M$ is denoted as $M^{-1}$. For a $2 \times 2$ matrix, the inverse is defined as
$$
\begin{split}
M^{-1} = \begin{bmatrix}
a & b \\
c & d\\
\end{bmatrix}^{-1} = \frac{1}{|M|}\begin{bmatrix}
d & -b \\
-c & a\\
\end{bmatrix}\end{split}
$$
calculating the inverse of a matrix is a complex process; however, it is an important step in many calculations and several *easier* approaches have been developed.
if the determinant of a matrix is zero, then the matrix doesn't have an inverse.
```python
from numpy.linalg import inv
M = np.array([[0,2,1,3],
[3,2,8,1],
[1,0,0,3],
[0,3,2,1]])
print(f'M: {M}')
print(f'Inverse: {inv(M)}') #note that the :0.2f limits the number of decimals printed!
print(f'M x inv(M) = {np.dot(M,inv(M))}')
```
M: [[0 2 1 3]
[3 2 8 1]
[1 0 0 3]
[0 3 2 1]]
Inverse: [[-1.57894737 -0.07894737 1.23684211 1.10526316]
[-0.63157895 -0.13157895 0.39473684 0.84210526]
[ 0.68421053 0.18421053 -0.55263158 -0.57894737]
[ 0.52631579 0.02631579 -0.07894737 -0.36842105]]
M x inv(M) = [[ 1.00000000e+00 -3.46944695e-18 5.55111512e-17 1.11022302e-16]
[ 0.00000000e+00 1.00000000e+00 4.99600361e-16 -1.11022302e-16]
[ 2.22044605e-16 5.20417043e-17 1.00000000e+00 -3.33066907e-16]
[ 0.00000000e+00 1.73472348e-17 5.55111512e-17 1.00000000e+00]]
A matrix that is close to being singular (i.e., the determinant is close to 0) is called **ill-conditioned**. Although ill-conditioned matrices have inverses, they are problematic numerically in the same way that dividing a number by a very, very small number is problematic.
The **condition number** is a measure of how ill-conditioned a matrix is, and it can be computed using Numpy’s function cond from linalg. The higher the condition number, the closer the matrix is to being singular.
The **rank** of an $m \times n$ matrix $A$ is the number of linearly independent columns or rows of $A$ (that is, you cannot write a row or column as a linear combination of other rows or columns), and is denoted by **rank(A)**. It can be shown that the number of linearly independent rows is always equal to the number of linearly independent columns for any matrix. A matrix is called full rank. if **rank (A)=min(m,n)**. The matrix, $A$, is also full rank if all of its columns are linearly independent.
```python
from numpy.linalg import cond, matrix_rank
A = np.array([[1,1,0],
[0,1,0],
[1,0,1]])
print(f'Condition number: {cond(A)}')
print(f'Rank: {matrix_rank(A)}')
```
Condition number: 4.048917339522305
Rank: 3
if you append a new columns (or row) to a matrix, the rank will increase if the new columns add new information (that is, the new column cannot be explained by a linear combinantion of existing columns)
```python
y = np.array([[1], [2], [1]])
A_y = np.concatenate((A, y), axis = 1)
print(f'Augmented matrix: \n {A_y}')
print(f'Rank of augmented matrix: {matrix_rank(A_y)} ')
```
Augmented matrix:
[[1 1 0 1]
[0 1 0 2]
[1 0 1 1]]
Rank of augmented matrix: 3
### Linear Transformations
You can transform a vector by applying linear operations to it, for examples
- Sum with a scalar
- Multiplication with a scalar
- Sum with another vector
- Multiplication with another vector
- Multiplication with a matrix
The last operation is one of the most important operation in linear algebra and has many applications.
Example, **Vector Rotation**
```python
import numpy as np
import matplotlib.pyplot as plt
V = np.array([[3],[1]])
origin = np.array([[0], [0]]) # origin point
plt.quiver(0,0,*V, color=['r'], scale=21)
plt.plot([-1,1],[0,0], lw=0.5, color = 'k')
plt.plot([0,0],[-1,1], lw=0.5, color = 'k')
plt.show()
```
To rotate a vector by an angle $\theta$, you have to multiply it by a rotation matrix given by
$$
R = \begin{bmatrix}
\cos(\theta) & -\sin(\theta) \\
\sin(\theta) & \cos(\theta)
\end{bmatrix}
$$
```python
#Rotate the vector by 45 degress
theta = 45 * (np.pi/180)
Rot_Matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
rot_V = Rot_Matrix @ V
#(2x2) @ (2x1) -> (2x1)
plt.quiver(0,0,*V, color=['r'], scale=21)
plt.quiver(0,0,*rot_V, color=['tab:green'], scale=21)
plt.plot([-1,1],[0,0], lw=0.5, color = 'k')
plt.plot([0,0],[-1,1], lw=0.5, color = 'k')
plt.show()
```
## Exercise
Try it yourself, rotate the vector
$$
R = \begin{bmatrix}
5 \\
3
\end{bmatrix}
$$
by 50 degrees. Verify the result of the operation
```python
import numpy as np
def rot_matrix(theta):
return np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
a = np.array([[5,3]]).T
angle = 50 * (np.pi/180)
R = rot_matrix(angle)
a_rot = R.dot(a)
print(np.arccos(((a.T).dot(a_rot))/(np.linalg.norm(a)*np.linalg.norm(a_rot)))*(180/np.pi))
```
[[50.]]
Linear transformation are **inversible**, you can recover the original vector by multiplying by the inverse of the rotation matrix
```python
V = np.array([[3],[1]])
theta = 45 * (np.pi/180)
Rot_Matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
rot_V = Rot_Matrix @ V
rec_V = np.linalg.inv(Rot_Matrix)@rot_V
print(f'The original Vector is: \n {V}')
print(f'The recovered Vector is : \n {rec_V}')
```
The original Vector is:
[[3]
[1]]
The recovered Vector is :
[[3.]
[1.]]
### System of linear equations
A system of linear equations is a set of linear equations that share the same variables. Consider the following system of linear equations:
$$
\begin{eqnarray*}
\begin{array}{rcrcccccrcc}
a_{1,1} x_1 &+& a_{1,2} x_2 &+& {\ldots}& +& a_{1,n-1} x_{n-1} &+&a_{1,n} x_n &=& y_1,\\
a_{2,1} x_1 &+& a_{2,2} x_2 &+&{\ldots}& +& a_{2,n-1} x_{n-1} &+& a_{2,n} x_n &=& y_2, \\
&&&&{\ldots} &&{\ldots}&&&& \\
a_{m-1,1}x_1 &+& a_{m-1,2}x_2&+ &{\ldots}& +& a_{m-1,n-1} x_{n-1} &+& a_{m-1,n} x_n &=& y_{m-1},\\
a_{m,1} x_1 &+& a_{m,2}x_2 &+ &{\ldots}& +& a_{m,n-1} x_{n-1} &+& a_{m,n} x_n &=& y_{m}.
\end{array}
\end{eqnarray*}$$
The matrix form of a system of linear equations is $\textbf{A}x = y$ where $\textbf{A}$ is a m×n matrix, $y$ is a vector, and $x$ is an unknown vector:
$$
\begin{split}\begin{bmatrix}
a_{1,1} & a_{1,2} & ... & a_{1,n}\\
a_{2,1} & a_{2,2} & ... & a_{2,n}\\
... & ... & ... & ... \\
a_{m,1} & a_{m,2} & ... & a_{m,n}
\end{bmatrix}\left[\begin{array}{c} x_1 \\x_2 \\ ... \\x_n \end{array}\right] =
\left[\begin{array}{c} y_1 \\y_2 \\ ... \\y_m \end{array}\right]\end{split}
$$
For example, the system of linear equations
$$
\begin{eqnarray*}
4x + 3y - 5z &=& 2 \\
-2x - 4y + 5z &=& 5 \\
7x + 8y &=& -3 \\
x + 2z &=& 1 \\
9 + y - 6z &=& 6 \\
\end{eqnarray*}
$$
can be written as
$$
\begin{split}\begin{bmatrix}
4 & 3 & -5\\
-2 & -4 & 5\\
7 & 8 & 0\\
1 & 0 & 2\\
9 & 1 & -6
\end{bmatrix}\left[\begin{array}{c} x \\y \\z \end{array}\right] =
\left[\begin{array}{c} 2 \\5 \\-3 \\1 \\6 \end{array}\right]\end{split}
$$
#### Solutions to Systems of Linear Equations
The objective is to find a set of scalars ($x$, $y$, and $z$) that allow us to write the vector $y$ as a linear combination of the columns of $\textbf{A}$.
If the rank of the augmented matriz $[\textbf{A},y]$ is equal to the rank of $[\textbf{A}]$, this solution exist. Otherwise, the solution doesn't exists.
Moreover, if $\textbf{A}$ is not full rank (i.e, the $rank(\textbf{A})$ is smaller than the number of columns), then not all the columns of $\textbf{A}$ are independent and the system will have infinite number of solutions.
There are many methods that can be used to solve a system of linear equations. Most methods were designed to simplify manual calculations, however, we are mostly interested in computer based methods
1) Direct matrix inversion
In this method, we multiply by the inverse of the matrix $\textbf{A}$ in both sides of the equation
$$
\begin{align}
\textbf{A} x &= y \\
\textbf{A}^{-1}\textbf{A}x &= \textbf{A}^{-1} y \\
x &= \textbf{A}^{-1}y
\end{align}
$$
```python
A = np.array([[8, 8, 0],
[-2, -4, 5],
[4, 3, -5] ])
y = np.array([2, 5, -3])
x = np.linalg.inv(A)@y
print(x)
```
[ 0.75 -0.5 0.9 ]
```python
#Verify the results
print(f'Ax = {A@x}')
```
Ax = [ 2. 5. -3.]
That methods works fine unless the matrix $\textbf{A}$ is close to be singular. In that case, we can use other methods that avoid finding the inverse of the matrix. The most common method is called $LU$ decomposition, , where a matrix $\textbf{A}$ is expressed as
$$
\textbf{A} = \textbf{L}\textbf{U}
$$
with $\textbf{L}$ a lower diagonal matrix and $\textbf{U}$ a upper diagonal matrix
$$
\begin{split}Ax = y \rightarrow LUx=y\rightarrow
\begin{bmatrix}
l_{1,1} & 0 & 0 & 0\\
l_{2,1} & l_{2,2} & 0 & 0\\
l_{3,1} & l_{3,2} & l_{3,3} & 0 \\
l_{4,1} & l_{4,2} & l_{4,3} & l_{4,4}
\end{bmatrix}
\begin{bmatrix}
u_{1,1} & u_{1,2} & u_{1,3} & u_{1,4}\\
0 & u_{2,2} & u_{2,3} & u_{2,4}\\
0 & 0 & u_{3,3} & u_{3,4} \\
0 & 0 & 0 & u_{4,4}
\end{bmatrix}\left[\begin{array}{c} x_1 \\x_2 \\ x_3 \\x_4 \end{array}\right] =
\left[\begin{array}{c} y_1 \\y_2 \\ y_3 \\y_4 \end{array}\right]\end{split}
$$
we can now split this problem into two simpler problems
$$
\begin{split}
\begin{bmatrix}
u_{1,1} & u_{1,2} & u_{1,3} & u_{1,4}\\
0 & u_{2,2} & u_{2,3} & u_{2,4}\\
0 & 0 & u_{3,3} & u_{3,4} \\
0 & 0 & 0 & u_{4,4}
\end{bmatrix}\left[\begin{array}{c} x_1 \\x_2 \\ x_3 \\x_4 \end{array}\right] =
\left[\begin{array}{c} m_1 \\m_2 \\ m_3 \\m_4 \end{array}\right]\end{split}
$$
and
$$
\begin{split}
\begin{bmatrix}
l_{1,1} & 0 & 0 & 0\\
l_{2,1} & l_{2,2} & 0 & 0\\
l_{3,1} & l_{3,2} & l_{3,3} & 0 \\
l_{4,1} & l_{4,2} & l_{4,3} & l_{4,4}
\end{bmatrix}
\left[\begin{array}{c} m_1 \\m_2 \\ m_3 \\m_4 \end{array}\right] =
\left[\begin{array}{c} y_1 \\y_2 \\ y_3 \\y_4 \end{array}\right]\end{split}
$$
Note that if $\textbf{A}$ is full rank, then the matrices $\textbf{L}$ and $\textbf{U}$ exist, and its inverse is easy to find and the determinant is equal to the mutiplication of the elements in the diagonal
```python
from scipy.linalg import lu #note that we are not using numpy
A = np.array([[8, 8, 0],
[-2, -4, 5],
[4, 3, -5] ])
y = np.array([2, 5, -3])
P,L,U = lu(A)
print(L)
print(U)
```
[[ 1. 0. 0. ]
[-0.25 1. 0. ]
[ 0.5 0.5 1. ]]
[[ 8. 8. 0. ]
[ 0. -2. 5. ]
[ 0. 0. -7.5]]
```python
#compute m using L and y
m = np.linalg.inv(L)@y
```
```python
#compute x using U and m
x = np.linalg.inv(U)@m
print(x)
```
[ 0.75 -0.5 0.9 ]
```python
#Verify the results
print(f'Ax = {A@x}')
```
Ax = [ 2. 5. -3.]
```python
#numpy does the same in its own function to solve linear system
from numpy.linalg import solve
x = solve(A,y)
print(x)
```
[ 0.75 -0.5 0.9 ]
| b70211afe93fde3e5139a2354ad681413b21e932 | 74,860 | ipynb | Jupyter Notebook | Linear_Algebra.ipynb | dguari1/BME3240_2021 | b069d6e6336f44dcb8d3ef79bbcf5410cde68dcc | [
"MIT"
] | 4 | 2021-08-28T03:42:39.000Z | 2021-11-04T17:14:29.000Z | Linear_Algebra.ipynb | dguari1/BME3240_2021 | b069d6e6336f44dcb8d3ef79bbcf5410cde68dcc | [
"MIT"
] | null | null | null | Linear_Algebra.ipynb | dguari1/BME3240_2021 | b069d6e6336f44dcb8d3ef79bbcf5410cde68dcc | [
"MIT"
] | null | null | null | 70.622642 | 18,056 | 0.768875 | true | 6,571 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944177 | 0.952574 | 0.899398 | __label__eng_Latn | 0.970079 | 0.927938 |
Problem: Heavy hitter
Reference:
- Privacy at Scale: Local Differential Privacy in Practice
```python
%load_ext autoreload
%autoreload 2
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
```
Implementation of Random Response Protocol $\pi$ (for user with value $v$) as
\begin{equation}
\forall_{y\in D} Pr[\pi(v)=y]=
\begin{cases}
\frac{e^\epsilon}{e^\epsilon+|D|-1}, & \text{if $y=v$}.\\
\frac{1}{e^\epsilon+|D|-1}, & \text{if $y\neq v$}.\\
\end{cases}
\end{equation}
Intuition: We sample the correct answer (response $y$ equals truth value $v$) with higher probability. Therefore, if $\epsilon$ is higher, probability of returning the correct answer is higher which means the privacy is lower. When $\epsilon = 0$, we have the highest privacy protection.
```python
from heavyhitter import heavyhitter as hh
import numpy as np
class ROUser(hh.User):
def set_epsilon(self, epsilon=0):
self.eps = epsilon
pk = [np.exp(self.eps) if np.array_equal(self.val, np.asarray([x])) else 1
for x in self.rv.xk]
pk /= np.exp(self.eps)+len(self.rv.xk)-1
self.response_rv = stats.rv_discrete(name='response', values=(self.rv.xk, pk))
def response(self, query):
assert hasattr(self, 'response_rv'), "response requires setting privacy budget epsilon"
return self.response_rv.rvs(size=1) == query
class ROAggregator(hh.Aggregator):
def aggregate(self, responses):
return np.mean(responses)
```
```python
from scipy import stats
import random
# create the random variable
domain_size = 5
domain = np.arange(domain_size)
prob = np.asarray([random.random() for _ in range(5)])
prob /= sum(prob)
rv = stats.rv_discrete(name='rv', values=(domain, prob))
# plot prob
plt.figure()
fig, ax = plt.subplots(1, 1)
ax.plot(domain, rv.pmf(domain), 'ro', ms=12, mec='r')
ax.vlines(domain, 0, rv.pmf(domain), colors='r', lw=4)
ax.set_xticks(domain)
plt.show()
# sanity check
from collections import Counter
c = Counter([sample for sample in rv.rvs(size=1000)])
print "Generating 1000 samples\n", list(c.items())
```
Frequency query with local Differential Privacy. The untrusted aggregator is trying to estimate the underlying true distribtion so it needs to query the users to report their values.
There are two sources of randomness
- Each user is a sample of the true distribution.
- To protect their privacy, each user reports a random response.
Note that the randomness of the response in the equation depends on the domain size but not the number of users.
```python
users = [ROUser(index=i, rv=rv) for i in range(1000)]
aggregator = ROAggregator(index=0)
aggregator.subscribe(users)
```
```python
# Frequency query for elements
for eps in [0.0, 5.0]:
for user in users:
user.set_epsilon(epsilon=eps)
print "==== %s users with epsilon=%s ===="%(len(users), users[0].eps)
for x, p in zip(rv.xk, rv.pk):
frequency = aggregator.aggregate(aggregator.query(np.asarray([x])))
print "ele:%s, truth:%0.4f, estimate:%0.4f, diff:%0.4f"%(x,p,frequency, abs(p-frequency))
```
==== 1000 users with epsilon=0.0 ====
ele:0, truth:0.2380, estimate:0.1990, diff:0.0390
ele:1, truth:0.0092, estimate:0.1940, diff:0.1848
ele:2, truth:0.1772, estimate:0.1950, diff:0.0178
ele:3, truth:0.3103, estimate:0.2040, diff:0.1063
ele:4, truth:0.2653, estimate:0.2130, diff:0.0523
==== 1000 users with epsilon=5.0 ====
ele:0, truth:0.2380, estimate:0.2320, diff:0.0060
ele:1, truth:0.0092, estimate:0.0130, diff:0.0038
ele:2, truth:0.1772, estimate:0.1810, diff:0.0038
ele:3, truth:0.3103, estimate:0.2720, diff:0.0383
ele:4, truth:0.2653, estimate:0.3000, diff:0.0347
| 8253beaf4767fb8cc934a0f47a1763fce1a025c5 | 14,932 | ipynb | Jupyter Notebook | 012119_random_response.ipynb | kinsumliu/notes | 3601c50a11966bed84c5d792778f3b103ba801d2 | [
"MIT"
] | null | null | null | 012119_random_response.ipynb | kinsumliu/notes | 3601c50a11966bed84c5d792778f3b103ba801d2 | [
"MIT"
] | null | null | null | 012119_random_response.ipynb | kinsumliu/notes | 3601c50a11966bed84c5d792778f3b103ba801d2 | [
"MIT"
] | null | null | null | 69.775701 | 8,562 | 0.783016 | true | 1,134 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.863392 | 0.781068 | __label__eng_Latn | 0.755536 | 0.653014 |
<!-- dom:TITLE: Week 42 Solving differential equations and Convolutional (CNN) -->
# Week 42 Solving differential equations and Convolutional (CNN)
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
**Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: **Oct 22, 2021**
Copyright 1999-2021, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
## Plan for week 42
* Thursday: Solving differential equations with Neural Networks and start Convolutional Neural Networks and examples.
* [Video of Lecture](https://www.uio.no/studier/emner/matnat/fys/FYS-STK3155/h21/forelesningsvideoer/LectureOctober21.mp4?vrtx=view-as-webpage)
* Friday: Convolutional Neural Networks.
* [Video of Lecture](https://www.uio.no/studier/emner/matnat/fys/FYS-STK4155/h21/forelesningsvideoer/LectureOctober22.mp4?vrtx=view-as-webpage)
* Reading recommendations:
a. See lecture notes for week 42 at <https://compphysics.github.io/MachineLearning/doc/web/course.html.>
b. For neural networks we recommend Goodfellow et al chapters 6 and 7. For CNNs, see Goodfellow et al chapter 9. See also chapter 11 and 12 on practicalities and applications
c. Reading suggestions for implementation of CNNs: [Aurelien Geron's chapter 13](https://github.com/CompPhysics/MachineLearning/blob/master/doc/Textbooks/TensorflowML.pdf).
**Excellent lectures on CNNs.**
* [Video on Convolutional Neural Networks from MIT](https://www.youtube.com/watch?v=iaSUYvmCekI&ab_channel=AlexanderAmini)
* [Video on CNNs from Stanford](https://www.youtube.com/watch?v=bNb2fEVKeEo&list=PLC1qU-LWwrF64f4QKQT-Vg5Wr4qEE1Zxk&index=6&ab_channel=StanfordUniversitySchoolofEngineering)
**And Lecture material on CNNs.**
* [Lectures from IN5400 spring 2019](https://www.uio.no/studier/emner/matnat/ifi/IN5400/v19/material/week5/in5400_2019_week5_convolutional_nerual_networks.pdf)
* [Lectures from IN5400 spring 2021](https://www.uio.no/studier/emner/matnat/ifi/IN5400/v21/lecture-slides/in5400_2021_w5_lecture_convolutions.pdf)
* [See also Michael Nielsen's Lectures](http://neuralnetworksanddeeplearning.com/chap6.html)
## Using Automatic differentiation
In our discussions of ordinary differential equations
we will also study the usage of [Autograd](https://www.youtube.com/watch?v=fRf4l5qaX1M&ab_channel=AlexSmola) in computing gradients for deep learning. For the documentation of Autograd and examples see the lectures slides from [week 40](https://compphysics.github.io/MachineLearning/doc/pub/week40/html/week40.html) and the [Autograd documentation](https://github.com/HIPS/autograd).
## Solving ODEs with Deep Learning
The Universal Approximation Theorem states that a neural network can
approximate any function at a single hidden layer along with one input
and output layer to any given precision.
**Book on solving differential equations with ML methods.**
[An Introduction to Neural Network Methods for Differential Equations](https://www.springer.com/gp/book/9789401798150), by Yadav and Kumar.
## Ordinary Differential Equations
An ordinary differential equation (ODE) is an equation involving functions having one variable.
In general, an ordinary differential equation looks like
<!-- Equation labels as ordinary links -->
<div id="ode"></div>
$$
\begin{equation} \label{ode} \tag{1}
f\left(x, \, g(x), \, g'(x), \, g''(x), \, \dots \, , \, g^{(n)}(x)\right) = 0
\end{equation}
$$
where $g(x)$ is the function to find, and $g^{(n)}(x)$ is the $n$-th derivative of $g(x)$.
The $f\left(x, g(x), g'(x), g''(x), \, \dots \, , g^{(n)}(x)\right)$ is just a way to write that there is an expression involving $x$ and $g(x), \ g'(x), \ g''(x), \, \dots \, , \text{ and } g^{(n)}(x)$ on the left side of the equality sign in ([1](#ode)).
The highest order of derivative, that is the value of $n$, determines to the order of the equation.
The equation is referred to as a $n$-th order ODE.
Along with ([1](#ode)), some additional conditions of the function $g(x)$ are typically given
for the solution to be unique.
## The trial solution
Let the trial solution $g_t(x)$ be
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
g_t(x) = h_1(x) + h_2(x,N(x,P))
\label{_auto1} \tag{2}
\end{equation}
$$
where $h_1(x)$ is a function that makes $g_t(x)$ satisfy a given set
of conditions, $N(x,P)$ a neural network with weights and biases
described by $P$ and $h_2(x, N(x,P))$ some expression involving the
neural network. The role of the function $h_2(x, N(x,P))$, is to
ensure that the output from $N(x,P)$ is zero when $g_t(x)$ is
evaluated at the values of $x$ where the given conditions must be
satisfied. The function $h_1(x)$ should alone make $g_t(x)$ satisfy
the conditions.
But what about the network $N(x,P)$?
As described previously, an optimization method could be used to minimize the parameters of a neural network, that being its weights and biases, through backward propagation.
## Minimization process
For the minimization to be defined, we need to have a cost function at hand to minimize.
It is given that $f\left(x, \, g(x), \, g'(x), \, g''(x), \, \dots \, , \, g^{(n)}(x)\right)$ should be equal to zero in ([1](#ode)).
We can choose to consider the mean squared error as the cost function for an input $x$.
Since we are looking at one input, the cost function is just $f$ squared.
The cost function $c\left(x, P \right)$ can therefore be expressed as
$$
C\left(x, P\right) = \big(f\left(x, \, g(x), \, g'(x), \, g''(x), \, \dots \, , \, g^{(n)}(x)\right)\big)^2
$$
If $N$ inputs are given as a vector $\boldsymbol{x}$ with elements $x_i$ for $i = 1,\dots,N$,
the cost function becomes
<!-- Equation labels as ordinary links -->
<div id="cost"></div>
$$
\begin{equation} \label{cost} \tag{3}
C\left(\boldsymbol{x}, P\right) = \frac{1}{N} \sum_{i=1}^N \big(f\left(x_i, \, g(x_i), \, g'(x_i), \, g''(x_i), \, \dots \, , \, g^{(n)}(x_i)\right)\big)^2
\end{equation}
$$
The neural net should then find the parameters $P$ that minimizes the cost function in
([3](#cost)) for a set of $N$ training samples $x_i$.
## Minimizing the cost function using gradient descent and automatic differentiation
To perform the minimization using gradient descent, the gradient of $C\left(\boldsymbol{x}, P\right)$ is needed.
It might happen so that finding an analytical expression of the gradient of $C(\boldsymbol{x}, P)$ from ([3](#cost)) gets too messy, depending on which cost function one desires to use.
Luckily, there exists libraries that makes the job for us through automatic differentiation.
Automatic differentiation is a method of finding the derivatives numerically with very high precision.
## Example: Exponential decay
An exponential decay of a quantity $g(x)$ is described by the equation
<!-- Equation labels as ordinary links -->
<div id="solve_expdec"></div>
$$
\begin{equation} \label{solve_expdec} \tag{4}
g'(x) = -\gamma g(x)
\end{equation}
$$
with $g(0) = g_0$ for some chosen initial value $g_0$.
The analytical solution of ([4](#solve_expdec)) is
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
g(x) = g_0 \exp\left(-\gamma x\right)
\label{_auto2} \tag{5}
\end{equation}
$$
Having an analytical solution at hand, it is possible to use it to compare how well a neural network finds a solution of ([4](#solve_expdec)).
## The function to solve for
The program will use a neural network to solve
<!-- Equation labels as ordinary links -->
<div id="solveode"></div>
$$
\begin{equation} \label{solveode} \tag{6}
g'(x) = -\gamma g(x)
\end{equation}
$$
where $g(0) = g_0$ with $\gamma$ and $g_0$ being some chosen values.
In this example, $\gamma = 2$ and $g_0 = 10$.
## The trial solution
To begin with, a trial solution $g_t(t)$ must be chosen. A general trial solution for ordinary differential equations could be
$$
g_t(x, P) = h_1(x) + h_2(x, N(x, P))
$$
with $h_1(x)$ ensuring that $g_t(x)$ satisfies some conditions and $h_2(x,N(x, P))$ an expression involving $x$ and the output from the neural network $N(x,P)$ with $P $ being the collection of the weights and biases for each layer. For now, it is assumed that the network consists of one input layer, one hidden layer, and one output layer.
## Setup of Network
In this network, there are no weights and bias at the input layer, so $P = \{ P_{\text{hidden}}, P_{\text{output}} \}$.
If there are $N_{\text{hidden} }$ neurons in the hidden layer, then $P_{\text{hidden}}$ is a $N_{\text{hidden} } \times (1 + N_{\text{input}})$ matrix, given that there are $N_{\text{input}}$ neurons in the input layer.
The first column in $P_{\text{hidden} }$ represents the bias for each neuron in the hidden layer and the second column represents the weights for each neuron in the hidden layer from the input layer.
If there are $N_{\text{output} }$ neurons in the output layer, then $P_{\text{output}} $ is a $N_{\text{output} } \times (1 + N_{\text{hidden} })$ matrix.
Its first column represents the bias of each neuron and the remaining columns represents the weights to each neuron.
It is given that $g(0) = g_0$. The trial solution must fulfill this condition to be a proper solution of ([6](#solveode)). A possible way to ensure that $g_t(0, P) = g_0$, is to let $F(N(x,P)) = x \cdot N(x,P)$ and $A(x) = g_0$. This gives the following trial solution:
<!-- Equation labels as ordinary links -->
<div id="trial"></div>
$$
\begin{equation} \label{trial} \tag{7}
g_t(x, P) = g_0 + x \cdot N(x, P)
\end{equation}
$$
## Reformulating the problem
We wish that our neural network manages to minimize a given cost function.
A reformulation of out equation, ([6](#solveode)), must therefore be done,
such that it describes the problem a neural network can solve for.
The neural network must find the set of weights and biases $P$ such that the trial solution in ([7](#trial)) satisfies ([6](#solveode)).
The trial solution
$$
g_t(x, P) = g_0 + x \cdot N(x, P)
$$
has been chosen such that it already solves the condition $g(0) = g_0$. What remains, is to find $P$ such that
<!-- Equation labels as ordinary links -->
<div id="nnmin"></div>
$$
\begin{equation} \label{nnmin} \tag{8}
g_t'(x, P) = - \gamma g_t(x, P)
\end{equation}
$$
is fulfilled as *best as possible*.
## More technicalities
The left hand side and right hand side of ([8](#nnmin)) must be computed separately, and then the neural network must choose weights and biases, contained in $P$, such that the sides are equal as best as possible.
This means that the absolute or squared difference between the sides must be as close to zero, ideally equal to zero.
In this case, the difference squared shows to be an appropriate measurement of how erroneous the trial solution is with respect to $P$ of the neural network.
This gives the following cost function our neural network must solve for:
$$
\min_{P}\Big\{ \big(g_t'(x, P) - ( -\gamma g_t(x, P) \big)^2 \Big\}
$$
(the notation $\min_{P}\{ f(x, P) \}$ means that we desire to find $P$ that yields the minimum of $f(x, P)$)
or, in terms of weights and biases for the hidden and output layer in our network:
$$
\min_{P_{\text{hidden} }, \ P_{\text{output} }}\Big\{ \big(g_t'(x, \{ P_{\text{hidden} }, P_{\text{output} }\}) - ( -\gamma g_t(x, \{ P_{\text{hidden} }, P_{\text{output} }\}) \big)^2 \Big\}
$$
for an input value $x$.
## More details
If the neural network evaluates $g_t(x, P)$ at more values for $x$, say $N$ values $x_i$ for $i = 1, \dots, N$, then the *total* error to minimize becomes
<!-- Equation labels as ordinary links -->
<div id="min"></div>
$$
\begin{equation} \label{min} \tag{9}
\min_{P}\Big\{\frac{1}{N} \sum_{i=1}^N \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P) \big)^2 \Big\}
\end{equation}
$$
Letting $\boldsymbol{x}$ be a vector with elements $x_i$ and $C(\boldsymbol{x}, P) = \frac{1}{N} \sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P) \big)^2$ denote the cost function, the minimization problem that our network must solve, becomes
$$
\min_{P} C(\boldsymbol{x}, P)
$$
In terms of $P_{\text{hidden} }$ and $P_{\text{output} }$, this could also be expressed as
$$
\min_{P_{\text{hidden} }, \ P_{\text{output} }} C(\boldsymbol{x}, \{P_{\text{hidden} }, P_{\text{output} }\})
$$
## A possible implementation of a neural network
For simplicity, it is assumed that the input is an array $\boldsymbol{x} = (x_1, \dots, x_N)$ with $N$ elements. It is at these points the neural network should find $P$ such that it fulfills ([9](#min)).
First, the neural network must feed forward the inputs.
This means that $\boldsymbol{x}s$ must be passed through an input layer, a hidden layer and a output layer. The input layer in this case, does not need to process the data any further.
The input layer will consist of $N_{\text{input} }$ neurons, passing its element to each neuron in the hidden layer. The number of neurons in the hidden layer will be $N_{\text{hidden} }$.
## Technicalities
For the $i$-th in the hidden layer with weight $w_i^{\text{hidden} }$ and bias $b_i^{\text{hidden} }$, the weighting from the $j$-th neuron at the input layer is:
$$
\begin{aligned}
z_{i,j}^{\text{hidden}} &= b_i^{\text{hidden}} + w_i^{\text{hidden}}x_j \\
&=
\begin{pmatrix}
b_i^{\text{hidden}} & w_i^{\text{hidden}}
\end{pmatrix}
\begin{pmatrix}
1 \\
x_j
\end{pmatrix}
\end{aligned}
$$
## Final technicalities I
The result after weighting the inputs at the $i$-th hidden neuron can be written as a vector:
$$
\begin{aligned}
\boldsymbol{z}_{i}^{\text{hidden}} &= \Big( b_i^{\text{hidden}} + w_i^{\text{hidden}}x_1 , \ b_i^{\text{hidden}} + w_i^{\text{hidden}} x_2, \ \dots \, , \ b_i^{\text{hidden}} + w_i^{\text{hidden}} x_N\Big) \\
&=
\begin{pmatrix}
b_i^{\text{hidden}} & w_i^{\text{hidden}}
\end{pmatrix}
\begin{pmatrix}
1 & 1 & \dots & 1 \\
x_1 & x_2 & \dots & x_N
\end{pmatrix} \\
&= \boldsymbol{p}_{i, \text{hidden}}^T X
\end{aligned}
$$
## Final technicalities II
The vector $\boldsymbol{p}_{i, \text{hidden}}^T$ constitutes each row in $P_{\text{hidden} }$, which contains the weights for the neural network to minimize according to ([9](#min)).
After having found $\boldsymbol{z}_{i}^{\text{hidden}} $ for every $i$-th neuron within the hidden layer, the vector will be sent to an activation function $a_i(\boldsymbol{z})$.
In this example, the sigmoid function has been chosen to be the activation function for each hidden neuron:
$$
f(z) = \frac{1}{1 + \exp{(-z)}}
$$
It is possible to use other activations functions for the hidden layer also.
The output $\boldsymbol{x}_i^{\text{hidden}}$ from each $i$-th hidden neuron is:
$$
\boldsymbol{x}_i^{\text{hidden} } = f\big( \boldsymbol{z}_{i}^{\text{hidden}} \big)
$$
The outputs $\boldsymbol{x}_i^{\text{hidden} } $ are then sent to the output layer.
The output layer consists of one neuron in this case, and combines the
output from each of the neurons in the hidden layers. The output layer
combines the results from the hidden layer using some weights $w_i^{\text{output}}$
and biases $b_i^{\text{output}}$. In this case,
it is assumes that the number of neurons in the output layer is one.
## Final technicalities III
The procedure of weighting the output neuron $j$ in the hidden layer to the $i$-th neuron in the output layer is similar as for the hidden layer described previously.
$$
\begin{aligned}
z_{1,j}^{\text{output}} & =
\begin{pmatrix}
b_1^{\text{output}} & \boldsymbol{w}_1^{\text{output}}
\end{pmatrix}
\begin{pmatrix}
1 \\
\boldsymbol{x}_j^{\text{hidden}}
\end{pmatrix}
\end{aligned}
$$
## Final technicalities IV
Expressing $z_{1,j}^{\text{output}}$ as a vector gives the following way of weighting the inputs from the hidden layer:
$$
\boldsymbol{z}_{1}^{\text{output}} =
\begin{pmatrix}
b_1^{\text{output}} & \boldsymbol{w}_1^{\text{output}}
\end{pmatrix}
\begin{pmatrix}
1 & 1 & \dots & 1 \\
\boldsymbol{x}_1^{\text{hidden}} & \boldsymbol{x}_2^{\text{hidden}} & \dots & \boldsymbol{x}_N^{\text{hidden}}
\end{pmatrix}
$$
In this case we seek a continuous range of values since we are approximating a function. This means that after computing $\boldsymbol{z}_{1}^{\text{output}}$ the neural network has finished its feed forward step, and $\boldsymbol{z}_{1}^{\text{output}}$ is the final output of the network.
## Back propagation
The next step is to decide how the parameters should be changed such that they minimize the cost function.
The chosen cost function for this problem is
$$
C(\boldsymbol{x}, P) = \frac{1}{N} \sum_i \big(g_t'(x_i, P) - ( -\gamma g_t(x_i, P) \big)^2
$$
In order to minimize the cost function, an optimization method must be chosen.
Here, gradient descent with a constant step size has been chosen.
## Gradient descent
The idea of the gradient descent algorithm is to update parameters in
a direction where the cost function decreases goes to a minimum.
In general, the update of some parameters $\boldsymbol{\omega}$ given a cost
function defined by some weights $\boldsymbol{\omega}$, $C(\boldsymbol{x},
\boldsymbol{\omega})$, goes as follows:
$$
\boldsymbol{\omega}_{\text{new} } = \boldsymbol{\omega} - \lambda \nabla_{\boldsymbol{\omega}} C(\boldsymbol{x}, \boldsymbol{\omega})
$$
for a number of iterations or until $ \big|\big| \boldsymbol{\omega}_{\text{new} } - \boldsymbol{\omega} \big|\big|$ becomes smaller than some given tolerance.
The value of $\lambda$ decides how large steps the algorithm must take
in the direction of $ \nabla_{\boldsymbol{\omega}} C(\boldsymbol{x}, \boldsymbol{\omega})$.
The notation $\nabla_{\boldsymbol{\omega}}$ express the gradient with respect
to the elements in $\boldsymbol{\omega}$.
In our case, we have to minimize the cost function $C(\boldsymbol{x}, P)$ with
respect to the two sets of weights and biases, that is for the hidden
layer $P_{\text{hidden} }$ and for the output layer $P_{\text{output}
}$ .
This means that $P_{\text{hidden} }$ and $P_{\text{output} }$ is updated by
$$
\begin{aligned}
P_{\text{hidden},\text{new}} &= P_{\text{hidden}} - \lambda \nabla_{P_{\text{hidden}}} C(\boldsymbol{x}, P) \\
P_{\text{output},\text{new}} &= P_{\text{output}} - \lambda \nabla_{P_{\text{output}}} C(\boldsymbol{x}, P)
\end{aligned}
$$
## The code for solving the ODE
```python
%matplotlib inline
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt
def sigmoid(z):
return 1/(1 + np.exp(-z))
# Assuming one input, hidden, and output layer
def neural_network(params, x):
# Find the weights (including and biases) for the hidden and output layer.
# Assume that params is a list of parameters for each layer.
# The biases are the first element for each array in params,
# and the weights are the remaning elements in each array in params.
w_hidden = params[0]
w_output = params[1]
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
## Hidden layer:
# Add a row of ones to include bias
x_input = np.concatenate((np.ones((1,num_values)), x_input ), axis = 0)
z_hidden = np.matmul(w_hidden, x_input)
x_hidden = sigmoid(z_hidden)
## Output layer:
# Include bias:
x_hidden = np.concatenate((np.ones((1,num_values)), x_hidden ), axis = 0)
z_output = np.matmul(w_output, x_hidden)
x_output = z_output
return x_output
# The trial solution using the deep neural network:
def g_trial(x,params, g0 = 10):
return g0 + x*neural_network(params,x)
# The right side of the ODE:
def g(x, g_trial, gamma = 2):
return -gamma*g_trial
# The cost function:
def cost_function(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial(x,P)
# Find the derivative w.r.t x of the neural network
d_net_out = elementwise_grad(neural_network,1)(P,x)
# Find the derivative w.r.t x of the trial function
d_g_t = elementwise_grad(g_trial,0)(x,P)
# The right side of the ODE
func = g(x, g_t)
err_sqr = (d_g_t - func)**2
cost_sum = np.sum(err_sqr)
return cost_sum / np.size(err_sqr)
# Solve the exponential decay ODE using neural network with one input, hidden, and output layer
def solve_ode_neural_network(x, num_neurons_hidden, num_iter, lmb):
## Set up initial weights and biases
# For the hidden layer
p0 = npr.randn(num_neurons_hidden, 2 )
# For the output layer
p1 = npr.randn(1, num_neurons_hidden + 1 ) # +1 since bias is included
P = [p0, p1]
print('Initial cost: %g'%cost_function(P, x))
## Start finding the optimal weights using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_grad = grad(cost_function,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of two arrays;
# one for the gradient w.r.t P_hidden and
# one for the gradient w.r.t P_output
cost_grad = cost_function_grad(P, x)
P[0] = P[0] - lmb * cost_grad[0]
P[1] = P[1] - lmb * cost_grad[1]
print('Final cost: %g'%cost_function(P, x))
return P
def g_analytic(x, gamma = 2, g0 = 10):
return g0*np.exp(-gamma*x)
# Solve the given problem
if __name__ == '__main__':
# Set seed such that the weight are initialized
# with same weights and biases for every run.
npr.seed(15)
## Decide the vales of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)
## Set up the initial parameters
num_hidden_neurons = 10
num_iter = 10000
lmb = 0.001
# Use the network
P = solve_ode_neural_network(x, num_hidden_neurons, num_iter, lmb)
# Print the deviation from the trial solution and true solution
res = g_trial(x,P)
res_analytical = g_analytic(x)
print('Max absolute difference: %g'%np.max(np.abs(res - res_analytical)))
# Plot the results
plt.figure(figsize=(10,10))
plt.title('Performance of neural network solving an ODE compared to the analytical solution')
plt.plot(x, res_analytical)
plt.plot(x, res[0,:])
plt.legend(['analytical','nn'])
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()
```
## The network with one input layer, specified number of hidden layers, and one output layer
It is also possible to extend the construction of our network into a more general one, allowing the network to contain more than one hidden layers.
The number of neurons within each hidden layer are given as a list of integers in the program below.
```python
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt
def sigmoid(z):
return 1/(1 + np.exp(-z))
# The neural network with one input layer and one output layer,
# but with number of hidden layers specified by the user.
def deep_neural_network(deep_params, x):
# N_hidden is the number of hidden layers
N_hidden = np.size(deep_params) - 1 # -1 since params consists of
# parameters to all the hidden
# layers AND the output layer.
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
# Due to multiple hidden layers, define a variable referencing to the
# output of the previous layer:
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output
# The trial solution using the deep neural network:
def g_trial_deep(x,params, g0 = 10):
return g0 + x*deep_neural_network(params, x)
# The right side of the ODE:
def g(x, g_trial, gamma = 2):
return -gamma*g_trial
# The same cost function as before, but calls deep_neural_network instead.
def cost_function_deep(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial_deep(x,P)
# Find the derivative w.r.t x of the neural network
d_net_out = elementwise_grad(deep_neural_network,1)(P,x)
# Find the derivative w.r.t x of the trial function
d_g_t = elementwise_grad(g_trial_deep,0)(x,P)
# The right side of the ODE
func = g(x, g_t)
err_sqr = (d_g_t - func)**2
cost_sum = np.sum(err_sqr)
return cost_sum / np.size(err_sqr)
# Solve the exponential decay ODE using neural network with one input and one output layer,
# but with specified number of hidden layers from the user.
def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
# num_hidden_neurons is now a list of number of neurons within each hidden layer
# The number of elements in the list num_hidden_neurons thus represents
# the number of hidden layers.
# Find the number of hidden layers:
N_hidden = np.size(num_neurons)
## Set up initial weights and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 )
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: %g'%cost_function_deep(P, x))
## Start finding the optimal weights using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_deep_grad = grad(cost_function_deep,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases
# in the hidden layers and output layers evaluated at x.
cost_deep_grad = cost_function_deep_grad(P, x)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_deep_grad[l]
print('Final cost: %g'%cost_function_deep(P, x))
return P
def g_analytic(x, gamma = 2, g0 = 10):
return g0*np.exp(-gamma*x)
# Solve the given problem
if __name__ == '__main__':
npr.seed(15)
## Decide the vales of arguments to the function to solve
N = 10
x = np.linspace(0, 1, N)
## Set up the initial parameters
num_hidden_neurons = np.array([10,10])
num_iter = 10000
lmb = 0.001
P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)
res = g_trial_deep(x,P)
res_analytical = g_analytic(x)
plt.figure(figsize=(10,10))
plt.title('Performance of a deep neural network solving an ODE compared to the analytical solution')
plt.plot(x, res_analytical)
plt.plot(x, res[0,:])
plt.legend(['analytical','dnn'])
plt.ylabel('g(x)')
plt.show()
```
## Example: Population growth
A logistic model of population growth assumes that a population converges toward an equilibrium.
The population growth can be modeled by
<!-- Equation labels as ordinary links -->
<div id="log"></div>
$$
\begin{equation} \label{log} \tag{10}
g'(t) = \alpha g(t)(A - g(t))
\end{equation}
$$
where $g(t)$ is the population density at time $t$, $\alpha > 0$ the growth rate and $A > 0$ is the maximum population number in the environment.
Also, at $t = 0$ the population has the size $g(0) = g_0$, where $g_0$ is some chosen constant.
In this example, similar network as for the exponential decay using Autograd has been used to solve the equation. However, as the implementation might suffer from e.g numerical instability
and high execution time (this might be more apparent in the examples solving PDEs),
using a library like TensorFlow is recommended.
Here, we stay with a more simple approach and implement for comparison, the simple forward Euler method.
## Setting up the problem
Here, we will model a population $g(t)$ in an environment having carrying capacity $A$.
The population follows the model
<!-- Equation labels as ordinary links -->
<div id="solveode_population"></div>
$$
\begin{equation} \label{solveode_population} \tag{11}
g'(t) = \alpha g(t)(A - g(t))
\end{equation}
$$
where $g(0) = g_0$.
In this example, we let $\alpha = 2$, $A = 1$, and $g_0 = 1.2$.
## The trial solution
We will get a slightly different trial solution, as the boundary conditions are different
compared to the case for exponential decay.
A possible trial solution satisfying the condition $g(0) = g_0$ could be
$$
h_1(t) = g_0 + t \cdot N(t,P)
$$
with $N(t,P)$ being the output from the neural network with weights and biases for each layer collected in the set $P$.
The analytical solution is
$$
g(t) = \frac{Ag_0}{g_0 + (A - g_0)\exp(-\alpha A t)}
$$
## The program using Autograd
The network will be the similar as for the exponential decay example, but with some small modifications for our problem.
```python
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt
def sigmoid(z):
return 1/(1 + np.exp(-z))
# Function to get the parameters.
# Done such that one can easily change the paramaters after one's liking.
def get_parameters():
alpha = 2
A = 1
g0 = 1.2
return alpha, A, g0
def deep_neural_network(P, x):
# N_hidden is the number of hidden layers
N_hidden = np.size(P) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
# Due to multiple hidden layers, define a variable referencing to the
# output of the previous layer:
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = P[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = P[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output
def cost_function_deep(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial_deep(x,P)
# Find the derivative w.r.t x of the trial function
d_g_t = elementwise_grad(g_trial_deep,0)(x,P)
# The right side of the ODE
func = f(x, g_t)
err_sqr = (d_g_t - func)**2
cost_sum = np.sum(err_sqr)
return cost_sum / np.size(err_sqr)
# The right side of the ODE:
def f(x, g_trial):
alpha,A, g0 = get_parameters()
return alpha*g_trial*(A - g_trial)
# The trial solution using the deep neural network:
def g_trial_deep(x, params):
alpha,A, g0 = get_parameters()
return g0 + x*deep_neural_network(params,x)
# The analytical solution:
def g_analytic(t):
alpha,A, g0 = get_parameters()
return A*g0/(g0 + (A - g0)*np.exp(-alpha*A*t))
def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
# num_hidden_neurons is now a list of number of neurons within each hidden layer
# Find the number of hidden layers:
N_hidden = np.size(num_neurons)
## Set up initial weigths and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 )
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: %g'%cost_function_deep(P, x))
## Start finding the optimal weigths using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_deep_grad = grad(cost_function_deep,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases
# in the hidden layers and output layers evaluated at x.
cost_deep_grad = cost_function_deep_grad(P, x)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_deep_grad[l]
print('Final cost: %g'%cost_function_deep(P, x))
return P
if __name__ == '__main__':
npr.seed(4155)
## Decide the vales of arguments to the function to solve
Nt = 10
T = 1
t = np.linspace(0,T, Nt)
## Set up the initial parameters
num_hidden_neurons = [100, 50, 25]
num_iter = 1000
lmb = 1e-3
P = solve_ode_deep_neural_network(t, num_hidden_neurons, num_iter, lmb)
g_dnn_ag = g_trial_deep(t,P)
g_analytical = g_analytic(t)
# Find the maximum absolute difference between the solutons:
diff_ag = np.max(np.abs(g_dnn_ag - g_analytical))
print("The max absolute difference between the solutions is: %g"%diff_ag)
plt.figure(figsize=(10,10))
plt.title('Performance of neural network solving an ODE compared to the analytical solution')
plt.plot(t, g_analytical)
plt.plot(t, g_dnn_ag[0,:])
plt.legend(['analytical','nn'])
plt.xlabel('t')
plt.ylabel('g(t)')
plt.show()
```
## Using forward Euler to solve the ODE
A straightforward way of solving an ODE numerically, is to use Euler's method.
Euler's method uses Taylor series to approximate the value at a function $f$ at a step $\Delta x$ from $x$:
$$
f(x + \Delta x) \approx f(x) + \Delta x f'(x)
$$
In our case, using Euler's method to approximate the value of $g$ at a step $\Delta t$ from $t$ yields
$$
\begin{aligned}
g(t + \Delta t) &\approx g(t) + \Delta t g'(t) \\
&= g(t) + \Delta t \big(\alpha g(t)(A - g(t))\big)
\end{aligned}
$$
along with the condition that $g(0) = g_0$.
Let $t_i = i \cdot \Delta t$ where $\Delta t = \frac{T}{N_t-1}$ where $T$ is the final time our solver must solve for and $N_t$ the number of values for $t \in [0, T]$ for $i = 0, \dots, N_t-1$.
For $i \geq 1$, we have that
$$
\begin{aligned}
t_i &= i\Delta t \\
&= (i - 1)\Delta t + \Delta t \\
&= t_{i-1} + \Delta t
\end{aligned}
$$
Now, if $g_i = g(t_i)$ then
<!-- Equation labels as ordinary links -->
<div id="odenum"></div>
$$
\begin{equation}
\begin{aligned}
g_i &= g(t_i) \\
&= g(t_{i-1} + \Delta t) \\
&\approx g(t_{i-1}) + \Delta t \big(\alpha g(t_{i-1})(A - g(t_{i-1}))\big) \\
&= g_{i-1} + \Delta t \big(\alpha g_{i-1}(A - g_{i-1})\big)
\end{aligned}
\end{equation} \label{odenum} \tag{12}
$$
for $i \geq 1$ and $g_0 = g(t_0) = g(0) = g_0$.
Equation ([12](#odenum)) could be implemented in the following way,
extending the program that uses the network using Autograd:
```python
# Assume that all function definitions from the example program using Autograd
# are located here.
if __name__ == '__main__':
npr.seed(4155)
## Decide the vales of arguments to the function to solve
Nt = 10
T = 1
t = np.linspace(0,T, Nt)
## Set up the initial parameters
num_hidden_neurons = [100,50,25]
num_iter = 1000
lmb = 1e-3
P = solve_ode_deep_neural_network(t, num_hidden_neurons, num_iter, lmb)
g_dnn_ag = g_trial_deep(t,P)
g_analytical = g_analytic(t)
# Find the maximum absolute difference between the solutons:
diff_ag = np.max(np.abs(g_dnn_ag - g_analytical))
print("The max absolute difference between the solutions is: %g"%diff_ag)
plt.figure(figsize=(10,10))
plt.title('Performance of neural network solving an ODE compared to the analytical solution')
plt.plot(t, g_analytical)
plt.plot(t, g_dnn_ag[0,:])
plt.legend(['analytical','nn'])
plt.xlabel('t')
plt.ylabel('g(t)')
## Find an approximation to the funtion using forward Euler
alpha, A, g0 = get_parameters()
dt = T/(Nt - 1)
# Perform forward Euler to solve the ODE
g_euler = np.zeros(Nt)
g_euler[0] = g0
for i in range(1,Nt):
g_euler[i] = g_euler[i-1] + dt*(alpha*g_euler[i-1]*(A - g_euler[i-1]))
# Print the errors done by each method
diff1 = np.max(np.abs(g_euler - g_analytical))
diff2 = np.max(np.abs(g_dnn_ag[0,:] - g_analytical))
print('Max absolute difference between Euler method and analytical: %g'%diff1)
print('Max absolute difference between deep neural network and analytical: %g'%diff2)
# Plot results
plt.figure(figsize=(10,10))
plt.plot(t,g_euler)
plt.plot(t,g_analytical)
plt.plot(t,g_dnn_ag[0,:])
plt.legend(['euler','analytical','dnn'])
plt.xlabel('Time t')
plt.ylabel('g(t)')
plt.show()
```
## Example: Solving the one dimensional Poisson equation
The Poisson equation for $g(x)$ in one dimension is
<!-- Equation labels as ordinary links -->
<div id="poisson"></div>
$$
\begin{equation} \label{poisson} \tag{13}
-g''(x) = f(x)
\end{equation}
$$
where $f(x)$ is a given function for $x \in (0,1)$.
The conditions that $g(x)$ is chosen to fulfill, are
$$
\begin{align*}
g(0) &= 0 \\
g(1) &= 0
\end{align*}
$$
This equation can be solved numerically using programs where e.g Autograd and TensorFlow are used.
The results from the networks can then be compared to the analytical solution.
In addition, it could be interesting to see how a typical method for numerically solving second order ODEs compares to the neural networks.
## The specific equation to solve for
Here, the function $g(x)$ to solve for follows the equation
$$
-g''(x) = f(x),\qquad x \in (0,1)
$$
where $f(x)$ is a given function, along with the chosen conditions
<!-- Equation labels as ordinary links -->
<div id="cond"></div>
$$
\begin{aligned}
g(0) = g(1) = 0
\end{aligned}\label{cond} \tag{14}
$$
In this example, we consider the case when $f(x) = (3x + x^2)\exp(x)$.
For this case, a possible trial solution satisfying the conditions could be
$$
g_t(x) = x \cdot (1-x) \cdot N(P,x)
$$
The analytical solution for this problem is
$$
g(x) = x(1 - x)\exp(x)
$$
## Solving the equation using Autograd
```python
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt
def sigmoid(z):
return 1/(1 + np.exp(-z))
def deep_neural_network(deep_params, x):
# N_hidden is the number of hidden layers
N_hidden = np.size(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
# Due to multiple hidden layers, define a variable referencing to the
# output of the previous layer:
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output
def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
# num_hidden_neurons is now a list of number of neurons within each hidden layer
# Find the number of hidden layers:
N_hidden = np.size(num_neurons)
## Set up initial weigths and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 )
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: %g'%cost_function_deep(P, x))
## Start finding the optimal weigths using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_deep_grad = grad(cost_function_deep,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases
# in the hidden layers and output layers evaluated at x.
cost_deep_grad = cost_function_deep_grad(P, x)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_deep_grad[l]
print('Final cost: %g'%cost_function_deep(P, x))
return P
## Set up the cost function specified for this Poisson equation:
# The right side of the ODE
def f(x):
return (3*x + x**2)*np.exp(x)
def cost_function_deep(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial_deep(x,P)
# Find the derivative w.r.t x of the trial function
d2_g_t = elementwise_grad(elementwise_grad(g_trial_deep,0))(x,P)
right_side = f(x)
err_sqr = (-d2_g_t - right_side)**2
cost_sum = np.sum(err_sqr)
return cost_sum/np.size(err_sqr)
# The trial solution:
def g_trial_deep(x,P):
return x*(1-x)*deep_neural_network(P,x)
# The analytic solution;
def g_analytic(x):
return x*(1-x)*np.exp(x)
if __name__ == '__main__':
npr.seed(4155)
## Decide the vales of arguments to the function to solve
Nx = 10
x = np.linspace(0,1, Nx)
## Set up the initial parameters
num_hidden_neurons = [200,100]
num_iter = 1000
lmb = 1e-3
P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)
g_dnn_ag = g_trial_deep(x,P)
g_analytical = g_analytic(x)
# Find the maximum absolute difference between the solutons:
max_diff = np.max(np.abs(g_dnn_ag - g_analytical))
print("The max absolute difference between the solutions is: %g"%max_diff)
plt.figure(figsize=(10,10))
plt.title('Performance of neural network solving an ODE compared to the analytical solution')
plt.plot(x, g_analytical)
plt.plot(x, g_dnn_ag[0,:])
plt.legend(['analytical','nn'])
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()
```
## Comparing with a numerical scheme
The Poisson equation is possible to solve using Taylor series to approximate the second derivative.
Using Taylor series, the second derivative can be expressed as
$$
g''(x) = \frac{g(x + \Delta x) - 2g(x) + g(x-\Delta x)}{\Delta x^2} + E_{\Delta x}(x)
$$
where $\Delta x$ is a small step size and $E_{\Delta x}(x)$ being the error term.
Looking away from the error terms gives an approximation to the second derivative:
<!-- Equation labels as ordinary links -->
<div id="approx"></div>
$$
\begin{equation} \label{approx} \tag{15}
g''(x) \approx \frac{g(x + \Delta x) - 2g(x) + g(x-\Delta x)}{\Delta x^2}
\end{equation}
$$
If $x_i = i \Delta x = x_{i-1} + \Delta x$ and $g_i = g(x_i)$ for $i = 1,\dots N_x - 2$ with $N_x$ being the number of values for $x$, ([15](#approx)) becomes
$$
\begin{aligned}
g''(x_i) &\approx \frac{g(x_i + \Delta x) - 2g(x_i) + g(x_i -\Delta x)}{\Delta x^2} \\
&= \frac{g_{i+1} - 2g_i + g_{i-1}}{\Delta x^2}
\end{aligned}
$$
Since we know from our problem that
$$
\begin{aligned}
-g''(x) &= f(x) \\
&= (3x + x^2)\exp(x)
\end{aligned}
$$
along with the conditions $g(0) = g(1) = 0$,
the following scheme can be used to find an approximate solution for $g(x)$ numerically:
<!-- Equation labels as ordinary links -->
<div id="odesys"></div>
$$
\begin{equation}
\begin{aligned}
-\Big( \frac{g_{i+1} - 2g_i + g_{i-1}}{\Delta x^2} \Big) &= f(x_i) \\
-g_{i+1} + 2g_i - g_{i-1} &= \Delta x^2 f(x_i)
\end{aligned}
\end{equation} \label{odesys} \tag{16}
$$
for $i = 1, \dots, N_x - 2$ where $g_0 = g_{N_x - 1} = 0$ and $f(x_i) = (3x_i + x_i^2)\exp(x_i)$, which is given for our specific problem.
The equation can be rewritten into a matrix equation:
$$
\begin{aligned}
\begin{pmatrix}
2 & -1 & 0 & \dots & 0 \\
-1 & 2 & -1 & \dots & 0 \\
\vdots & & \ddots & & \vdots \\
0 & \dots & -1 & 2 & -1 \\
0 & \dots & 0 & -1 & 2\\
\end{pmatrix}
\begin{pmatrix}
g_1 \\
g_2 \\
\vdots \\
g_{N_x - 3} \\
g_{N_x - 2}
\end{pmatrix}
&=
\Delta x^2
\begin{pmatrix}
f(x_1) \\
f(x_2) \\
\vdots \\
f(x_{N_x - 3}) \\
f(x_{N_x - 2})
\end{pmatrix} \\
\boldsymbol{A}\boldsymbol{g} &= \boldsymbol{f},
\end{aligned}
$$
which makes it possible to solve for the vector $\boldsymbol{g}$.
## Setting up the code
We can then compare the result from this numerical scheme with the output from our network using Autograd:
```python
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt
def sigmoid(z):
return 1/(1 + np.exp(-z))
def deep_neural_network(deep_params, x):
# N_hidden is the number of hidden layers
N_hidden = np.size(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assumes input x being an one-dimensional array
num_values = np.size(x)
x = x.reshape(-1, num_values)
# Assume that the input layer does nothing to the input x
x_input = x
# Due to multiple hidden layers, define a variable referencing to the
# output of the previous layer:
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output
def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
# num_hidden_neurons is now a list of number of neurons within each hidden layer
# Find the number of hidden layers:
N_hidden = np.size(num_neurons)
## Set up initial weigths and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 )
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: %g'%cost_function_deep(P, x))
## Start finding the optimal weigths using gradient descent
# Find the Python function that represents the gradient of the cost function
# w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
cost_function_deep_grad = grad(cost_function_deep,0)
# Let the update be done num_iter times
for i in range(num_iter):
# Evaluate the gradient at the current weights and biases in P.
# The cost_grad consist now of N_hidden + 1 arrays; the gradient w.r.t the weights and biases
# in the hidden layers and output layers evaluated at x.
cost_deep_grad = cost_function_deep_grad(P, x)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_deep_grad[l]
print('Final cost: %g'%cost_function_deep(P, x))
return P
## Set up the cost function specified for this Poisson equation:
# The right side of the ODE
def f(x):
return (3*x + x**2)*np.exp(x)
def cost_function_deep(P, x):
# Evaluate the trial function with the current parameters P
g_t = g_trial_deep(x,P)
# Find the derivative w.r.t x of the trial function
d2_g_t = elementwise_grad(elementwise_grad(g_trial_deep,0))(x,P)
right_side = f(x)
err_sqr = (-d2_g_t - right_side)**2
cost_sum = np.sum(err_sqr)
return cost_sum/np.size(err_sqr)
# The trial solution:
def g_trial_deep(x,P):
return x*(1-x)*deep_neural_network(P,x)
# The analytic solution;
def g_analytic(x):
return x*(1-x)*np.exp(x)
if __name__ == '__main__':
npr.seed(4155)
## Decide the vales of arguments to the function to solve
Nx = 10
x = np.linspace(0,1, Nx)
## Set up the initial parameters
num_hidden_neurons = [200,100]
num_iter = 1000
lmb = 1e-3
P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)
g_dnn_ag = g_trial_deep(x,P)
g_analytical = g_analytic(x)
# Find the maximum absolute difference between the solutons:
plt.figure(figsize=(10,10))
plt.title('Performance of neural network solving an ODE compared to the analytical solution')
plt.plot(x, g_analytical)
plt.plot(x, g_dnn_ag[0,:])
plt.legend(['analytical','nn'])
plt.xlabel('x')
plt.ylabel('g(x)')
## Perform the computation using the numerical scheme
dx = 1/(Nx - 1)
# Set up the matrix A
A = np.zeros((Nx-2,Nx-2))
A[0,0] = 2
A[0,1] = -1
for i in range(1,Nx-3):
A[i,i-1] = -1
A[i,i] = 2
A[i,i+1] = -1
A[Nx - 3, Nx - 4] = -1
A[Nx - 3, Nx - 3] = 2
# Set up the vector f
f_vec = dx**2 * f(x[1:-1])
# Solve the equation
g_res = np.linalg.solve(A,f_vec)
g_vec = np.zeros(Nx)
g_vec[1:-1] = g_res
# Print the differences between each method
max_diff1 = np.max(np.abs(g_dnn_ag - g_analytical))
max_diff2 = np.max(np.abs(g_vec - g_analytical))
print("The max absolute difference between the analytical solution and DNN Autograd: %g"%max_diff1)
print("The max absolute difference between the analytical solution and numerical scheme: %g"%max_diff2)
# Plot the results
plt.figure(figsize=(10,10))
plt.plot(x,g_vec)
plt.plot(x,g_analytical)
plt.plot(x,g_dnn_ag[0,:])
plt.legend(['numerical scheme','analytical','dnn'])
plt.show()
```
## Partial Differential Equations
A partial differential equation (PDE) has a solution here the function
is defined by multiple variables. The equation may involve all kinds
of combinations of which variables the function is differentiated with
respect to.
In general, a partial differential equation for a function $g(x_1,\dots,x_N)$ with $N$ variables may be expressed as
<!-- Equation labels as ordinary links -->
<div id="PDE"></div>
$$
\begin{equation} \label{PDE} \tag{17}
f\left(x_1, \, \dots \, , x_N, \frac{\partial g(x_1,\dots,x_N) }{\partial x_1}, \dots , \frac{\partial g(x_1,\dots,x_N) }{\partial x_N}, \frac{\partial g(x_1,\dots,x_N) }{\partial x_1\partial x_2}, \, \dots \, , \frac{\partial^n g(x_1,\dots,x_N) }{\partial x_N^n} \right) = 0
\end{equation}
$$
where $f$ is an expression involving all kinds of possible mixed derivatives of $g(x_1,\dots,x_N)$ up to an order $n$. In order for the solution to be unique, some additional conditions must also be given.
## Type of problem
The problem our network must solve for, is similar to the ODE case.
We must have a trial solution $g_t$ at hand.
For instance, the trial solution could be expressed as
$$
\begin{align*}
g_t(x_1,\dots,x_N) = h_1(x_1,\dots,x_N) + h_2(x_1,\dots,x_N,N(x_1,\dots,x_N,P))
\end{align*}
$$
where $h_1(x_1,\dots,x_N)$ is a function that ensures $g_t(x_1,\dots,x_N)$ satisfies some given conditions.
The neural network $N(x_1,\dots,x_N,P)$ has weights and biases described by $P$ and $h_2(x_1,\dots,x_N,N(x_1,\dots,x_N,P))$ is an expression using the output from the neural network in some way.
The role of the function $h_2(x_1,\dots,x_N,N(x_1,\dots,x_N,P))$, is to ensure that the output of $N(x_1,\dots,x_N,P)$ is zero when $g_t(x_1,\dots,x_N)$ is evaluated at the values of $x_1,\dots,x_N$ where the given conditions must be satisfied. The function $h_1(x_1,\dots,x_N)$ should alone make $g_t(x_1,\dots,x_N)$ satisfy the conditions.
## Network requirements
The network tries then the minimize the cost function following the
same ideas as described for the ODE case, but now with more than one
variables to consider. The concept still remains the same; find a set
of parameters $P$ such that the expression $f$ in ([17](#PDE)) is as
close to zero as possible.
As for the ODE case, the cost function is the mean squared error that
the network must try to minimize. The cost function for the network to
minimize is
$$
C\left(x_1, \dots, x_N, P\right) = \left( f\left(x_1, \, \dots \, , x_N, \frac{\partial g(x_1,\dots,x_N) }{\partial x_1}, \dots , \frac{\partial g(x_1,\dots,x_N) }{\partial x_N}, \frac{\partial g(x_1,\dots,x_N) }{\partial x_1\partial x_2}, \, \dots \, , \frac{\partial^n g(x_1,\dots,x_N) }{\partial x_N^n} \right) \right)^2
$$
## More details
If we let $\boldsymbol{x} = \big( x_1, \dots, x_N \big)$ be an array containing the values for $x_1, \dots, x_N$ respectively, the cost function can be reformulated into the following:
$$
C\left(\boldsymbol{x}, P\right) = f\left( \left( \boldsymbol{x}, \frac{\partial g(\boldsymbol{x}) }{\partial x_1}, \dots , \frac{\partial g(\boldsymbol{x}) }{\partial x_N}, \frac{\partial g(\boldsymbol{x}) }{\partial x_1\partial x_2}, \, \dots \, , \frac{\partial^n g(\boldsymbol{x}) }{\partial x_N^n} \right) \right)^2
$$
If we also have $M$ different sets of values for $x_1, \dots, x_N$, that is $\boldsymbol{x}_i = \big(x_1^{(i)}, \dots, x_N^{(i)}\big)$ for $i = 1,\dots,M$ being the rows in matrix $X$, the cost function can be generalized into
$$
C\left(X, P \right) = \sum_{i=1}^M f\left( \left( \boldsymbol{x}_i, \frac{\partial g(\boldsymbol{x}_i) }{\partial x_1}, \dots , \frac{\partial g(\boldsymbol{x}_i) }{\partial x_N}, \frac{\partial g(\boldsymbol{x}_i) }{\partial x_1\partial x_2}, \, \dots \, , \frac{\partial^n g(\boldsymbol{x}_i) }{\partial x_N^n} \right) \right)^2.
$$
## Example: The diffusion equation
In one spatial dimension, the equation reads
$$
\frac{\partial g(x,t)}{\partial t} = \frac{\partial^2 g(x,t)}{\partial x^2}
$$
where a possible choice of conditions are
$$
\begin{align*}
g(0,t) &= 0 ,\qquad t \geq 0 \\
g(1,t) &= 0, \qquad t \geq 0 \\
g(x,0) &= u(x),\qquad x\in [0,1]
\end{align*}
$$
with $u(x)$ being some given function.
## Defining the problem
For this case, we want to find $g(x,t)$ such that
<!-- Equation labels as ordinary links -->
<div id="diffonedim"></div>
$$
\begin{equation}
\frac{\partial g(x,t)}{\partial t} = \frac{\partial^2 g(x,t)}{\partial x^2}
\end{equation} \label{diffonedim} \tag{18}
$$
and
$$
\begin{align*}
g(0,t) &= 0 ,\qquad t \geq 0 \\
g(1,t) &= 0, \qquad t \geq 0 \\
g(x,0) &= u(x),\qquad x\in [0,1]
\end{align*}
$$
with $u(x) = \sin(\pi x)$.
First, let us set up the deep neural network.
The deep neural network will follow the same structure as discussed in the examples solving the ODEs.
First, we will look into how Autograd could be used in a network tailored to solve for bivariate functions.
## Setting up the network using Autograd
The only change to do here, is to extend our network such that
functions of multiple parameters are correctly handled. In this case
we have two variables in our function to solve for, that is time $t$
and position $x$. The variables will be represented by a
one-dimensional array in the program. The program will evaluate the
network at each possible pair $(x,t)$, given an array for the desired
$x$-values and $t$-values to approximate the solution at.
```python
def sigmoid(z):
return 1/(1 + np.exp(-z))
def deep_neural_network(deep_params, x):
# x is now a point and a 1D numpy array; make it a column vector
num_coordinates = np.size(x,0)
x = x.reshape(num_coordinates,-1)
num_points = np.size(x,1)
# N_hidden is the number of hidden layers
N_hidden = np.size(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assume that the input layer does nothing to the input x
x_input = x
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_points)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_points)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output[0][0]
```
## Setting up the network using Autograd; The trial solution
The cost function must then iterate through the given arrays
containing values for $x$ and $t$, defines a point $(x,t)$ the deep
neural network and the trial solution is evaluated at, and then finds
the Jacobian of the trial solution.
A possible trial solution for this PDE is
$$
g_t(x,t) = h_1(x,t) + x(1-x)tN(x,t,P)
$$
with $A(x,t)$ being a function ensuring that $g_t(x,t)$ satisfies our given conditions, and $N(x,t,P)$ being the output from the deep neural network using weights and biases for each layer from $P$.
To fulfill the conditions, $A(x,t)$ could be:
$$
h_1(x,t) = (1-t)\Big(u(x) - \big((1-x)u(0) + x u(1)\big)\Big) = (1-t)u(x) = (1-t)\sin(\pi x)
$$
since $(0) = u(1) = 0$ and $u(x) = \sin(\pi x)$.
## Why the jacobian?
The Jacobian is used because the program must find the derivative of
the trial solution with respect to $x$ and $t$.
This gives the necessity of computing the Jacobian matrix, as we want
to evaluate the gradient with respect to $x$ and $t$ (note that the
Jacobian of a scalar-valued multivariate function is simply its
gradient).
In Autograd, the differentiation is by default done with respect to
the first input argument of your Python function. Since the points is
an array representing $x$ and $t$, the Jacobian is calculated using
the values of $x$ and $t$.
To find the second derivative with respect to $x$ and $t$, the
Jacobian can be found for the second time. The result is a Hessian
matrix, which is the matrix containing all the possible second order
mixed derivatives of $g(x,t)$.
```python
# Set up the trial function:
def u(x):
return np.sin(np.pi*x)
def g_trial(point,P):
x,t = point
return (1-t)*u(x) + x*(1-x)*t*deep_neural_network(P,point)
# The right side of the ODE:
def f(point):
return 0.
# The cost function:
def cost_function(P, x, t):
cost_sum = 0
g_t_jacobian_func = jacobian(g_trial)
g_t_hessian_func = hessian(g_trial)
for x_ in x:
for t_ in t:
point = np.array([x_,t_])
g_t = g_trial(point,P)
g_t_jacobian = g_t_jacobian_func(point,P)
g_t_hessian = g_t_hessian_func(point,P)
g_t_dt = g_t_jacobian[1]
g_t_d2x = g_t_hessian[0][0]
func = f(point)
err_sqr = ( (g_t_dt - g_t_d2x) - func)**2
cost_sum += err_sqr
return cost_sum
```
## Setting up the network using Autograd; The full program
Having set up the network, along with the trial solution and cost function, we can now see how the deep neural network performs by comparing the results to the analytical solution.
The analytical solution of our problem is
$$
g(x,t) = \exp(-\pi^2 t)\sin(\pi x)
$$
A possible way to implement a neural network solving the PDE, is given below.
Be aware, though, that it is fairly slow for the parameters used.
A better result is possible, but requires more iterations, and thus longer time to complete.
Indeed, the program below is not optimal in its implementation, but rather serves as an example on how to implement and use a neural network to solve a PDE.
Using TensorFlow results in a much better execution time. Try it!
```python
import autograd.numpy as np
from autograd import jacobian,hessian,grad
import autograd.numpy.random as npr
from matplotlib import cm
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
## Set up the network
def sigmoid(z):
return 1/(1 + np.exp(-z))
def deep_neural_network(deep_params, x):
# x is now a point and a 1D numpy array; make it a column vector
num_coordinates = np.size(x,0)
x = x.reshape(num_coordinates,-1)
num_points = np.size(x,1)
# N_hidden is the number of hidden layers
N_hidden = np.size(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assume that the input layer does nothing to the input x
x_input = x
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_points)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_points)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output[0][0]
## Define the trial solution and cost function
def u(x):
return np.sin(np.pi*x)
def g_trial(point,P):
x,t = point
return (1-t)*u(x) + x*(1-x)*t*deep_neural_network(P,point)
# The right side of the ODE:
def f(point):
return 0.
# The cost function:
def cost_function(P, x, t):
cost_sum = 0
g_t_jacobian_func = jacobian(g_trial)
g_t_hessian_func = hessian(g_trial)
for x_ in x:
for t_ in t:
point = np.array([x_,t_])
g_t = g_trial(point,P)
g_t_jacobian = g_t_jacobian_func(point,P)
g_t_hessian = g_t_hessian_func(point,P)
g_t_dt = g_t_jacobian[1]
g_t_d2x = g_t_hessian[0][0]
func = f(point)
err_sqr = ( (g_t_dt - g_t_d2x) - func)**2
cost_sum += err_sqr
return cost_sum /( np.size(x)*np.size(t) )
## For comparison, define the analytical solution
def g_analytic(point):
x,t = point
return np.exp(-np.pi**2*t)*np.sin(np.pi*x)
## Set up a function for training the network to solve for the equation
def solve_pde_deep_neural_network(x,t, num_neurons, num_iter, lmb):
## Set up initial weigths and biases
N_hidden = np.size(num_neurons)
## Set up initial weigths and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 + 1 ) # 2 since we have two points, +1 to include bias
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: ',cost_function(P, x, t))
cost_function_grad = grad(cost_function,0)
# Let the update be done num_iter times
for i in range(num_iter):
cost_grad = cost_function_grad(P, x , t)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_grad[l]
print('Final cost: ',cost_function(P, x, t))
return P
if __name__ == '__main__':
### Use the neural network:
npr.seed(15)
## Decide the vales of arguments to the function to solve
Nx = 10; Nt = 10
x = np.linspace(0, 1, Nx)
t = np.linspace(0,1,Nt)
## Set up the parameters for the network
num_hidden_neurons = [100, 25]
num_iter = 250
lmb = 0.01
P = solve_pde_deep_neural_network(x,t, num_hidden_neurons, num_iter, lmb)
## Store the results
g_dnn_ag = np.zeros((Nx, Nt))
G_analytical = np.zeros((Nx, Nt))
for i,x_ in enumerate(x):
for j, t_ in enumerate(t):
point = np.array([x_, t_])
g_dnn_ag[i,j] = g_trial(point,P)
G_analytical[i,j] = g_analytic(point)
# Find the map difference between the analytical and the computed solution
diff_ag = np.abs(g_dnn_ag - G_analytical)
print('Max absolute difference between the analytical solution and the network: %g'%np.max(diff_ag))
## Plot the solutions in two dimensions, that being in position and time
T,X = np.meshgrid(t,x)
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
ax.set_title('Solution from the deep neural network w/ %d layer'%len(num_hidden_neurons))
s = ax.plot_surface(T,X,g_dnn_ag,linewidth=0,antialiased=False,cmap=cm.viridis)
ax.set_xlabel('Time $t$')
ax.set_ylabel('Position $x$');
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
ax.set_title('Analytical solution')
s = ax.plot_surface(T,X,G_analytical,linewidth=0,antialiased=False,cmap=cm.viridis)
ax.set_xlabel('Time $t$')
ax.set_ylabel('Position $x$');
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
ax.set_title('Difference')
s = ax.plot_surface(T,X,diff_ag,linewidth=0,antialiased=False,cmap=cm.viridis)
ax.set_xlabel('Time $t$')
ax.set_ylabel('Position $x$');
## Take some slices of the 3D plots just to see the solutions at particular times
indx1 = 0
indx2 = int(Nt/2)
indx3 = Nt-1
t1 = t[indx1]
t2 = t[indx2]
t3 = t[indx3]
# Slice the results from the DNN
res1 = g_dnn_ag[:,indx1]
res2 = g_dnn_ag[:,indx2]
res3 = g_dnn_ag[:,indx3]
# Slice the analytical results
res_analytical1 = G_analytical[:,indx1]
res_analytical2 = G_analytical[:,indx2]
res_analytical3 = G_analytical[:,indx3]
# Plot the slices
plt.figure(figsize=(10,10))
plt.title("Computed solutions at time = %g"%t1)
plt.plot(x, res1)
plt.plot(x,res_analytical1)
plt.legend(['dnn','analytical'])
plt.figure(figsize=(10,10))
plt.title("Computed solutions at time = %g"%t2)
plt.plot(x, res2)
plt.plot(x,res_analytical2)
plt.legend(['dnn','analytical'])
plt.figure(figsize=(10,10))
plt.title("Computed solutions at time = %g"%t3)
plt.plot(x, res3)
plt.plot(x,res_analytical3)
plt.legend(['dnn','analytical'])
plt.show()
```
## Example: Solving the wave equation with Neural Networks
The wave equation is
$$
\frac{\partial^2 g(x,t)}{\partial t^2} = c^2\frac{\partial^2 g(x,t)}{\partial x^2}
$$
with $c$ being the specified wave speed.
Here, the chosen conditions are
$$
\begin{align*}
g(0,t) &= 0 \\
g(1,t) &= 0 \\
g(x,0) &= u(x) \\
\frac{\partial g(x,t)}{\partial t} \Big |_{t = 0} &= v(x)
\end{align*}
$$
where $\frac{\partial g(x,t)}{\partial t} \Big |_{t = 0}$ means the derivative of $g(x,t)$ with respect to $t$ is evaluated at $t = 0$, and $u(x)$ and $v(x)$ being given functions.
## The problem to solve for
The wave equation to solve for, is
<!-- Equation labels as ordinary links -->
<div id="wave"></div>
$$
\begin{equation} \label{wave} \tag{19}
\frac{\partial^2 g(x,t)}{\partial t^2} = c^2 \frac{\partial^2 g(x,t)}{\partial x^2}
\end{equation}
$$
where $c$ is the given wave speed.
The chosen conditions for this equation are
<!-- Equation labels as ordinary links -->
<div id="condwave"></div>
$$
\begin{aligned}
g(0,t) &= 0, &t \geq 0 \\
g(1,t) &= 0, &t \geq 0 \\
g(x,0) &= u(x), &x\in[0,1] \\
\frac{\partial g(x,t)}{\partial t}\Big |_{t = 0} &= v(x), &x \in [0,1]
\end{aligned} \label{condwave} \tag{20}
$$
In this example, let $c = 1$ and $u(x) = \sin(\pi x)$ and $v(x) = -\pi\sin(\pi x)$.
## The trial solution
Setting up the network is done in similar matter as for the example of solving the diffusion equation.
The only things we have to change, is the trial solution such that it satisfies the conditions from ([20](#condwave)) and the cost function.
The trial solution becomes slightly different since we have other conditions than in the example of solving the diffusion equation. Here, a possible trial solution $g_t(x,t)$ is
$$
g_t(x,t) = h_1(x,t) + x(1-x)t^2N(x,t,P)
$$
where
$$
h_1(x,t) = (1-t^2)u(x) + tv(x)
$$
Note that this trial solution satisfies the conditions only if $u(0) = v(0) = u(1) = v(1) = 0$, which is the case in this example.
## The analytical solution
The analytical solution for our specific problem, is
$$
g(x,t) = \sin(\pi x)\cos(\pi t) - \sin(\pi x)\sin(\pi t)
$$
## Solving the wave equation - the full program using Autograd
```python
import autograd.numpy as np
from autograd import hessian,grad
import autograd.numpy.random as npr
from matplotlib import cm
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
## Set up the trial function:
def u(x):
return np.sin(np.pi*x)
def v(x):
return -np.pi*np.sin(np.pi*x)
def h1(point):
x,t = point
return (1 - t**2)*u(x) + t*v(x)
def g_trial(point,P):
x,t = point
return h1(point) + x*(1-x)*t**2*deep_neural_network(P,point)
## Define the cost function
def cost_function(P, x, t):
cost_sum = 0
g_t_hessian_func = hessian(g_trial)
for x_ in x:
for t_ in t:
point = np.array([x_,t_])
g_t_hessian = g_t_hessian_func(point,P)
g_t_d2x = g_t_hessian[0][0]
g_t_d2t = g_t_hessian[1][1]
err_sqr = ( (g_t_d2t - g_t_d2x) )**2
cost_sum += err_sqr
return cost_sum / (np.size(t) * np.size(x))
## The neural network
def sigmoid(z):
return 1/(1 + np.exp(-z))
def deep_neural_network(deep_params, x):
# x is now a point and a 1D numpy array; make it a column vector
num_coordinates = np.size(x,0)
x = x.reshape(num_coordinates,-1)
num_points = np.size(x,1)
# N_hidden is the number of hidden layers
N_hidden = np.size(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer
# Assume that the input layer does nothing to the input x
x_input = x
x_prev = x_input
## Hidden layers:
for l in range(N_hidden):
# From the list of parameters P; find the correct weigths and bias for this layer
w_hidden = deep_params[l]
# Add a row of ones to include bias
x_prev = np.concatenate((np.ones((1,num_points)), x_prev ), axis = 0)
z_hidden = np.matmul(w_hidden, x_prev)
x_hidden = sigmoid(z_hidden)
# Update x_prev such that next layer can use the output from this layer
x_prev = x_hidden
## Output layer:
# Get the weights and bias for this layer
w_output = deep_params[-1]
# Include bias:
x_prev = np.concatenate((np.ones((1,num_points)), x_prev), axis = 0)
z_output = np.matmul(w_output, x_prev)
x_output = z_output
return x_output[0][0]
## The analytical solution
def g_analytic(point):
x,t = point
return np.sin(np.pi*x)*np.cos(np.pi*t) - np.sin(np.pi*x)*np.sin(np.pi*t)
def solve_pde_deep_neural_network(x,t, num_neurons, num_iter, lmb):
## Set up initial weigths and biases
N_hidden = np.size(num_neurons)
## Set up initial weigths and biases
# Initialize the list of parameters:
P = [None]*(N_hidden + 1) # + 1 to include the output layer
P[0] = npr.randn(num_neurons[0], 2 + 1 ) # 2 since we have two points, +1 to include bias
for l in range(1,N_hidden):
P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias
# For the output layer
P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included
print('Initial cost: ',cost_function(P, x, t))
cost_function_grad = grad(cost_function,0)
# Let the update be done num_iter times
for i in range(num_iter):
cost_grad = cost_function_grad(P, x , t)
for l in range(N_hidden+1):
P[l] = P[l] - lmb * cost_grad[l]
print('Final cost: ',cost_function(P, x, t))
return P
if __name__ == '__main__':
### Use the neural network:
npr.seed(15)
## Decide the vales of arguments to the function to solve
Nx = 10; Nt = 10
x = np.linspace(0, 1, Nx)
t = np.linspace(0,1,Nt)
## Set up the parameters for the network
num_hidden_neurons = [50,20]
num_iter = 1000
lmb = 0.01
P = solve_pde_deep_neural_network(x,t, num_hidden_neurons, num_iter, lmb)
## Store the results
res = np.zeros((Nx, Nt))
res_analytical = np.zeros((Nx, Nt))
for i,x_ in enumerate(x):
for j, t_ in enumerate(t):
point = np.array([x_, t_])
res[i,j] = g_trial(point,P)
res_analytical[i,j] = g_analytic(point)
diff = np.abs(res - res_analytical)
print("Max difference between analytical and solution from nn: %g"%np.max(diff))
## Plot the solutions in two dimensions, that being in position and time
T,X = np.meshgrid(t,x)
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
ax.set_title('Solution from the deep neural network w/ %d layer'%len(num_hidden_neurons))
s = ax.plot_surface(T,X,res,linewidth=0,antialiased=False,cmap=cm.viridis)
ax.set_xlabel('Time $t$')
ax.set_ylabel('Position $x$');
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
ax.set_title('Analytical solution')
s = ax.plot_surface(T,X,res_analytical,linewidth=0,antialiased=False,cmap=cm.viridis)
ax.set_xlabel('Time $t$')
ax.set_ylabel('Position $x$');
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection='3d')
ax.set_title('Difference')
s = ax.plot_surface(T,X,diff,linewidth=0,antialiased=False,cmap=cm.viridis)
ax.set_xlabel('Time $t$')
ax.set_ylabel('Position $x$');
## Take some slices of the 3D plots just to see the solutions at particular times
indx1 = 0
indx2 = int(Nt/2)
indx3 = Nt-1
t1 = t[indx1]
t2 = t[indx2]
t3 = t[indx3]
# Slice the results from the DNN
res1 = res[:,indx1]
res2 = res[:,indx2]
res3 = res[:,indx3]
# Slice the analytical results
res_analytical1 = res_analytical[:,indx1]
res_analytical2 = res_analytical[:,indx2]
res_analytical3 = res_analytical[:,indx3]
# Plot the slices
plt.figure(figsize=(10,10))
plt.title("Computed solutions at time = %g"%t1)
plt.plot(x, res1)
plt.plot(x,res_analytical1)
plt.legend(['dnn','analytical'])
plt.figure(figsize=(10,10))
plt.title("Computed solutions at time = %g"%t2)
plt.plot(x, res2)
plt.plot(x,res_analytical2)
plt.legend(['dnn','analytical'])
plt.figure(figsize=(10,10))
plt.title("Computed solutions at time = %g"%t3)
plt.plot(x, res3)
plt.plot(x,res_analytical3)
plt.legend(['dnn','analytical'])
plt.show()
```
## Resources on differential equations and deep learning
1. [Artificial neural networks for solving ordinary and partial differential equations by I.E. Lagaris et al](https://pdfs.semanticscholar.org/d061/df393e0e8fbfd0ea24976458b7d42419040d.pdf)
2. [Neural networks for solving differential equations by A. Honchar](https://becominghuman.ai/neural-networks-for-solving-differential-equations-fa230ac5e04c)
3. [Solving differential equations using neural networks by M.M Chiaramonte and M. Kiener](http://cs229.stanford.edu/proj2013/ChiaramonteKiener-SolvingDifferentialEquationsUsingNeuralNetworks.pdf)
4. [Introduction to Partial Differential Equations by A. Tveito, R. Winther](https://www.springer.com/us/book/9783540225515)
## Convolutional Neural Networks (recognizing images)
Convolutional neural networks (CNNs) were developed during the last
decade of the previous century, with a focus on character recognition
tasks. Nowadays, CNNs are a central element in the spectacular success
of deep learning methods. The success in for example image
classifications have made them a central tool for most machine
learning practitioners.
CNNs are very similar to ordinary Neural Networks.
They are made up of neurons that have learnable weights and
biases. Each neuron receives some inputs, performs a dot product and
optionally follows it with a non-linearity. The whole network still
expresses a single differentiable score function: from the raw image
pixels on one end to class scores at the other. And they still have a
loss function (for example Softmax) on the last (fully-connected) layer
and all the tips/tricks we developed for learning regular Neural
Networks still apply (back propagation, gradient descent etc etc).
## What is the Difference
**CNN architectures make the explicit assumption that
the inputs are images, which allows us to encode certain properties
into the architecture. These then make the forward function more
efficient to implement and vastly reduce the amount of parameters in
the network.**
Here we provide only a superficial overview, for the more interested, we recommend highly the course
[IN5400 – Machine Learning for Image Analysis](https://www.uio.no/studier/emner/matnat/ifi/IN5400/index-eng.html)
and the slides of [CS231](http://cs231n.github.io/convolutional-networks/).
Another good read is the article here <https://arxiv.org/pdf/1603.07285.pdf>.
## Neural Networks vs CNNs
Neural networks are defined as **affine transformations**, that is
a vector is received as input and is multiplied with a matrix of so-called weights (our unknown paramters) to produce an
output (to which a bias vector is usually added before passing the result
through a nonlinear activation function). This is applicable to any type of input, be it an
image, a sound clip or an unordered collection of features: whatever their
dimensionality, their representation can always be flattened into a vector
before the transformation.
## Why CNNS for images, sound files, medical images from CT scans etc?
However, when we consider images, sound clips and many other similar kinds of data, these data have an intrinsic
structure. More formally, they share these important properties:
* They are stored as multi-dimensional arrays (think of the pixels of a figure) .
* They feature one or more axes for which ordering matters (e.g., width and height axes for an image, time axis for a sound clip).
* One axis, called the channel axis, is used to access different views of the data (e.g., the red, green and blue channels of a color image, or the left and right channels of a stereo audio track).
These properties are not exploited when an affine transformation is applied; in
fact, all the axes are treated in the same way and the topological information
is not taken into account. Still, taking advantage of the implicit structure of
the data may prove very handy in solving some tasks, like computer vision and
speech recognition, and in these cases it would be best to preserve it. This is
where discrete convolutions come into play.
A discrete convolution is a linear transformation that preserves this notion of
ordering. It is sparse (only a few input units contribute to a given output
unit) and reuses parameters (the same weights are applied to multiple locations
in the input).
## Regular NNs don’t scale well to full images
As an example, consider
an image of size $32\times 32\times 3$ (32 wide, 32 high, 3 color channels), so a
single fully-connected neuron in a first hidden layer of a regular
Neural Network would have $32\times 32\times 3 = 3072$ weights. This amount still
seems manageable, but clearly this fully-connected structure does not
scale to larger images. For example, an image of more respectable
size, say $200\times 200\times 3$, would lead to neurons that have
$200\times 200\times 3 = 120,000$ weights.
We could have
several such neurons, and the parameters would add up quickly! Clearly,
this full connectivity is wasteful and the huge number of parameters
would quickly lead to possible overfitting.
<!-- dom:FIGURE: [figslides/nn.jpeg, width=500 frac=0.6] A regular 3-layer Neural Network. -->
<!-- begin figure -->
<p style="font-size: 0.9em"><i>Figure 1: A regular 3-layer Neural Network.</i></p><!-- end figure -->
## 3D volumes of neurons
Convolutional Neural Networks take advantage of the fact that the
input consists of images and they constrain the architecture in a more
sensible way.
In particular, unlike a regular Neural Network, the
layers of a CNN have neurons arranged in 3 dimensions: width,
height, depth. (Note that the word depth here refers to the third
dimension of an activation volume, not to the depth of a full Neural
Network, which can refer to the total number of layers in a network.)
To understand it better, the above example of an image
with an input volume of
activations has dimensions $32\times 32\times 3$ (width, height,
depth respectively).
The neurons in a layer will
only be connected to a small region of the layer before it, instead of
all of the neurons in a fully-connected manner. Moreover, the final
output layer could for this specific image have dimensions $1\times 1 \times 10$,
because by the
end of the CNN architecture we will reduce the full image into a
single vector of class scores, arranged along the depth
dimension.
<!-- dom:FIGURE: [figslides/cnn.jpeg, width=500 frac=0.6] A CNN arranges its neurons in three dimensions (width, height, depth), as visualized in one of the layers. Every layer of a CNN transforms the 3D input volume to a 3D output volume of neuron activations. In this example, the red input layer holds the image, so its width and height would be the dimensions of the image, and the depth would be 3 (Red, Green, Blue channels). -->
<!-- begin figure -->
<p style="font-size: 0.9em"><i>Figure 1: A CNN arranges its neurons in three dimensions (width, height, depth), as visualized in one of the layers. Every layer of a CNN transforms the 3D input volume to a 3D output volume of neuron activations. In this example, the red input layer holds the image, so its width and height would be the dimensions of the image, and the depth would be 3 (Red, Green, Blue channels).</i></p><!-- end figure -->
<!-- !split -->
## Layers used to build CNNs
A simple CNN is a sequence of layers, and every layer of a CNN
transforms one volume of activations to another through a
differentiable function. We use three main types of layers to build
CNN architectures: Convolutional Layer, Pooling Layer, and
Fully-Connected Layer (exactly as seen in regular Neural Networks). We
will stack these layers to form a full CNN architecture.
A simple CNN for image classification could have the architecture:
* **INPUT** ($32\times 32 \times 3$) will hold the raw pixel values of the image, in this case an image of width 32, height 32, and with three color channels R,G,B.
* **CONV** (convolutional )layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in volume such as $[32\times 32\times 12]$ if we decided to use 12 filters.
* **RELU** layer will apply an elementwise activation function, such as the $max(0,x)$ thresholding at zero. This leaves the size of the volume unchanged ($[32\times 32\times 12]$).
* **POOL** (pooling) layer will perform a downsampling operation along the spatial dimensions (width, height), resulting in volume such as $[16\times 16\times 12]$.
* **FC** (i.e. fully-connected) layer will compute the class scores, resulting in volume of size $[1\times 1\times 10]$, where each of the 10 numbers correspond to a class score, such as among the 10 categories of the MNIST images we considered above . As with ordinary Neural Networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume.
## Transforming images
CNNs transform the original image layer by layer from the original
pixel values to the final class scores.
Observe that some layers contain
parameters and other don’t. In particular, the CNN layers perform
transformations that are a function of not only the activations in the
input volume, but also of the parameters (the weights and biases of
the neurons). On the other hand, the RELU/POOL layers will implement a
fixed function. The parameters in the CONV/FC layers will be trained
with gradient descent so that the class scores that the CNN computes
are consistent with the labels in the training set for each image.
## CNNs in brief
In summary:
* A CNN architecture is in the simplest case a list of Layers that transform the image volume into an output volume (e.g. holding the class scores)
* There are a few distinct types of Layers (e.g. CONV/FC/RELU/POOL are by far the most popular)
* Each Layer accepts an input 3D volume and transforms it to an output 3D volume through a differentiable function
* Each Layer may or may not have parameters (e.g. CONV/FC do, RELU/POOL don’t)
* Each Layer may or may not have additional hyperparameters (e.g. CONV/FC/POOL do, RELU doesn’t)
For more material on convolutional networks, we strongly recommend
the course
[IN5400 – Machine Learning for Image Analysis](https://www.uio.no/studier/emner/matnat/ifi/IN5400/index-eng.html)
and the slides of [CS231](http://cs231n.github.io/convolutional-networks/) which is taught at Stanford University (consistently ranked as one of the top computer science programs in the world). [Michael Nielsen's book is a must read, in particular chapter 6 which deals with CNNs](http://neuralnetworksanddeeplearning.com/chap6.html).
The textbook by Goodfellow et al, see chapter 9 contains an in depth discussion as well.
## Key Idea
A dense neural network is representd by an affine operation (like matrix-matrix multiplication) where all parameters are included.
The key idea in CNNs for say imaging is that in images neighbor pixels tend to be related! So we connect
only neighboring neurons in the input instead of connecting all with the first hidden layer.
We say we perform a filtering (convolution is the mathematical operation).
## Mathematics of CNNs
The mathematics of CNNs is based on the mathematical operation of
**convolution**. In mathematics (in particular in functional analysis),
convolution is represented by matheematical operation (integration,
summation etc) on two function in order to produce a third function
that expresses how the shape of one gets modified by the other.
Convolution has a plethora of applications in a variety of disciplines, spanning from statistics to signal processing, computer vision, solutions of differential equations,linear algebra, engineering, and yes, machine learning.
Mathematically, convolution is defined as follows (one-dimensional example):
Let us define a continuous function $y(t)$ given by
$$
y(t) = \int x(a) w(t-a) da,
$$
where $x(a)$ represents a so-called input and $w(t-a)$ is normally called the weight function or kernel.
The above integral is written in a more compact form as
$$
y(t) = \left(x * w\right)(t).
$$
The discretized version reads
$$
y(t) = \sum_{a=-\infty}^{a=\infty}x(a)w(t-a).
$$
Computing the inverse of the above convolution operations is known as deconvolution.
How can we use this? And what does it mean? Let us study some familiar examples first.
## Convolution Examples: Polynomial multiplication
We have already met such an example in project 1 when we tried to set
up the design matrix for a two-dimensional function. This was an
example of polynomial multiplication. Let us recast such a problem in terms of the convolution operation.
Let us look a the following polynomials to second and third order, respectively:
$$
p(t) = \alpha_0+\alpha_1 t+\alpha_2 t^2,
$$
and
$$
s(t) = \beta_0+\beta_1 t+\beta_2 t^2+\beta_3 t^3.
$$
The polynomial multiplication gives us a new polynomial of degree $5$
$$
z(t) = \delta_0+\delta_1 t+\delta_2 t^2+\delta_3 t^3+\delta_4 t^4+\delta_5 t^5.
$$
## Efficient Polynomial Multiplication
Computing polynomial products can be implemented efficiently if we rewrite the more brute force multiplications using convolution.
We note first that the new coefficients are given as
$$
\begin{split}
\delta_0=&\alpha_0\beta_0\\
\delta_1=&\alpha_1\beta_0+\alpha_1\beta_0\\
\delta_2=&\alpha_0\beta_2+\alpha_1\beta_1+\alpha_2\beta_0\\
\delta_3=&\alpha_1\beta_2+\alpha_2\beta_1+\alpha_0\beta_3\\
\delta_4=&\alpha_2\beta_2+\alpha_1\beta_3\\
\delta_5=&\alpha_2\beta_3.\\
\end{split}
$$
We note that $\alpha_i=0$ except for $i\in \left\{0,1,2\right\}$ and $\beta_i=0$ except for $i\in\left\{0,1,2,3\right\}$.
We can then rewrite the coefficients $\delta_j$ using a discrete convolution as
$$
\delta_j = \sum_{i=-\infty}^{i=\infty}\alpha_i\beta_{j-i}=(\alpha * \beta)_j,
$$
or as a double sum with restriction $l=i+j$
$$
\delta_l = \sum_{ij}\alpha_i\beta_{j}.
$$
Do you see a potential drawback with these equations?
## A more efficient way of coding the above Convolution
Since we only have a finite number of $\alpha$ and $\beta$ values
which are non-zero, we can rewrite the above convolution expressions
as a matrix-vector multiplication
$$
\boldsymbol{\delta}=\begin{bmatrix}\alpha_0 & 0 & 0 & 0 \\
\alpha_1 & \alpha_0 & 0 & 0 \\
\alpha_2 & \alpha_1 & \alpha_0 & 0 \\
0 & \alpha_2 & \alpha_1 & \alpha_0 \\
0 & 0 & \alpha_2 & \alpha_1 \\
0 & 0 & 0 & \alpha_2
\end{bmatrix}\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3\end{bmatrix}.
$$
The process is commutative and we can easily see that we can rewrite the multiplication in terms of a matrix holding $\beta$ and a vector holding $\alpha$.
In this case we have
$$
\boldsymbol{\delta}=\begin{bmatrix}\beta_0 & 0 & 0 \\
\beta_1 & \beta_0 & 0 \\
\beta_2 & \beta_1 & \beta_0 \\
\beta_3 & \beta_2 & \beta_1 \\
0 & \beta_3 & \beta_2 \\
0 & 0 & \beta_3
\end{bmatrix}\begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \alpha_2\end{bmatrix}.
$$
Note that the use of these matrices is for mathematical purposes only and not implementation purposes.
When implementing the above equation we do not encode (and allocate memory) the matrices explicitely.
We rather code the convolutions in the minimal memory footprint that they require.
Does the number of floating point operations change here when we use the commutative property?
## Convolution Examples: Principle of Superposition and Periodic Forces (Fourier Transforms)
For problems with so-called harmonic oscillations, given by for example the following differential equation
$$
m\frac{d^2x}{dt^2}+\eta\frac{dx}{dt}+x(t)=F(t),
$$
where $F(t)$ is an applied external force acting on the system (often called a driving force), one can use the theory of Fourier transformations to find the solutions of this type of equations.
If one has several driving forces, $F(t)=\sum_n F_n(t)$, one can find
the particular solution to each $F_n$, $x_{pn}(t)$, and the particular
solution for the entire driving force is then given by a series like
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
x_p(t)=\sum_nx_{pn}(t).
\label{_auto3} \tag{21}
\end{equation}
$$
## Principle of Superposition
This is known as the principle of superposition. It only applies when
the homogenous equation is linear. If there were an anharmonic term
such as $x^3$ in the homogenous equation, then when one summed various
solutions, $x=(\sum_n x_n)^2$, one would get cross
terms. Superposition is especially useful when $F(t)$ can be written
as a sum of sinusoidal terms, because the solutions for each
sinusoidal (sine or cosine) term is analytic.
Driving forces are often periodic, even when they are not
sinusoidal. Periodicity implies that for some time $\tau$
$$
\begin{eqnarray}
F(t+\tau)=F(t).
\end{eqnarray}
$$
One example of a non-sinusoidal periodic force is a square wave. Many
components in electric circuits are non-linear, e.g. diodes, which
makes many wave forms non-sinusoidal even when the circuits are being
driven by purely sinusoidal sources.
## Simple Code Example
The code here shows a typical example of such a square wave generated using the functionality included in the **scipy** Python package. We have used a period of $\tau=0.2$.
```python
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t)
plt.plot(t, SqrSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
For the sinusoidal example the
period is $\tau=2\pi/\omega$. However, higher harmonics can also
satisfy the periodicity requirement. In general, any force that
satisfies the periodicity requirement can be expressed as a sum over
harmonics,
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
F(t)=\frac{f_0}{2}+\sum_{n>0} f_n\cos(2n\pi t/\tau)+g_n\sin(2n\pi t/\tau).
\label{_auto4} \tag{22}
\end{equation}
$$
## Wrapping up Fourier transforms
We can write down the answer for
$x_{pn}(t)$, by substituting $f_n/m$ or $g_n/m$ for $F_0/m$. By
writing each factor $2n\pi t/\tau$ as $n\omega t$, with $\omega\equiv
2\pi/\tau$,
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef1"></div>
$$
\begin{equation}
\label{eq:fourierdef1} \tag{23}
F(t)=\frac{f_0}{2}+\sum_{n>0}f_n\cos(n\omega t)+g_n\sin(n\omega t).
\end{equation}
$$
The solutions for $x(t)$ then come from replacing $\omega$ with
$n\omega$ for each term in the particular solution,
$$
\begin{eqnarray}
x_p(t)&=&\frac{f_0}{2k}+\sum_{n>0} \alpha_n\cos(n\omega t-\delta_n)+\beta_n\sin(n\omega t-\delta_n),\\
\nonumber
\alpha_n&=&\frac{f_n/m}{\sqrt{((n\omega)^2-\omega_0^2)+4\beta^2n^2\omega^2}},\\
\nonumber
\beta_n&=&\frac{g_n/m}{\sqrt{((n\omega)^2-\omega_0^2)+4\beta^2n^2\omega^2}},\\
\nonumber
\delta_n&=&\tan^{-1}\left(\frac{2\beta n\omega}{\omega_0^2-n^2\omega^2}\right).
\end{eqnarray}
$$
## Finding the Coefficients
Because the forces have been applied for a long time, any non-zero
damping eliminates the homogenous parts of the solution, so one need
only consider the particular solution for each $n$.
The problem is considered solved if one can find expressions for the
coefficients $f_n$ and $g_n$, even though the solutions are expressed
as an infinite sum. The coefficients can be extracted from the
function $F(t)$ by
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef2"></div>
$$
\begin{eqnarray}
\label{eq:fourierdef2} \tag{24}
f_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\cos(2n\pi t/\tau),\\
\nonumber
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\sin(2n\pi t/\tau).
\end{eqnarray}
$$
To check the consistency of these expressions and to verify
Eq. ([24](#eq:fourierdef2)), one can insert the expansion of $F(t)$ in
Eq. ([23](#eq:fourierdef1)) into the expression for the coefficients in
Eq. ([24](#eq:fourierdef2)) and see whether
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~\left\{
\frac{f_0}{2}+\sum_{m>0}f_m\cos(m\omega t)+g_m\sin(m\omega t)
\right\}\cos(n\omega t).
\end{eqnarray}
$$
Immediately, one can throw away all the terms with $g_m$ because they
convolute an even and an odd function. The term with $f_0/2$
disappears because $\cos(n\omega t)$ is equally positive and negative
over the interval and will integrate to zero. For all the terms
$f_m\cos(m\omega t)$ appearing in the sum, one can use angle addition
formulas to see that $\cos(m\omega t)\cos(n\omega
t)=(1/2)(\cos[(m+n)\omega t]+\cos[(m-n)\omega t]$. This will integrate
to zero unless $m=n$. In that case the $m=n$ term gives
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
\int_{-\tau/2}^{\tau/2}dt~\cos^2(m\omega t)=\frac{\tau}{2},
\label{_auto5} \tag{25}
\end{equation}
$$
and
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~f_n/2\\
\nonumber
&=&f_n~\checkmark.
\end{eqnarray}
$$
The same method can be used to check for the consistency of $g_n$.
## Final words on Fourier Transforms
The code here uses the Fourier series applied to a
square wave signal. The code here
visualizes the various approximations given by Fourier series compared
with a square wave with period $T=0.2$ (dimensionless time), width $0.1$ and max value of the force $F=2$. We
see that when we increase the number of components in the Fourier
series, the Fourier series approximation gets closer and closer to the
square wave signal.
```python
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
T =0.2
# Max value of square signal
Fmax= 2.0
# Width of signal
Width = 0.1
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
FourierSeriesSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t+np.pi*Width/T)
a0 = Fmax*Width/T
FourierSeriesSignal = a0
Factor = 2.0*Fmax/np.pi
for i in range(1,500):
FourierSeriesSignal += Factor/(i)*np.sin(np.pi*i*Width/T)*np.cos(i*t*2*np.pi/T)
plt.plot(t, SqrSignal)
plt.plot(t, FourierSeriesSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
## Two-dimensional Objects
We often use convolutions over more than one dimension at a time. If
we have a two-dimensional image $I$ as input, we can have a **filter**
defined by a two-dimensional **kernel** $K$. This leads to an output $S$
$$
S_(i,j)=(I * K)(i,j) = \sum_m\sum_n I(m,n)K(i-m,j-n).
$$
Convolution is a commutatitave process, which means we can rewrite this equation as
$$
S_(i,j)=(I * K)(i,j) = \sum_m\sum_n I(i-m,j-n)K(m,n).
$$
Normally the latter is more straightforward to implement in a machine elarning library since there is less variation in the range of values of $m$ and $n$.
## Cross-Correlation
Many deep learning libraries implement cross-correlation instead of convolution
$$
S_(i,j)=(I * K)(i,j) = \sum_m\sum_n I(i+m,j-+)K(m,n).
$$
## More on Dimensionalities
In feilds like signal processing (and imaging as well), one designs
so-called filters. These filters are defined by the convolutions and
are often hand-crafted. One may specify filters for smoothing, edge
detection, frequency reshaping, and similar operations. However with
neural networks the idea is to automatically learn the filters and use
many of them in conjunction with non-linear operations (activation
functions).
As an example consider a neural network operating on sound sequence
data. Assume that we an input vector $\boldsymbol{x}$ of length $d=10^6$. We
construct then a neural network with onle hidden layer only with
$10^4$ nodes. This means that we will have a weight matrix with
$10^4\times 10^6=10^{10}$ weights to be determined, together with $10^4$ biases.
Assume furthermore that we have an output layer which is meant to train whether the sound sequence represents a human voice (true) or something else (false).
It means that we have only one output node. But since this output node connects to $10^4$ nodes in the hidden layer, there are in total $10^4$ weights to be determined for the output layer, plus one bias. In total we have
$$
\mathrm{NumberParameters}=10^{10}+10^4+10^4+1 \approx 10^{10},
$$
that is ten billion parameters to determine.
## Further Dimensionality Remarks
In today’s architecture one can train such neural networks, however
this is a huge number of parameters for the task at hand. In general,
it is a very wasteful and inefficient use of dense matrices as
parameters. Just as importantly, such trained network parameters are
very specific for the type of input data on which they were trained
and the network is not likely to generalize easily to variations in
the input.
The main principles that justify convolutions is locality of
information and repetion of patterns within the signal. Sound samples
of the input in adjacent spots are much more likely to affect each
other than those that are very far away. Similarly, sounds are
repeated in multiple times in the signal. While slightly simplistic,
reasoning about such a sound example demonstrates this. The same
principles then apply to images and other similar data.
## CNNs in more detail, Lecture from IN5400
* [Lectures from IN5400 spring 2019](https://www.uio.no/studier/emner/matnat/ifi/IN5400/v19/material/week5/in5400_2019_week5_convolutional_nerual_networks.pdf)
## CNNs in more detail, building convolutional neural networks in Tensorflow and Keras
As discussed above, CNNs are neural networks built from the assumption that the inputs
to the network are 2D images. This is important because the number of features or pixels in images
grows very fast with the image size, and an enormous number of weights and biases are needed in order to build an accurate network.
As before, we still have our input, a hidden layer and an output. What's novel about convolutional networks
are the **convolutional** and **pooling** layers stacked in pairs between the input and the hidden layer.
In addition, the data is no longer represented as a 2D feature matrix, instead each input is a number of 2D
matrices, typically 1 for each color dimension (Red, Green, Blue).
## Setting it up
It means that to represent the entire
dataset of images, we require a 4D matrix or **tensor**. This tensor has the dimensions:
$$
(n_{inputs},\, n_{pixels, width},\, n_{pixels, height},\, depth) .
$$
## The MNIST dataset again
The MNIST dataset consists of grayscale images with a pixel size of
$28\times 28$, meaning we require $28 \times 28 = 724$ weights to each
neuron in the first hidden layer.
If we were to analyze images of size $128\times 128$ we would require
$128 \times 128 = 16384$ weights to each neuron. Even worse if we were
dealing with color images, as most images are, we have an image matrix
of size $128\times 128$ for each color dimension (Red, Green, Blue),
meaning 3 times the number of weights $= 49152$ are required for every
single neuron in the first hidden layer.
## Strong correlations
Images typically have strong local correlations, meaning that a small
part of the image varies little from its neighboring regions. If for
example we have an image of a blue car, we can roughly assume that a
small blue part of the image is surrounded by other blue regions.
Therefore, instead of connecting every single pixel to a neuron in the
first hidden layer, as we have previously done with deep neural
networks, we can instead connect each neuron to a small part of the
image (in all 3 RGB depth dimensions). The size of each small area is
fixed, and known as a [receptive](https://en.wikipedia.org/wiki/Receptive_field).
<!-- !split -->
## Layers of a CNN
The layers of a convolutional neural network arrange neurons in 3D: width, height and depth.
The input image is typically a square matrix of depth 3.
A **convolution** is performed on the image which outputs
a 3D volume of neurons. The weights to the input are arranged in a number of 2D matrices, known as **filters**.
Each filter slides along the input image, taking the dot product
between each small part of the image and the filter, in all depth
dimensions. This is then passed through a non-linear function,
typically the **Rectified Linear (ReLu)** function, which serves as the
activation of the neurons in the first convolutional layer. This is
further passed through a **pooling layer**, which reduces the size of the
convolutional layer, e.g. by taking the maximum or average across some
small regions, and this serves as input to the next convolutional
layer.
## Systematic reduction
By systematically reducing the size of the input volume, through
convolution and pooling, the network should create representations of
small parts of the input, and then from them assemble representations
of larger areas. The final pooling layer is flattened to serve as
input to a hidden layer, such that each neuron in the final pooling
layer is connected to every single neuron in the hidden layer. This
then serves as input to the output layer, e.g. a softmax output for
classification.
## Prerequisites: Collect and pre-process data
```python
# import necessary packages
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
# ensure the same random numbers appear every time
np.random.seed(0)
# display images in notebook
%matplotlib inline
plt.rcParams['figure.figsize'] = (12,12)
# download MNIST dataset
digits = datasets.load_digits()
# define inputs and labels
inputs = digits.images
labels = digits.target
# RGB images have a depth of 3
# our images are grayscale so they should have a depth of 1
inputs = inputs[:,:,:,np.newaxis]
print("inputs = (n_inputs, pixel_width, pixel_height, depth) = " + str(inputs.shape))
print("labels = (n_inputs) = " + str(labels.shape))
# choose some random images to display
n_inputs = len(inputs)
indices = np.arange(n_inputs)
random_indices = np.random.choice(indices, size=5)
for i, image in enumerate(digits.images[random_indices]):
plt.subplot(1, 5, i+1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title("Label: %d" % digits.target[random_indices[i]])
plt.show()
```
## Importing Keras and Tensorflow
```python
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Sequential #This allows appending layers to existing models
from tensorflow.keras.layers import Dense #This allows defining the characteristics of a particular layer
from tensorflow.keras import optimizers #This allows using whichever optimiser we want (sgd,adam,RMSprop)
from tensorflow.keras import regularizers #This allows using whichever regularizer we want (l1,l2,l1_l2)
from tensorflow.keras.utils import to_categorical #This allows using categorical cross entropy as the cost function
#from tensorflow.keras import Conv2D
#from tensorflow.keras import MaxPooling2D
#from tensorflow.keras import Flatten
from sklearn.model_selection import train_test_split
# representation of labels
labels = to_categorical(labels)
# split into train and test data
# one-liner from scikit-learn library
train_size = 0.8
test_size = 1 - train_size
X_train, X_test, Y_train, Y_test = train_test_split(inputs, labels, train_size=train_size,
test_size=test_size)
```
<!-- !split -->
## Running with Keras
```python
def create_convolutional_neural_network_keras(input_shape, receptive_field,
n_filters, n_neurons_connected, n_categories,
eta, lmbd):
model = Sequential()
model.add(layers.Conv2D(n_filters, (receptive_field, receptive_field), input_shape=input_shape, padding='same',
activation='relu', kernel_regularizer=regularizers.l2(lmbd)))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(n_neurons_connected, activation='relu', kernel_regularizer=regularizers.l2(lmbd)))
model.add(layers.Dense(n_categories, activation='softmax', kernel_regularizer=regularizers.l2(lmbd)))
sgd = optimizers.SGD(lr=eta)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
return model
epochs = 100
batch_size = 100
input_shape = X_train.shape[1:4]
receptive_field = 3
n_filters = 10
n_neurons_connected = 50
n_categories = 10
eta_vals = np.logspace(-5, 1, 7)
lmbd_vals = np.logspace(-5, 1, 7)
```
## Final part
```python
CNN_keras = np.zeros((len(eta_vals), len(lmbd_vals)), dtype=object)
for i, eta in enumerate(eta_vals):
for j, lmbd in enumerate(lmbd_vals):
CNN = create_convolutional_neural_network_keras(input_shape, receptive_field,
n_filters, n_neurons_connected, n_categories,
eta, lmbd)
CNN.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size, verbose=0)
scores = CNN.evaluate(X_test, Y_test)
CNN_keras[i][j] = CNN
print("Learning rate = ", eta)
print("Lambda = ", lmbd)
print("Test accuracy: %.3f" % scores[1])
print()
```
## Final visualization
```python
# visual representation of grid search
# uses seaborn heatmap, could probably do this in matplotlib
import seaborn as sns
sns.set()
train_accuracy = np.zeros((len(eta_vals), len(lmbd_vals)))
test_accuracy = np.zeros((len(eta_vals), len(lmbd_vals)))
for i in range(len(eta_vals)):
for j in range(len(lmbd_vals)):
CNN = CNN_keras[i][j]
train_accuracy[i][j] = CNN.evaluate(X_train, Y_train)[1]
test_accuracy[i][j] = CNN.evaluate(X_test, Y_test)[1]
fig, ax = plt.subplots(figsize = (10, 10))
sns.heatmap(train_accuracy, annot=True, ax=ax, cmap="viridis")
ax.set_title("Training Accuracy")
ax.set_ylabel("$\eta$")
ax.set_xlabel("$\lambda$")
plt.show()
fig, ax = plt.subplots(figsize = (10, 10))
sns.heatmap(test_accuracy, annot=True, ax=ax, cmap="viridis")
ax.set_title("Test Accuracy")
ax.set_ylabel("$\eta$")
ax.set_xlabel("$\lambda$")
plt.show()
```
## The CIFAR01 data set
The CIFAR10 dataset contains 60,000 color images in 10 classes, with
6,000 images in each class. The dataset is divided into 50,000
training images and 10,000 testing images. The classes are mutually
exclusive and there is no overlap between them.
```python
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
# We import the data set
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1 by dividing by 255.
train_images, test_images = train_images / 255.0, test_images / 255.0
```
## Verifying the data set
To verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image.
```python
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
# The CIFAR labels happen to be arrays,
# which is why you need the extra index
plt.xlabel(class_names[train_labels[i][0]])
plt.show()
```
## Set up the model
The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.
As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure our CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument input_shape to our first layer.
```python
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
# Let's display the architecture of our model so far.
model.summary()
```
You can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g., 32 or 64). Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer.
## Add Dense layers on top
To complete our model, you will feed the last output tensor from the
convolutional base (of shape (4, 4, 64)) into one or more Dense layers
to perform classification. Dense layers take vectors as input (which
are 1D), while the current output is a 3D tensor. First, you will
flatten (or unroll) the 3D output to 1D, then add one or more Dense
layers on top. CIFAR has 10 output classes, so you use a final Dense
layer with 10 outputs and a softmax activation.
```python
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
Here's the complete architecture of our model.
model.summary()
```
As you can see, our (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers.
## Compile and train the model
```python
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
validation_data=(test_images, test_labels))
```
## Finally, evaluate the model
```python
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
```
| d31d104f07f122737b3c909af4530f7cf9e17daa | 169,272 | ipynb | Jupyter Notebook | Lectures/week42.ipynb | adelezaini/MachineLearning | dc3f34f5d509bed6a993705373c46be4da3f97db | [
"MIT"
] | null | null | null | Lectures/week42.ipynb | adelezaini/MachineLearning | dc3f34f5d509bed6a993705373c46be4da3f97db | [
"MIT"
] | null | null | null | Lectures/week42.ipynb | adelezaini/MachineLearning | dc3f34f5d509bed6a993705373c46be4da3f97db | [
"MIT"
] | null | null | null | 37.466135 | 3,992 | 0.569982 | true | 32,633 | Qwen/Qwen-72B | 1. YES
2. YES | 0.699254 | 0.879147 | 0.614747 | __label__eng_Latn | 0.985281 | 0.266594 |
```python
import numpy as np
import matplotlib.pyplot as plt
import math
from mpl_toolkits.mplot3d import Axes3D
from scipy.ndimage.morphology import distance_transform_edt
```
#### Gradient Ascent
\begin{align}
\mathbf{r}_{i+1}&=\mathbf{r}_i+\eta\Delta \mathbf{r} \\
\Delta\mathbf{r} &\sim -\frac{\nabla \mathbf{f}}{\|\nabla \mathbf{f}\|}
\end{align}
where $\mathbf{f}$ the potential field, $\nabla$ the gradient, $i$ the iteration of the for-loop, $\eta$ the rate of change constant and $\mathbf{r}$ the position.
```python
def mesh(X,Y,Z):
ax = plt.gca()
ax.plot_surface(X,Y,Z, rstride=1, cstride=1,
cmap='viridis', edgecolor='none')
# ax.contour3D(x, y, repulmap, 50)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.view_init(70,-110)
```
```python
def round2(n):
return np.floor(n+ 0.5).astype(int)
```
```python
class PotentialFieldPathDemo:
def __init__(self):
self.nrows = 400
self.ncols = 600
self.d0 = 2
self.nu = 800
self.start = np.array([50,350])
self.goal = np.array([400,50])
self.xi = 1/700
self.x,self.y=np.meshgrid(np.linspace(1,self.ncols,self.ncols),
np.linspace(1,self.nrows,self.nrows))
self.maxIter = 1000
def generateObstacle(self):
obstacle = False*np.ones((self.nrows,self.ncols))
obstacle[299:,99:249] = True
obstacle[149:199, 399:499] = True
t = ((self.x-200)**2+(self.y-50)**2) < 50**2
obstacle[t] = True
t = ((self.x-400)**2+(self.y-300)**2)< 100**2
obstacle[t] = True
d = distance_transform_edt(1-obstacle)
d2 = d/100 + 1
repulsive=self.nu*((1/d2-1/self.d0)**2)
repulsive[d2>self.d0] = 0
return obstacle,repulsive,self.x,self.y
def generateAttractive(self):
attractive=self.xi*((self.x-self.goal[0])**2+
(self.y-self.goal[1])**2)
return attractive,self.x,self.y
def GradientBasedPlanner(self,f):
gy,gx = np.gradient(-f)
route = self.start.reshape(-1,2).astype(float);
rate = 1
current = route[0,:]
G = np.sqrt(gx**2+gy**2); gx /= G; gy /= G
for i in range(self.maxIter):
tmpx = round2(current[1])
tmpy = round2(current[0])
current+=rate*np.array([gx[tmpx,tmpy],gy[tmpx,tmpy]])
if np.sum(current<=0):
break
elif np.prod(round2(current)==self.goal):
print('yes')
break
route = np.concatenate((route,
np.array(current).reshape(-1,2)))
route = np.concatenate((route,
np.array(current).reshape(-1,2)))
return route
```
```python
demo = PotentialFieldPathDemo()
obsmap,repulmap,x,y = demo.generateObstacle()
attmap,_,_ = demo.generateAttractive()
f = repulmap+attmap
route = demo.GradientBasedPlanner(f)
```
```python
plt.figure(figsize=(20,10))
plt.subplot(221,projection='3d'); mesh(x,y,repulmap)
plt.subplot(222,projection='3d'); mesh(x,y,attmap)
plt.subplot(223,projection='3d'); mesh(x,y,f)
plt.subplot(224);
plt.imshow(obsmap)
plt.plot(route[:,0],route[:,1],'-',linewidth=5)
dxdy = route[10,:] - route[0,:]
plt.arrow(route[0,0],route[0,1],dxdy[0],dxdy[1],width=15)
plt.plot(demo.start[0],demo.start[1],
'rp',markersize=15)
plt.plot(demo.goal[0],demo.goal[1],
'r*',markersize=15)
```
```python
```
| 42294f67b92631841c11d70978aeb5a455ac1546 | 404,681 | ipynb | Jupyter Notebook | Robotics/PotentialFieldPlanPath/.ipynb_checkpoints/PotentialFieldPath-checkpoint.ipynb | zcemycl/ProbabilisticPerspectiveMachineLearning | 8291bc6cb935c5b5f9a88f7b436e6e42716c21ae | [
"MIT"
] | 4 | 2019-11-20T10:20:29.000Z | 2021-11-09T11:15:23.000Z | Computational Motion Planning/PotentialFieldPlanPath/PotentialFieldPath.ipynb | kasiv008/Robotics | 302b3336005acd81202ebbbb0c52a4b2692fa9c7 | [
"MIT"
] | null | null | null | Computational Motion Planning/PotentialFieldPlanPath/PotentialFieldPath.ipynb | kasiv008/Robotics | 302b3336005acd81202ebbbb0c52a4b2692fa9c7 | [
"MIT"
] | 2 | 2020-05-27T03:56:38.000Z | 2021-05-02T13:15:42.000Z | 2,013.338308 | 398,696 | 0.960658 | true | 1,058 | Qwen/Qwen-72B | 1. YES
2. YES | 0.923039 | 0.72487 | 0.669084 | __label__eng_Latn | 0.350791 | 0.392837 |
# Free-Body Diagram for particles
> Renato Naville Watanabe
> [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab)
> Federal University of ABC, Brazil
<h1>Contents<span class="tocSkip"></span></h1><br>
<div class="toc"><ul class="toc-item"><li><span><a href="#Python-setup" data-toc-modified-id="Python-setup-1"><span class="toc-item-num">1 </span>Python setup</a></span></li><li><span><a href="#Free-Body-Diagram" data-toc-modified-id="Free-Body-Diagram-2"><span class="toc-item-num">2 </span>Free-Body Diagram</a></span></li><li><span><a href="#Steps-to-draw-a-free-body-diagram-(FBD)" data-toc-modified-id="Steps-to-draw-a-free-body-diagram-(FBD)-3"><span class="toc-item-num">3 </span>Steps to draw a free-body diagram (FBD)</a></span></li><li><span><a href="#Basic-element-and-forces" data-toc-modified-id="Basic-element-and-forces-4"><span class="toc-item-num">4 </span>Basic element and forces</a></span><ul class="toc-item"><li><span><a href="#Gravity" data-toc-modified-id="Gravity-4.1"><span class="toc-item-num">4.1 </span>Gravity</a></span><li><span><a href="#Spring" data-toc-modified-id="Spring-4.2"><span class="toc-item-num">4.2 </span>Spring</a></span><li><span><a href="#Damping" data-toc-modified-id="Damping-4.3"><span class="toc-item-num">4.3 </span>Damping</a></span></li></ul><li><span><a href="#Examples-of-free-body-diagram" data-toc-modified-id="Examples-of-free-body-diagram-5"><span class="toc-item-num">5 </span>Examples of free-body diagram</a></span><ul class="toc-item"><li><span><a href="#No-force-acting-on-the-particle" data-toc-modified-id="No-force-acting-on-the-particle-5.1"><span class="toc-item-num">5.1 </span>No force acting on the particle</a></span><li><span><a href="#Gravity-force-acting-on-the-particle" data-toc-modified-id="Gravity-force-acting-on-the-particle-5.2"><span class="toc-item-num">5.2 </span>Gravity force acting on the particle</a><li><span><a href="#Ground-reaction-force" data-toc-modified-id="Ground-reaction-force-5.3"><span class="toc-item-num">5.3 </span>Ground reaction force</a><li><span><a href="#Mass-spring-system-with-horizontal-movement" data-toc-modified-id="Mass-spring-system-with-horizontal-movement-5.4"><span class="toc-item-num">5.4 </span>Mass-spring system with horizontal movement</a></span><li><span><a href="#Linear-spring-in-bidimensional-movement-at-horizontal-plane" data-toc-modified-id="Linear-spring-in-bidimensional-movement-at-horizontal-plane-5.5"><span class="toc-item-num">5.5 </span>Linear spring in bidimensional movement at horizontal plane</a></span><li><span><a href="#Particle-under-action-of-gravity-and-linear-air-resistance" data-toc-modified-id="Particle-under-action-of-gravity-and-linear-air-resistance-5.6"><span class="toc-item-num">5.6 </span>Particle under action of gravity and linear air resistance</a></span><li><span><a href="#Particle-under-action-of-gravity-and-nonlinear-air-resistance" data-toc-modified-id="Particle-under-action-of-gravity-and-nonlinear-air-resistance-5.7"><span class="toc-item-num">5.7 </span>Particle under action of gravity and nonlinear air resistance</a></span><li><span><a href="#Linear-spring-and-damping-on-bidimensional-horizontal-movement" data-toc-modified-id="Linear-spring-and-damping-on-bidimensional-horizontal-movement-5.8"><span class="toc-item-num">5.8 </span>Linear spring and damping on bidimensional horizontal movement</a></span><li><span><a href="#Simple-muscle-model" data-toc-modified-id="Simple-muscle-model-5.9"><span class="toc-item-num">5.9 </span>Simple muscle model</a></span></li></ul></li><li><span><a href="#Further-reading" data-toc-modified-id="Further-reading-6"><span class="toc-item-num">6 </span>Further reading</a></span></li><li><span><a href="#Video-lectures-on-the-internet" data-toc-modified-id="Video-lectures-on-the-internet-7"><span class="toc-item-num">7 </span>Video lectures on the internet</a></span></li><li><span><a href="#Problems" data-toc-modified-id="Problems-8"><span class="toc-item-num">8 </span>Problems</a></span></li><li><span><a href="#References" data-toc-modified-id="References-9"><span class="toc-item-num">9 </span>References</a></span></li></ul></div>
## Python setup
```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context('notebook', font_scale=1.2)
```
## Free-Body Diagram
In the mechanical modeling of an inanimate or living system, composed by one or more bodies (bodies as units that are mechanically isolated according to the question one is trying to answer), it is convenient to isolate each body (be they originally interconnected or not) and identify each force and moment of force (torque) that act on this body in order to apply the laws of mechanics.
**The free body diagram (FBD) of a mechanical system or model is the representation in a diagram of all forces and moments of force acting on each body, isolated from the rest of the system.**
The term free means that each body, which maybe was part of a connected system, is represented as isolated (free) and any existent contact force is represented in the diagram as forces (action and reaction) acting on the formerly connected bodies. Then, the laws of mechanics are applied on each body, and the unknown movement, force or moment of force can be found if the system of equations is determined (the number of unknown variables can not be greater than the number of equations for each body).
How exactly a FBD is drawn for a mechanical model of something is dependent on what one is trying to find. For example, the air resistance might be neglected or not when modeling the movement of an object and the number of parts the system is divided is dependent on what is needed to know about the model.
The use of FBD is very common in biomechanics; a typical use is to use the FBD in order to determine the forces and torques on the ankle, knee, and hip joints of the lower limb (foot, leg, and thigh) during locomotion, and the FBD can be applied to any problem where the laws of mechanics are needed to solve a problem.
For now, let's study how to draw free-body diagrams for systems that can be modeled as particles.
### Steps to draw a free-body diagram (FBD)
1. Draw separately each object considered in the problem. How you separate depends on what questions you want to answer.
2. Identify the forces acting on each object. If you are analyzing more than one object, remember the Newton's third Law (action and reaction), and identify where the reaction of a force is being applied.
3. Draw all the identified forces, representing them as vectors. The vectors should be represented with the origin in the object. In the case of particles, the origin should be in the center of the particle.
4. If necessary, you should represent the reference frame in the free-body diagram.
5. After this, you can solve the problem using the Newton's second Law (see, e.g, [Newton's Laws](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/Notebooks/newtonLawForParticles.ipynb)) to find the motion of the particle.
## Basic element and forces
### Gravity
The gravity force acts on two masses, each one atracting each other:
\begin{equation}
\vec{{\bf{F}}} = - G\frac{m_1m_2}{||\vec{\bf{r}}||^2}\frac{\vec{\bf{r}}}{||\vec{\bf{r}}||}
\end{equation}
where $G = 6.67.10^{−11} Nm^2/kg^2$ and $\vec{\bf{r}}$ is a vector with length equal to the distance between the masses and directing towards the other mass. Note the forces acting on each mass have the same absolute value.
Since the mass of the Earth is $m_1=5.9736×10^{24}kg$ and its radius is 6.371×10$^6$ m, the gravity force near the surface of the Earth is:
<span class="notranslate">
\begin{equation}
\vec{{\bf{F}}} = m\vec{\bf{g}}
\end{equation}
</span>
with the absolute value of $\vec{\bf{g}}$ approximately equal to 9.81 $m/s^2$, pointing towards the center of Earth.
### Spring
Spring is an element used to represent a force proportional to some length or displacement. It produces a force in the same direction of the vector linking the spring extremities and opposite to its length or displacement from an equilibrium length. Frequently it has a linear relation, but it could be nonlinear as well. The force exerted by the spring in one of the extremities is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = - k(||\vec{\bf{r}}||-l_0)\frac{\vec{\bf{r}}}{||\vec{\bf{r}}||} = -k\vec{\bf{r}} +kl_0\frac{\vec{\bf{r}}}{||\vec{\bf{r}}||} = -k\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}}
\end{equation}
</span>
where $\vec{\bf{r}}$ is the vector linking the extremity applying the force to the other extremity and $l_0$ is the equilibrium length of the spring.
Since the spring element is a massless element, the force in both extremities have the same absolute value and opposite directions.
### Damping
Damper is an element used to represent a force proportional to the velocity of displacement. It produces a force in the opposite direction of its velocity.
Frequently it has a linear relation, but it could be nonlinear as well. The force exerted by the damper element in one of its extremities is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = - b||\vec{\bf{v}}||\frac{\vec{\bf{v}}}{||\vec{\bf{v}}||} = -b\vec{\bf{v}} = -b\frac{d\vec{\bf{r}}}{dt}
\end{equation}
</span>
where $\vec{\bf{r}}$ is the vector linking the extremity applying the force to the other extremity.
Since the damper element is a massless element, , the force in both extremities have the same absolute value and opposite directions.
## Examples of free-body diagram
Let's see some examples on how to draw the free-body diagram and obtain the motion equations to solve the problems.
### 1. No force acting on the particle
The most trivial situation is a particle with no force acting on it.
The free-body diagram is below, with no force vectors acting on the particle.
<figure><center><figcaption><i>Figure. Free-body diagram of a ball with no force acting on it.</i></figcaption></center></figure>
In this situation, the resultant force is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = 0
\end{equation}
</span>
And the second Newton law for this particle is:
<span class="notranslate">
\begin{equation}
m\frac{d^2\vec{\bf{r}}}{dt^2} = 0 \quad \rightarrow \quad \frac{d^2\vec{\bf{r}}}{dt^2} = 0
\end{equation}
</span>
The motion of of the particle can be found by integrating twice both times, getting the following:
<span class="notranslate">
\begin{equation}
\vec{\bf{r}} = \vec{\bf{v}}_0t + \vec{\bf{r}}_0
\end{equation}
</span>
The particle continues to change its position with the same velocity it was at the beginning of the analysis. This could be predicted by Newton's first law.
### 2. Gravity force acting on the particle
Now, let's consider a ball with the gravity force acting on it. The free-body diagram is depicted below.
<figure><center><figcaption><i>Figure. Free-body diagram of a ball under the influence of gravity.</i></figcaption></center></figure>
The only force acting on the ball is the gravitational force:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}}_g = - mg \; \hat{\bf{j}}
\end{equation}
</span>
Applying Newton's second Law:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}}_g = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow - mg \; \hat{\bf{j}} = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow - g \; \hat{\bf{j}} = \frac{d^2\vec{\bf{r}}}{dt^2}
\end{equation}
</span>
Now, we can separate the equation in two components (x and y):
<span class="notranslate">
\begin{equation}
0 = \frac{d^2x}{dt^2}
\end{equation}
</span>
and
<span class="notranslate">
\begin{equation}
- g = \frac{d^2y}{dt^2}
\end{equation}
</span>
These equations were solved in [this Notebook about the Newton's laws](https://nbviewer.jupyter.org/github/BMClab/BMC/blob/master/notebooks/newtonLawForParticles.ipynb).
### 3. Ground reaction force
Now, we will analyze the situation of a particle at rest in contact with the ground. To simplify the analysis, only the vertical movement will be considered.
<figure><center><figcaption><i>Figure. Free-body diagram of a ball at rest in contact with the ground.</i></figcaption></center></figure>
The forces acting on the particle are the ground reaction force (often called as normal force) and the gravity force. The free-body diagram of the particle is below:
<figure><center><figcaption><i>Figure. Free-body diagram of a ball under the influence of gravity.</i></figcaption></center></figure>
So, the resultant force in the particle is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = \overrightarrow{\bf{GRF}} + m\vec{\bf{g}} = \overrightarrow{\bf{GRF}} - mg \; \hat{\bf{j}}
\end{equation}
</span>
Considering only the y direction:
<span class="notranslate">
\begin{equation}
F = GRF - mg
\end{equation}
</span>
Applying Newton's second law to the particle:
<span class="notranslate">
\begin{equation}
m \frac{d^2y}{dt^2} = GRF - mg
\end{equation}
</span>
Note that since we have no information about how the force GRF varies along time, we cannot solve this equation. To find the position of the particle along time, one would have to measure the ground reaction force. See [the notebook on Vertical jump](http://nbviewer.jupyter.org/github/BMClab/BMC/blob/master/notebooks/VerticalJump.ipynb) for an application of this model.
### 4. Mass-spring system with horizontal movement
The example below represents a mass attached to a spring and the other extremity of the spring is fixed.
<figure><center><figcaption><i>Figure. Mass-spring system with horizontal movement.</i></figcaption></center></figure>
The only force force acting on the mass is from the spring. Below is the free-body diagram from the mass.
<figure><center><figcaption><i>Figure. Free-body diagram of a mass-spring system.</i></figcaption></center></figure>
Since the movement is horizontal, we can neglect the gravity force.
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = -k\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}}
\end{equation}
</span>
Applying Newton's second law to the mass:
<span class="notranslate">
\begin{equation}
m\frac{d^2\vec{\bf{r}}}{dt^2} = -k\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}} \rightarrow \frac{d^2\vec{\bf{r}}}{dt^2} = -\frac{k}{m}\left(1-\frac{l_0}{||\vec{\bf{r}}||}\right)\vec{\bf{r}}
\end{equation}
</span>
Since the movement is unidimensional, we can deal with it scalarly:
<span class="notranslate">
\begin{equation}
\frac{d^2x}{dt^2} = -\frac{k}{m}\left(1-\frac{l_0}{x}\right)x = -\frac{k}{m}(x-l_0)
\end{equation}
</span>
To solve this equation numerically, we must break the equations into two first-order differential equation:
<span class="notranslate">
\begin{equation}
\frac{dv_x}{dt} = -\frac{k}{m}(x-l_0)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dx}{dt} = v_x
\end{equation}
</span>
In the numerical solution below, we will use $k = 40 N/m$, $m = 2 kg$, $l_0 = 0.5 m$ and the mass starts from the position $x = 0.8m$ and at rest.
```python
k = 40
m = 2
l0 = 0.5
x0 = 0.8
v0 = 0
x = x0
v = v0
dt = 0.001
t = np.arange(0, 3, dt)
r = np.array([x])
for i in t[1:]:
dxdt = v
dvxdt = -k/m*(x-l0)
x = x + dt*dxdt
v = v + dt*dvxdt
r = np.vstack((r,np.array([x])))
plt.figure(figsize=(8, 4))
plt.plot(t, r, lw=4)
plt.xlabel('t(s)')
plt.ylabel('x(m)')
plt.title('Spring displacement')
plt.show()
```
### 5. Linear spring in bidimensional movement at horizontal plane
This example below represents a system with two masses attached to a spring.
To solve the motion of both masses, we have to draw a free-body diagram for each one of the masses.
<figure><center><figcaption><i>Figure. Linear spring in bidimensional movement at horizontal plane.</i></figcaption></center></figure>
The only force acting on each mass is the force due to the spring. Since the movement is happening at the horizontal plane, the gravity force can be neglected.
<figure><center><figcaption><i>Figure. FBD of linear spring in bidimensional movement at horizontal plane.</i></figcaption></center></figure>
So, the forces acting on mass 1 is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F_1}} = k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}
\end{equation}
</span>
and the forces acting on mass 2 is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F_2}} =k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}
\end{equation}
</span>
Applying Newton's second law for the masses:
<span class="notranslate">
\begin{equation}
m_1\frac{d^2\vec{\bf{r_1}}}{dt^2} = k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}
\\
\frac{d^2\vec{\bf{r_1}}}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}}+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}}
\\
\frac{d^2x_1\hat{\bf{i}}}{dt^2}+\frac{d^2y_1\hat{\bf{j}}}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}})
\end{equation}
</span>
<br/>
<span class="notranslate">
\begin{equation}
m_2\frac{d^2\vec{\bf{r_2}}}{dt^2} = k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}
\\
\frac{d^2\vec{\bf{r_2}}}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}}+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}}
\\
\frac{d^2x_2\hat{\bf{i}}}{dt^2}+\frac{d^2y_2\hat{\bf{j}}}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}})+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})
\end{equation}
</span>
Now, we can separate the equations for each of the coordinates:
<span class="notranslate">
\begin{equation}
\frac{d^2x_1}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_1+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_2=-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2y_1}{dt^2} = -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_1+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_2=-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2x_2}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_2+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)x_1=-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2y_2}{dt^2} = -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_2+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)y_1=-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>
To solve these equations numerically, you must break these equations into first-order equations:
<span class="notranslate">
\begin{equation}
\frac{dv_{x_1}}{dt} = -\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dv_{y_1}}{dt} = -\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dv_{x_2}}{dt} = -\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dv_{y_2}}{dt} = -\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dx_1}{dt} = v_{x_1}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dy_1}{dt} = v_{y_1}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dx_2}{dt} = v_{x_2}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dy_2}{dt} = v_{y_2}
\end{equation}
</span>
Note that if you did not want to know the details about the motion of each mass, but only the motion of the center of mass of the masses-spring system, you could have modeled the whole system as a single particle.
To solve the equations numerically, we will use the $m_1=1 kg$, $m_2 = 2 kg$, $l_0 = 0.5 m$, $k = 90 N/m$ and $x_{1_0} = 0 m$, $x_{2_0} = 0 m$, $y_{1_0} = 1 m$, $y_{2_0} = -1 m$, $v_{x1_0} = -2 m/s$, $v_{x2_0} = 0 m/s$, $v_{y1_0} = 0 m/s$, $v_{y2_0} = 0 m/s$.
```python
x01 = 0
y01= 0.5
x02 = 0
y02 = -0.5
vx01 = 0.1
vy01 = 0
vx02 = -0.1
vy02 = 0
x1= x01
y1 = y01
x2= x02
y2 = y02
vx1= vx01
vy1 = vy01
vx2= vx02
vy2 = vy02
r1 = np.array([x1,y1])
r2 = np.array([x2,y2])
k = 30
m1 = 1
m2 = 1
l0 = 0.5
dt = 0.0001
t = np.arange(0,5,dt)
for i in t[1:]:
dvx1dt = -k/m1*(x1-x2)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
dvx2dt = -k/m2*(x2-x1)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
dvy1dt = -k/m1*(y1-y2)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
dvy2dt = -k/m2*(y2-y1)*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))
dx1dt = vx1
dx2dt = vx2
dy1dt = vy1
dy2dt = vy2
x1 = x1 + dt*dx1dt
x2 = x2 + dt*dx2dt
y1 = y1 + dt*dy1dt
y2 = y2 + dt*dy2dt
vx1 = vx1 + dt*dvx1dt
vx2 = vx2 + dt*dvx2dt
vy1 = vy1 + dt*dvy1dt
vy2 = vy2 + dt*dvy2dt
r1 = np.vstack((r1,np.array([x1,y1])))
r2 = np.vstack((r2,np.array([x2,y2])))
springLength = np.sqrt((r1[:,0]-r2[:,0])**2+(r1[:,1]-r2[:,1])**2)
plt.figure(figsize=(8, 4))
plt.plot(t, springLength, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Spring length (m)')
plt.show()
plt.figure(figsize=(8, 4))
plt.plot(r1[:,0], r1[:,1], 'r.', lw=4)
plt.plot(r2[:,0], r2[:,1], 'b.', lw=4)
plt.plot((m1*r1[:,0]+m2*r2[:,0])/(m1+m2), (m1*r1[:,1]+m2*r2[:,1])/(m1+m2),'g.')
plt.xlim(-0.7,0.7)
plt.ylim(-0.7,0.7)
plt.xlabel('x(m)')
plt.ylabel('y(m)')
plt.title('Masses position')
plt.legend(('Mass1','Mass 2','Masses center of mass'))
plt.show()
```
### 6. Particle under action of gravity and linear air resistance
Below is the free-body diagram of a particle with the gravity force and a linear drag force due to the air resistance.
<figure><center><figcaption><i>Figure. Particle under action of gravity and linear air resistance.</i></figcaption></center></figure>
the forces being applied in the ball are:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = -mg \hat{\bf{j}} - b\vec{\bf{v}} = -mg \hat{\bf{j}} - b\frac{d\vec{\bf{r}}}{dt} = -mg \hat{\bf{j}} - b\left(\frac{dx}{dt}\hat{\bf{i}}+\frac{dy}{dt}\hat{\bf{j}}\right) = - b\frac{dx}{dt}\hat{\bf{i}} - \left(mg + b\frac{dy}{dt}\right)\hat{\bf{j}}
\end{equation}
</span>
Writing down Newton's second law:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow - b\frac{dx}{dt}\hat{\bf{i}} - \left(mg + b\frac{dy}{dt}\right)\hat{\bf{j}} = m\left(\frac{d^2x}{dt^2}\hat{\bf{i}}+\frac{d^2y}{dt^2}\hat{\bf{j}}\right)
\end{equation}
</span>
Now, we can separate into one equation for each coordinate:
<span class="notranslate">
\begin{equation}
- b\frac{dx}{dt} = m\frac{d^2x}{dt^2} -\rightarrow \frac{d^2x}{dt^2} = -\frac{b}{m} \frac{dx}{dt}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
-mg - b\frac{dy}{dt} = m\frac{d^2y}{dt^2} \rightarrow \frac{d^2y}{dt^2} = -\frac{b}{m}\frac{dy}{dt} - g
\end{equation}
</span>
These equations were solved in [this notebook](https://nbviewer.jupyter.org/github/BMClab/BMC/blob/master/notebooks/newtonLawForParticles.ipynb).
### 7. Particle under action of gravity and nonlinear air resistance
Below, is the free-body diagram of a particle with the gravity force and a drag force due to the air resistance proportional to the square of the particle velocity.
<figure><center><figcaption><i>Figure. Particle under action of gravity and nonlinear air resistance.</i></figcaption></center></figure>
The forces being applied in the ball are:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = -mg \hat{\bf{j}} - bv^2\hat{\bf{e_t}} = -mg \hat{\bf{j}} - b (v_x^2+v_y^2) \frac{v_x\hat{\bf{i}}+v_y\hat{\bf{j}}}{\sqrt{v_x^2+v_y^2}} = -mg \hat{\bf{j}} - b \sqrt{v_x^2+v_y^2} \,(v_x\hat{\bf{i}}+v_y\hat{\bf{j}}) = -mg \hat{\bf{j}} - b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\left(\frac{dx}{dt} \hat{\bf{i}}+\frac{dy}{dt}\hat{\bf{j}}\right)
\end{equation}
</span>
Writing down Newton's second law:
<span class="notranslate">
\begin{equation}
\vec{\bf{F}} = m \frac{d^2\vec{\bf{r}}}{dt^2} \rightarrow -mg \hat{\bf{j}} - b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\left(\frac{dx}{dt} \hat{\bf{i}}+\frac{dy}{dt}\hat{\bf{j}}\right) = m\left(\frac{d^2x}{dt^2}\hat{\bf{i}}+\frac{d^2y}{dt^2}\hat{\bf{j}}\right)
\end{equation}
</span>
Now, we can separate into one equation for each coordinate:
<span class="notranslate">
\begin{equation}
- b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dx}{dt} = m\frac{d^2x}{dt^2} \rightarrow \frac{d^2x}{dt^2} = - \frac{b}{m} \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dx}{dt}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
-mg - b \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dy}{dt} = m\frac{d^2y}{dt^2} \rightarrow \frac{d^2y}{dt^2} = - \frac{b}{m} \sqrt{\left(\frac{dx}{dt} \right)^2+\left(\frac{dy}{dt} \right)^2} \,\frac{dy}{dt} -g
\end{equation}
</span>
These equations were solved numerically in [this notebook](https://nbviewer.jupyter.org/github/BMClab/BMC/blob/master/notebooks/newtonLawForParticles.ipynb).
### 8. Linear spring and damping on bidimensional horizontal movement
This situation is very similar to the example of horizontal movement with one spring and two masses, with a damper added in parallel to the spring.
<figure><center><figcaption><i>Figure. Linear spring and damping on bidimensional horizontal movement.</i></figcaption></center></figure>
Now, the forces acting on each mass are the force due to the spring and the force due to the damper.
<figure><center><figcaption><i>Figure. FBD of linear spring and damping on bidimensional horizontal movement.</i></figcaption></center></figure>
So, the forces acting on mass 1 is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F_1}} = b\frac{d(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{dt} + k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||} = b\frac{d(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{dt} + k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_2}}-\vec{\bf{r_1}})
\end{equation}
</span>
and the forces acting on mass 2 is:
<span class="notranslate">
\begin{equation}
\vec{\bf{F_2}} = b\frac{d(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{dt} + k\left(||\vec{\bf{r_2}}-\vec{\bf{r_1}}||-l_0\right)\frac{(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{||\vec{\bf{r_1}}-\vec{\bf{r_2}}||}= b\frac{d(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{dt} + k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_1}}-\vec{\bf{r_2}})
\end{equation}
</span>
Applying the Newton's second law for the masses:
<span class="notranslate">
\begin{equation}
m_1\frac{d^2\vec{\bf{r_1}}}{dt^2} = b\frac{d(\vec{\bf{r_2}}-\vec{\bf{r_1}})}{dt}+k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_2}}-\vec{\bf{r_1}})
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2\vec{\bf{r_1}}}{dt^2} = -\frac{b}{m_1}\frac{d\vec{\bf{r_1}}}{dt} -\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}} + \frac{b}{m_1}\frac{d\vec{\bf{r_2}}}{dt}+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2x_1\hat{\bf{i}}}{dt^2}+\frac{d^2y_1\hat{\bf{j}}}{dt^2} = -\frac{b}{m_1}\left(\frac{dx_1\hat{\bf{i}}}{dt}+\frac{dy_1\hat{\bf{j}}}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})+\frac{b}{m_1}\left(\frac{dx_2\hat{\bf{i}}}{dt}+\frac{dy_2\hat{\bf{j}}}{dt}\right)+\frac{k}{m_1}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}}) = -\frac{b}{m_1}\left(\frac{dx_1\hat{\bf{i}}}{dt}+\frac{dy_1\hat{\bf{j}}}{dt}-\frac{dx_2\hat{\bf{i}}}{dt}-\frac{dy_2\hat{\bf{j}}}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}}-x_2\hat{\bf{i}}-y_2\hat{\bf{j}})
\end{equation}
</span>
\begin{equation}
m_2\frac{d^2\vec{\bf{r_2}}}{dt^2} = b\frac{d(\vec{\bf{r_1}}-\vec{\bf{r_2}})}{dt}+k\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(\vec{\bf{r_1}}-\vec{\bf{r_2}})
\end{equation}
\begin{equation}
\frac{d^2\vec{\bf{r_2}}}{dt^2} = -\frac{b}{m_2}\frac{d\vec{\bf{r_2}}}{dt} -\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_2}} + \frac{b}{m_2}\frac{d\vec{\bf{r_1}}}{dt}+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)\vec{\bf{r_1}}
\end{equation}
\begin{equation}
\frac{d^2x_2\hat{\bf{i}}}{dt^2}+\frac{d^2y_2\hat{\bf{j}}}{dt^2} = -\frac{b}{m_2}\left(\frac{dx_2\hat{\bf{i}}}{dt}+\frac{dy_2\hat{\bf{j}}}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}})+\frac{b}{m_2}\left(\frac{dx_1\hat{\bf{i}}}{dt}+\frac{dy_1\hat{\bf{j}}}{dt}\right)+\frac{k}{m_2}\left(1-\frac{l_0}{||\vec{\bf{r_2}}-\vec{\bf{r_1}}||}\right)(x_1\hat{\bf{i}}+y_1\hat{\bf{j}})=-\frac{b}{m_2}\left(\frac{dx_2\hat{\bf{i}}}{dt}+\frac{dy_2\hat{\bf{j}}}{dt}-\frac{dx_1\hat{\bf{i}}}{dt}-\frac{dy_1\hat{\bf{j}}}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2\hat{\bf{i}}+y_2\hat{\bf{j}}-x_1\hat{\bf{i}}-y_1\hat{\bf{j}})
\end{equation}
Now, we can separate the equations for each of the coordinates:
<span class="notranslate">
\begin{equation}
\frac{d^2x_1}{dt^2} = -\frac{b}{m_1}\left(\frac{dx_1}{dt}-\frac{dx_2}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2y_1}{dt^2} = -\frac{b}{m_1}\left(\frac{dy_1}{dt}-\frac{dy_2}{dt}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2x_2}{dt^2} = -\frac{b}{m_2}\left(\frac{dx_2}{dt}-\frac{dx_1}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d^2y_2}{dt^2} = -\frac{b}{m_2}\left(\frac{dy_2}{dt}-\frac{dy_1}{dt}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>
If you want to solve these equations numerically, you must break these equations into first-order equations:
<span class="notranslate">
\begin{equation}
\frac{dv_{x_1}}{dt} = -\frac{b}{m_1}\left(v_{x_1}-v_{x_2}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_1-x_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dv_{y_1}}{dt} = -\frac{b}{m_1}\left(v_{y_1}-v_{y_2}\right)-\frac{k}{m_1}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_1-y_2)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dv_{x_2}}{dt} = -\frac{b}{m_2}\left(v_{x_2}-v_{x_1}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(x_2-x_1)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dv_{y_2}}{dt} = -\frac{b}{m_2}\left(v_{y_2}-v_{y_1}\right)-\frac{k}{m_2}\left(1-\frac{l_0}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\right)(y_2-y_1)
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dx_1}{dt} = v_{x_1}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dy_1}{dt} = v_{y_1}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dx_2}{dt} = v_{x_2}
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{dy_2}{dt} = v_{y_2}
\end{equation}
</span>
To solve the equations numerically, we will use the $m_1=1 kg$, $m_2 = 2 kg$, $l_0 = 0.5 m$, $k = 10 N/m$, $b = 0.6 Ns/m$ and $x_{1_0} = 0 m$, $x_{2_0} = 0 m$, $y_{1_0} = 1 m$, $y_{2_0} = -1 m$, $v_{x1_0} = -2 m/s$, $v_{x2_0} = 1 m/s$, $v_{y1_0} = 0 m/s$, $v_{y2_0} = 0 m/s$.
```python
x01 = 0
y01= 1
x02 = 0
y02 = -1
vx01 = -2
vy01 = 0
vx02 = 1
vy02 = 0
x1= x01
y1 = y01
x2= x02
y2 = y02
vx1= vx01
vy1 = vy01
vx2= vx02
vy2 = vy02
r1 = np.array([x1,y1])
r2 = np.array([x2,y2])
k = 10
m1 = 1
m2 = 2
b = 0.6
l0 = 0.5
dt = 0.001
t = np.arange(0,5,dt)
for i in t[1:]:
dvx1dt = -b/m1*(vx1-vx2) -k/m1*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(x1-x2)
dvx2dt = -b/m2*(vx2-vx1) -k/m2*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(x2-x1)
dvy1dt = -b/m1*(vy1-vy2) -k/m1*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(y1-y2)
dvy2dt = -b/m2*(vy2-vy1) -k/m2*(1-l0/np.sqrt((x2-x1)**2+(y2-y1)**2))*(y2-y1)
dx1dt = vx1
dx2dt = vx2
dy1dt = vy1
dy2dt = vy2
x1 = x1 + dt*dx1dt
x2 = x2 + dt*dx2dt
y1 = y1 + dt*dy1dt
y2 = y2 + dt*dy2dt
vx1 = vx1 + dt*dvx1dt
vx2 = vx2 + dt*dvx2dt
vy1 = vy1 + dt*dvy1dt
vy2 = vy2 + dt*dvy2dt
r1 = np.vstack((r1,np.array([x1,y1])))
r2 = np.vstack((r2,np.array([x2,y2])))
springDampLength = np.sqrt((r1[:,0]-r2[:,0])**2+(r1[:,1]-r2[:,1])**2)
plt.figure(figsize=(8, 4))
plt.plot(t, springDampLength, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Spring length (m)')
plt.show()
plt.figure(figsize=(8, 4))
plt.plot(r1[:,0], r1[:,1], 'r.', lw=4)
plt.plot(r2[:,0], r2[:,1], 'b.', lw=4)
plt.plot((m1*r1[:,0]+m2*r2[:,0])/(m1+m2), (m1*r1[:,1]+m2*r2[:,1])/(m1+m2),'g.')
plt.xlim(-2,2)
plt.ylim(-2,2)
plt.xlabel('x(m)')
plt.ylabel('y(m)')
plt.title('Masses position')
plt.legend(('Mass1','Mass 2','Masses center of mass'))
plt.show()
```
### 9. Simple muscle model
The diagram below shows a simple muscle model. The spring in the left represents the tendinous tissues and the spring in the right represents the elastic properties of the muscle fibers. The damping is present to model the viscous properties of the muscle fibers, the element CE is the contractile element (force production) and the mass $m$ is the muscle mass.
The length $L_{MT}$ is the length of the muscle plus the tendon. In our model $L_{MT}$ is constant, but it could be a function of the joint angle.
<figure><center><figcaption><i>Figure. Simple muscle model.</i></figcaption></center></figure>
The length of the tendon will be denoted by $l_T(t)$ and the muscle length, by $l_{m}(t)$. Both lengths are related by each other by the following expression:
<span class="notranslate">
\begin{equation}
l_t(t) + l_m(t) = L_{MT}
\end{equation}
</span>
The free-body diagram of the muscle mass is depicted below.
<figure><center><figcaption><i>Figure. FBD of simple muscle model.</i></figcaption></center></figure>
The resultant force being applied in the muscle mass is:
<span class="notranslate">
$$\vec{\bf{F}} = -k_T(||\vec{\bf{r_m}}||-l_{t_0})\frac{\vec{\bf{r_m}}}{||\vec{\bf{r_m}}||} + b\frac{d(L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}})}{dt} + k_m (||L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}}||-l_{m_0})\frac{L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}}}{||L_{MT}\hat{\bf{i}} - \vec{\bf{r_{m}}}||} +\vec{\bf{F}}{\bf{_{CE}}}(t)$$
</span>
where $\vec{\bf{r_m}}$ is the muscle mass position.
Since the model is unidimensional, we can assume that the force $\vec{\bf{F}}\bf{_{CE}}(t)$ is in the x direction, so the analysis will be done only in this direction.
<span class="notranslate">
$$F = -k_T(l_t-l_{t_0}) + b\frac{d(L_{MT} - l_t)}{dt} + k_m (l_m-l_{m_0}) + F_{CE}(t) \\
F = -k_T(l_t-l_{t_0}) -b\frac{dl_t}{dt} + k_m (L_{MT}-l_t-l_{m_0}) + F_{CE}(t) \\
F = -b\frac{dl_t}{dt}-(k_T+k_m)l_t+F_{CE}(t)+k_Tl_{t_0}+k_m(L_{MT}-l_{m_0})$$
</span>
Applying the Newton's second law:
<span class="notranslate">
$$m\frac{d^2l_t}{dt^2} = -b\frac{dl_t}{dt}-(k_T+k_m)l_t+F_{CE}(t)+k_Tl_{t_0}+k_m(L_{MT}-l_{m_0})$$
</span>
To solve this equation, we must break the equation into two first-order differential equations:
<span class="notranslate">
\begin{equation}
\frac{dvt}{dt} = - \frac{b}{m}v_t - \frac{k_T+k_m}{m}l_t +\frac{F_{CE}(t)}{m} + \frac{k_T}{m}l_{t_0}+\frac{k_m}{m}(L_{MT}-l_{m_0})
\end{equation}
</span>
<span class="notranslate">
\begin{equation}
\frac{d l_t}{dt} = v_t
\end{equation}
</span>
Now, we can solve these equations by using some numerical method. To obtain the solution, we will use the damping factor of the muscle as $b = 10\,Ns/m$, the muscle mass is $m = 2 kg$, the stiffness of the tendon as $k_t=1000\,N/m$ and the elastic element of the muscle as $k_m=1500\,N/m$. The tendon-length is $L_{MT} = 0.35\,m$, and the tendon equilibrium length is $l_{t0} = 0.28\,m$ and the muscle fiber equilibrium length is $l_{m0} = 0.07\,m$. Both the tendon and the muscle fiber are at their equilibrium lengths and at rest.
Also, we will model the force of the contractile element as a Heaviside step of $90\,N$ (90 N beginning at $t=0$), but normally it is modeled as a function of $l_m$ and $v_m$ having a neural activation signal as input.
```python
m = 2
b = 10
km = 1500
kt = 1000
lt0 = 0.28
lm0 = 0.07
Lmt = 0.35
vt0 = 0
dt = 0.0001
t = np.arange(0, 10, dt)
Fce = 90
lt = lt0
vt = vt0
ltp = np.array([lt0])
lmp = np.array([lm0])
Ft = np.array([0])
for i in range(1,len(t)):
dvtdt = -b/m*vt-(kt+km)/m*lt + Fce/m + kt/m*lt0 +km/m*(Lmt-lm0)
dltdt = vt
vt = vt + dt*dvtdt
lt = lt + dt*dltdt
Ft = np.vstack((Ft,np.array(kt*(lt-lt0))))
ltp = np.vstack((ltp,np.array(lt)))
lmp = np.vstack((lmp,np.array(Lmt - lt)))
plt.figure(figsize=(8, 4))
plt.plot(t, Ft, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Tendon force (N)')
plt.show()
plt.figure(figsize=(8, 4))
plt.plot(t, ltp, lw=4)
plt.plot(t, lmp, lw=4)
plt.xlabel('t(s)')
plt.ylabel('Length (m)')
plt.legend(('Tendon length', 'Muscle fiber length'))
plt.show()
```
## Further reading
- Read the 2nd chapter of the [Ruina and Rudra's book](http://ruina.tam.cornell.edu/Book/index.html) about free-body diagrams;
- Read the 13th of the [Hibbeler's book](https://drive.google.com/file/d/1sDLluWCiBCog2C11_Iu1fjv-BtfVUxBU/view) (available in the Classroom).
## Problems
1. Solve the problems 2.3.9, 2.3.20, 11.1.6, 13.1.6 (a, b, c, d, f), 13.1.7, 13.1.10 (a, b) from Ruina and Pratap's book.
2. Check examples 13.1, 13.4 and 13.5 from Hibbeler's book.
## References
- Ruina A, Rudra P (2019) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press.
- R. C. Hibbeler (2010) [Engineering Mechanics Dynamics](https://drive.google.com/file/d/1sDLluWCiBCog2C11_Iu1fjv-BtfVUxBU/view). 12th Edition. Pearson Prentice Hall.
- Nigg & Herzog (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley.
| d46eb71f5674bf8e4cbe801e48106fa35c9fe2f3 | 235,130 | ipynb | Jupyter Notebook | notebooks/FBDParticles.ipynb | regifukuchi/BMC | 9983c94ba0aa8e3660f08ab06fb98e38d7b22f0a | [
"CC-BY-4.0"
] | 293 | 2015-01-17T12:36:30.000Z | 2022-02-13T13:13:12.000Z | notebooks/FBDParticles.ipynb | guinetn/BMC | ae2d187a5fb9da0a2711a1ed56b87a3e1da0961f | [
"CC-BY-4.0"
] | 11 | 2018-06-21T21:40:40.000Z | 2018-08-09T19:55:26.000Z | notebooks/FBDParticles.ipynb | guinetn/BMC | ae2d187a5fb9da0a2711a1ed56b87a3e1da0961f | [
"CC-BY-4.0"
] | 162 | 2015-01-16T22:54:31.000Z | 2022-02-14T21:14:43.000Z | 159.843644 | 34,004 | 0.856862 | true | 15,148 | Qwen/Qwen-72B | 1. YES
2. YES | 0.672332 | 0.826712 | 0.555825 | __label__eng_Latn | 0.654101 | 0.129696 |
<table border="0">
<tr>
<td>
</td>
<td>
</td>
</tr>
</table>
# Orthogonal Random Forest: Use Cases and Examples
Orthogonal Random Forest (ORF) combines orthogonalization,
a technique that effectively removes the confounding effect in two-stage estimation,
with generalized random forests, a flexible method for estimating treatment effect heterogeneity. Due to the orthogonalization aspect of this method, the ORF performs especially well in the presence of high-dimensional confounders. For more details, see [this paper](https://arxiv.org/abs/1806.03467).
The EconML SDK implements the following OrthoForest variants:
* ContinuousTreatmentOrthoForest: suitable for continuous treatments
* DiscreteTreatmentOrthoForest: suitable for discrete treatments
In this notebook, we show the performance of the ORF on synthetic data.
**Notebook contents:**
1. Example usage with continuous treatment synthetic data
2. Example usage with binary treatment synthetic data
3. Example usage with multiple discrete treatment synthetic data
4. Example usage with real continuous treatment observational data
```python
import econml
```
```python
# Main imports
from econml.ortho_forest import ContinuousTreatmentOrthoForest, WeightedModelWrapper, DiscreteTreatmentOrthoForest
# Helper imports
import numpy as np
from itertools import product
from sklearn.linear_model import Lasso, LassoCV, LogisticRegression, LogisticRegressionCV
#from sklearn.multioutput import MultiOutputRegressor
import matplotlib.pyplot as plt
%matplotlib inline
```
## 1. Example Usage with Continuous Treatment Synthetic Data
### 1.1. DGP
We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467). The DGP is described by the following equations:
\begin{align}
T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\
Y =& T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim& \text{Normal}(0,\, I_{n_w})\\
X \sim& \text{Uniform}(0,1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity.
For this DGP,
\begin{align}
\theta(x) = \exp(2\cdot x_1).
\end{align}
```python
# Treatment effect function
def exp_te(x):
return np.exp(2*x[0])
```
```python
# DGP constants
np.random.seed(123)
n = 1000
n_w = 30
support_size = 5
n_x = 1
# Outcome support
support_Y = np.random.choice(range(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE = np.array([exp_te(x_i) for x_i in X])
T = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
# ORF parameters and test data
# The following parameters are set according to theory
subsample_power = 0.88
subsample_ratio = ((n/np.log(n_w))**(subsample_power)) / n
lambda_reg = np.sqrt(np.log(n_w) / (10 * subsample_ratio * n))
X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x)))
```
### 1.2. Train Estimator
**Note:** The models in the final stage of the estimation (``model_T_final``, ``model_Y_final``) need to support sample weighting.
If the models of choice do not support sample weights (e.g. ``sklearn.linear_model.LassoCV``), the ``econml`` packages provides a convenient wrapper for these models ``WeightedModelWrapper`` in order to allow sample weights.
If the model of choice is a linear (regression) model such as Lasso, you should set ``sample_type="weighted"``. Otherwise, set ``sample_type="sampled"``.
```python
est = ContinuousTreatmentOrthoForest(
n_trees=200, min_leaf_size=5,
max_splits=50, subsample_ratio=2*subsample_ratio, bootstrap=False,
model_T=Lasso(alpha=lambda_reg),
model_Y=Lasso(alpha=lambda_reg),
model_T_final=WeightedModelWrapper(Lasso(alpha=lambda_reg), sample_type="weighted"),
model_Y_final=WeightedModelWrapper(Lasso(alpha=lambda_reg), sample_type="weighted"),
random_state=123)
```
```python
est.fit(Y, T, X, W)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 9.2s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 48.0s
[Parallel(n_jobs=-1)]: Done 200 out of 200 | elapsed: 1.4min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 6.5s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 45.1s
[Parallel(n_jobs=-1)]: Done 200 out of 200 | elapsed: 1.4min finished
<econml.ortho_forest.ContinuousTreatmentOrthoForest at 0x125a40550>
```python
treatment_effects = est.const_marginal_effect(X_test)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 35.7s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 3.6min finished
### 1.3. Performance Visualization
```python
y = treatment_effects[:, 0]
plt.plot(X_test, y, label='ORF estimate')
expected_te = np.array([exp_te(x_i) for x_i in X_test])
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel("Treatment Effect")
plt.xlabel("x")
plt.legend()
plt.show()
```
## 2. Example Usage with Binary Treatment Synthetic Data
### 2.1. DGP
We use the following DGP:
\begin{align}
T \sim & \text{Bernoulli}\left(f(W)\right), &\; f(W)=\sigma(\langle W, \beta\rangle + \eta), \;\eta \sim \text{Uniform}(-1, 1)\\
Y = & T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, & \; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim & \text{Normal}(0,\, I_{n_w}) & \\
X \sim & \text{Uniform}(0,\, 1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders, $\beta, \gamma$ have high sparsity and $\sigma$ is the sigmoid function.
For this DGP,
\begin{align}
\theta(x) = \exp( 2\cdot x_1 ).
\end{align}
```python
# DGP constants
np.random.seed(1234)
n = 1000
n_w = 30
support_size = 5
n_x = 1
# Outcome support
support_Y = np.random.choice(range(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE = np.array([exp_te(x_i) for x_i in X])
# Define treatment
log_odds = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
T_sigmoid = 1/(1 + np.exp(-log_odds))
T = np.array([np.random.binomial(1, p) for p in T_sigmoid])
# Define the outcome
Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
# ORF parameters and test data
# The following parameters are set according to theory
subsample_power = 0.88
subsample_ratio = ((n/np.log(n_w))**(subsample_power)) / n
lambda_reg = np.sqrt(np.log(n_w) / (10 * subsample_ratio * n))
X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x)))
```
### 2.2. Train Estimator
```python
est = DiscreteTreatmentOrthoForest(
n_trees=200, min_leaf_size=10,
max_splits=30, subsample_ratio=2*subsample_ratio, bootstrap=False,
propensity_model = LogisticRegression(C=1/(X.shape[0]*lambda_reg), penalty='l1'),
model_Y = Lasso(alpha=lambda_reg),
propensity_model_final=LogisticRegression(C=1/(X.shape[0]*lambda_reg), penalty='l1'),
model_Y_final=WeightedModelWrapper(Lasso(alpha=lambda_reg), sample_type="weighted")
)
```
```python
est.fit(Y, T, X, W)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 5.6s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 27.4s
[Parallel(n_jobs=-1)]: Done 200 out of 200 | elapsed: 44.1s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 3.2s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 22.9s
[Parallel(n_jobs=-1)]: Done 200 out of 200 | elapsed: 37.6s finished
<econml.ortho_forest.DiscreteTreatmentOrthoForest at 0x12bbc44a8>
```python
treatment_effects = est.const_marginal_effect(X_test)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 33.4s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 3.4min finished
### 2.3. Performance Visualization
```python
y = treatment_effects[:, 0]
plt.plot(X_test, y, label='ORF estimate')
expected_te = np.array([exp_te(x_i) for x_i in X_test])
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel("Treatment Effect")
plt.xlabel("x")
plt.legend()
plt.show()
```
## 3. Example Usage with Multiple Treatment Synthetic Data
### 3.1 DGP
We use the following DGP:
\begin{align}
Y = & \sum_{t=1}^{n_{\text{treatments}}} 1\{T=t\}\cdot \theta_{T}(X) + \langle W, \gamma\rangle + \epsilon, \; \epsilon \sim \text{Unif}(-1, 1), \\
\text{Pr}[T=t \mid W] \propto & \exp\{\langle W, \beta_t \rangle\}, \;\;\;\; \forall t\in \{0, 1, \ldots, n_{\text{treatments}}\}
\end{align}
where $W$ is a matrix of high-dimensional confounders, $\beta_t, \gamma$ are sparse.
For this particular example DGP we used $n_{\text{treatments}}=3$ and
\begin{align}
\theta_1(x) = & \exp( 2 x_1 ),\\
\theta_2(x) = & 3 \cdot \sigma(100\cdot (x_1 - .5)),\\
\theta_3(x) = & -2 \cdot \sigma(100\cdot (x_1 - .25)),
\end{align}
where $\sigma$ is the sigmoid function.
```python
def get_test_train_data(n, n_w, support_size, n_x, te_func, n_treatments):
# Outcome support
support_Y = np.random.choice(range(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=(support_size, n_treatments))
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE = np.array([te_func(x_i, n_treatments) for x_i in X])
log_odds = np.dot(W[:, support_T], coefs_T)
T_sigmoid = np.exp(log_odds)
T_sigmoid = T_sigmoid/np.sum(T_sigmoid, axis=1, keepdims=True)
T = np.array([np.random.choice(n_treatments, p=p) for p in T_sigmoid])
TE = np.concatenate((np.zeros((n,1)), TE), axis=1)
Y = TE[np.arange(n), T] + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x)))
return (Y, T, X, W), (X_test, np.array([te_func(x, n_treatments) for x in X_test]))
```
```python
import scipy.special
def te_func(x, n_treatments):
return [np.exp(2*x[0]), 3*scipy.special.expit(100*(x[0] - .5)) - 1, -2*scipy.special.expit(100*(x[0] - .25))]
np.random.seed(123)
(Y, T, X, W), (X_test, te_test) = get_test_train_data(1000, 3, 3, 1, te_func, 4)
```
### 3.2 Train Estimator
```python
est = DiscreteTreatmentOrthoForest(n_trees=500,
propensity_model = LogisticRegression(C=1/(X.shape[0]*lambda_reg), penalty='l1'),
model_Y = WeightedModelWrapper(Lasso(alpha=lambda_reg)))
```
```python
est.fit(Y, T, X, W)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 14.4s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 1.4min
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 3.2min
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 5.8min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 11.3s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 1.3min
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 3.2min
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 5.8min finished
<econml.ortho_forest.DiscreteTreatmentOrthoForest at 0x1267494a8>
```python
treatment_effects = est.const_marginal_effect(X_test)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 2.5min
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 15.1min finished
### 3.3 Performance Visualization
```python
y = treatment_effects
for it in range(y.shape[1]):
plt.plot(X_test, y[:, it], label='ORF estimate T={}'.format(it))
plt.plot(X_test[:, 0], te_test[:, it], '--', label='True effect T={}'.format(it))
plt.ylabel("Treatment Effect")
plt.xlabel("x")
plt.legend()
plt.show()
```
## 4. Example usage with real continuous treatment observational data
We applied our technique to Dominick’s dataset, a popular historical dataset of store-level orange juice prices and sales provided by University of Chicago Booth School of Business.
The dataset is comprised of a large number of covariates $W$, but researchers might only be interested in learning the elasticity of demand as a function of a few variables $x$ such
as income or education.
We applied the `ContinuousTreatmentOrthoForest` to estimate orange juice price elasticity
as a function of income, and our results, unveil the natural phenomenon that lower income consumers are more price-sensitive.
### 4.1. Data
```python
# A few more imports
import os
import pandas as pd
import urllib.request
from sklearn.preprocessing import StandardScaler
```
```python
# Import the data
file_name = "oj_large.csv"
if not os.path.isfile(file_name):
print("Downloading file (this might take a few seconds)...")
urllib.request.urlretrieve("https://msalicedatapublic.blob.core.windows.net/datasets/OrangeJuice/oj_large.csv", file_name)
oj_data = pd.read_csv(file_name)
oj_data.head()
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>store</th>
<th>brand</th>
<th>week</th>
<th>logmove</th>
<th>feat</th>
<th>price</th>
<th>AGE60</th>
<th>EDUC</th>
<th>ETHNIC</th>
<th>INCOME</th>
<th>HHLARGE</th>
<th>WORKWOM</th>
<th>HVAL150</th>
<th>SSTRDIST</th>
<th>SSTRVOL</th>
<th>CPDIST5</th>
<th>CPWVOL5</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2</td>
<td>tropicana</td>
<td>40</td>
<td>9.018695</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>tropicana</td>
<td>46</td>
<td>8.723231</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>tropicana</td>
<td>47</td>
<td>8.253228</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>3</th>
<td>2</td>
<td>tropicana</td>
<td>48</td>
<td>8.987197</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
<tr>
<th>4</th>
<td>2</td>
<td>tropicana</td>
<td>50</td>
<td>9.093357</td>
<td>0</td>
<td>3.87</td>
<td>0.232865</td>
<td>0.248935</td>
<td>0.11428</td>
<td>10.553205</td>
<td>0.103953</td>
<td>0.303585</td>
<td>0.463887</td>
<td>2.110122</td>
<td>1.142857</td>
<td>1.92728</td>
<td>0.376927</td>
</tr>
</tbody>
</table>
</div>
```python
# Prepare data
Y = oj_data['logmove'].values
T = np.log(oj_data["price"]).values
scaler = StandardScaler()
W1 = scaler.fit_transform(oj_data[[c for c in oj_data.columns if c not in ['price', 'logmove', 'brand', 'week', 'store']]].values)
W2 = pd.get_dummies(oj_data[['brand']]).values
W = np.concatenate([W1, W2], axis=1)
X = oj_data[['INCOME']].values
```
### 4.2. Train Estimator
```python
# Define some parameters
n_trees = 500
min_leaf_size = 50
max_splits = 20
subsample_ratio = 0.02
bootstrap = False
```
```python
est = ContinuousTreatmentOrthoForest(
n_trees=n_trees, min_leaf_size=min_leaf_size, max_splits=max_splits,
subsample_ratio=subsample_ratio, bootstrap=bootstrap,
model_T=Lasso(alpha=0.5), model_Y=Lasso(alpha=0.5),
model_T_final=WeightedModelWrapper(LassoCV(), sample_type="weighted"),
model_Y_final=WeightedModelWrapper(LassoCV(), sample_type="weighted")
)
```
```python
min_income = 10.4
max_income = 10.9
delta = (max_income - min_income) / 50
X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1, 1)
```
```python
est.fit(Y, T, X, W)
te_pred = est.const_marginal_effect(X_test)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 2.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 6.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 12.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 21.8s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.0min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.4min finished
### 4.3. Performance Visualization
```python
# Plot Oranje Juice elasticity as a function of income
plt.plot(np.ndarray.flatten(X_test), te_pred[:, 0], label="OJ Elasticity")
plt.xlabel(r'$\log$(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.xlim(10.45, 10.9)
plt.ylim(-2.9, -2.3)
plt.legend()
plt.title("Orange Juice Elasticity vs Income")
plt.show()
```
### 4.4 Bootstrap Confidence Intervals
```python
from econml.bootstrap import BootstrapEstimator
boot_est = BootstrapEstimator(ContinuousTreatmentOrthoForest(
n_trees=n_trees, min_leaf_size=min_leaf_size, max_splits=max_splits,
subsample_ratio=subsample_ratio, bootstrap=bootstrap,
model_T=Lasso(alpha=0.5), model_Y=Lasso(alpha=0.5),
model_T_final=WeightedModelWrapper(LassoCV(), sample_type="weighted"),
model_Y_final=WeightedModelWrapper(LassoCV(), sample_type="weighted")
), n_bootstrap_samples=10)
```
```python
boot_est.fit(Y, T, X, W)
te_pred_interval = boot_est.const_marginal_effect_interval(X_test, lower=1, upper=99)
```
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 2.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 6.6s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 12.9s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 21.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.6s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 20.0s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.8s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 11.1s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.9s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 11.0s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 20.2s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.4s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 11.0s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.8s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 11.3s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 20.3s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 11.1s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 20.8s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.6s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 11.1s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 20.5s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.4s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.7s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.4s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.9s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.6s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.6s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.9s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.9s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 20.4s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.8s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.7s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.6s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.9s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.8s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 10.9s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 19.8s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 0.8s
[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 4.8s
[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 11.7s
[Parallel(n_jobs=-1)]: Done 500 out of 500 | elapsed: 20.7s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.1min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.4min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.0min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.3min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.0min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.3min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.0min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.3min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.0min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.3min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.0min
/anaconda/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py:706: UserWarning: A worker stopped while some jobs were given to the executor. This can be caused by a too short worker timeout or by a memory leak.
"timeout or by a memory leak.", UserWarning
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 34.8min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.1min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.6min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.2min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.7min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.1min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.6min finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 1.1min
[Parallel(n_jobs=-1)]: Done 51 out of 51 | elapsed: 3.5min finished
```python
plt.plot(np.ndarray.flatten(X_test), te_pred[:, 0], label="OJ Elasticity")
plt.fill_between(np.ndarray.flatten(X_test),
te_pred_interval[0][:, 0],
te_pred_interval[1][:, 0], alpha=.5, label="1-99% CI")
plt.xlabel(r'$\log$(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.title("Orange Juice Elasticity vs Income")
plt.legend()
plt.show()
```
| 32a8a1de360d6834234d2631e43ce9ab7fa6b27b | 157,964 | ipynb | Jupyter Notebook | notebooks/Orthogonal Random Forest Examples.ipynb | elissyah/econml | df89a5f5b7ba6089a38ed42e1b9af2a0bf2e0b1e | [
"MIT"
] | null | null | null | notebooks/Orthogonal Random Forest Examples.ipynb | elissyah/econml | df89a5f5b7ba6089a38ed42e1b9af2a0bf2e0b1e | [
"MIT"
] | null | null | null | notebooks/Orthogonal Random Forest Examples.ipynb | elissyah/econml | df89a5f5b7ba6089a38ed42e1b9af2a0bf2e0b1e | [
"MIT"
] | null | null | null | 126.776886 | 37,952 | 0.83438 | true | 9,879 | Qwen/Qwen-72B | 1. YES
2. YES | 0.763484 | 0.689306 | 0.526274 | __label__eng_Latn | 0.498999 | 0.061039 |
# Extracting Information from Audio Signals
## Measuring amplitude (Session 1.9) - Kadenze
### George Tzanetakis, University of Victoria
In this notebook we will explore different ways of measuring the amplitude of a sinusoidal signal. The use of the inner product to estimate the amplitude of a sinusoids in the presence of noise and other sinusoids will also be covered. As usual we start by defining a sinusoid generation function.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import IPython.display as ipd
# generate a discrete time sinusoidal signal with a specified frequency and duration
def sinusoid(freq=440.0, dur=1.0, srate=44100.0, amp=1.0, phase = 0.0):
t = np.linspace(0,dur,int(srate*dur))
data = amp * np.sin(2*np.pi*freq *t+phase)
return data
```
One way of measuring the amplitude of an audio signal is by finding the maximum value. As long the array of samples contains a few cycles of a sinusoidal signal this estimation works well.
```python
def peak_amplitude(data):
return np.max(data)
```
Let's check it out:
```python
freq = 550
data = sinusoid(freq, 0.5, amp =4.0)
print('Peak amplitude = %2.2f ' % peak_amplitude(data))
```
Peak amplitude = 4.00
Now let's define a function that returns the Root of the Mean Squarred (RMS) amplitude. For an array containing a few cycles of a sinusoid signal we can estimate the RMS amplitude as follows:
```python
def rms_amplitude(data):
rms_sum = np.sum(np.multiply(data,data))
rms_sum /= len(data)
return np.sqrt(rms_sum) * np.sqrt(2.0)
```
Let's check out that this method of estimation also works:
```python
freq = 550
data = sinusoid(freq, 0.5, amp =8.0)
print('Rms amplitude = %2.2f' % rms_amplitude(data))
```
Rms amplitude = 8.00
Now let's look at estimating the amplitude based on taking the dot product of two sinusoids.
Unlike the peak and RMS methods of estimating amplitude this method requires knowledge of the
frequency (and possibly phase) of the underlying sinusoid. However, it has the advantage that it is much more robust when there is interferring noise or other sinusoidal signals with other frequencies.
```python
def dot_amplitude(data1, data2):
dot_product = np.dot(data1, data2)
return 2 * (dot_product / len(data1))
```
First lets confirm that this amplitude estimation works for a single sinusoid
```python
data = sinusoid(300, 0.5, amp =4.0)
basis = sinusoid(300, 0.5, amp = 1)
print('Dot product amplitude = %2.2f' % dot_amplitude(data, basis))
plt.figure()
plt.plot(data[1:1000])
plt.plot(basis[1:1000])
```
Now lets add some noise to our signal. Notice that the dot-amplitude estimation works reliably, the RMS still does ok but the peak amplitude gets really affected by the added noise. Notice that the dot product amplitude estimation requires knowledge of the frequency to create the appropriate basis signal
```python
noise = np.random.normal(0, 1.0, len(data))
mix = data + noise
plt.figure()
plt.plot(data[1:1000])
plt.plot(noise[1:1000])
plt.plot(mix[1:1000])
plt.plot(basis[1:1000])
print('Dot product amplitude = %2.2f' % dot_amplitude(mix, basis))
print('Peak amplitude = %2.2f' % peak_amplitude(mix))
print('RMS amplitude = %2.2f' % rms_amplitude(mix))
```
```python
data_other = sinusoid(500, 0.5, amp = 3.0)
mix = data + data_other
plt.figure()
#plt.plot(data_other[1:1000])
#plt.plot(data[1:1000])
plt.plot(mix[1:1000])
plt.plot(basis[1:1000])
print('Dot product amplitude = %2.2f' % dot_amplitude(mix, basis))
print('Peak amplitude = %2.2f' % peak_amplitude(mix))
print('RMS amplitude = %2.2f' % rms_amplitude(mix))
```
To summarize, if we know the frequency of the sinusoid we are interested in, we can use the inner product with a sinusoid of the same frequency and phase as a robust way to estimate the amplitude in the presence of interferring noise and/or sinusoidal signals of different frequencies. If we don't know the phase we can use an iterative approach of trying every possible phase and selecting the one that gives the highest amplitude estimate - the brute force approach we talked about in a previous notebook.
However, there is a simpler approach to estimating both the amplitude and the phase of a sinusoidal signal of known frequency.
It is based on the following identity:
\begin{equation} a \sin(x) + b \cos(x) = R \sin (x + \theta)
\end{equation}
where
$ R = \sqrt{(a^2 + b^2)} \;\; \text{and} \;\; \theta = \tan^{-1} \frac{b}{a} $
So basically we can represent a sinusoidal signal of a particular amplitude and phase as a weighted sum (with appropriate weights $ a \;\; \text{and}\;\; b$ of a sine signal and a cosine signal. So to estimate the amplitude and phase of a sinusoid of known frequency we can take the inner product with a pair of sine and cosine signals of the same frequncy. Let's see how this would work. We will see later that these pairs of sines and cosine signals are what are called basis functions of the Discrete Fourier Transform.
```python
srate = 8000
amplitude = 3.0
k = 1000
phase = k * (2 * np.pi / srate)
print('Original amplitude = %2.2f' % amplitude)
print('Original phase = %2.2f' % phase)
data = sinusoid(300, 0.5, amp =amplitude, phase = phase)
plt.plot(data[1:1000])
basis_sin = sinusoid(300, 0.5, amp = 1)
basis_cos = sinusoid(300, 0.5, amp = 1, phase = np.pi/2)
a = dot_amplitude(data, basis_sin)
b = dot_amplitude(data, basis_cos)
estimated_phase = np.arctan(b/a)
estimated_magnitude = np.sqrt(a*a+b*b)
print('Estimated Magnitude = %2.2f' % estimated_magnitude)
print('Estimated Phase = %2.2f' % estimated_phase)
```
```python
```
```python
```
```python
```
| d5388e3e11af7625457d90f580ede4580157c601 | 144,311 | ipynb | Jupyter Notebook | course1/session1/kadenze_mir_c1_s1_9_measuring_amplitude.ipynb | Achilleasein/mir_program_kadenze | adc204f82dff565fe615e20681b84c94c2cff10d | [
"CC0-1.0"
] | 19 | 2021-03-16T00:00:29.000Z | 2022-02-01T05:03:45.000Z | course1/session1/kadenze_mir_c1_s1_9_measuring_amplitude.ipynb | femiogunbode/mir_program_kadenze | 7c3087acf1623b3b8d9742f1d50cd5dd53135020 | [
"CC0-1.0"
] | null | null | null | course1/session1/kadenze_mir_c1_s1_9_measuring_amplitude.ipynb | femiogunbode/mir_program_kadenze | 7c3087acf1623b3b8d9742f1d50cd5dd53135020 | [
"CC0-1.0"
] | 9 | 2021-03-16T03:07:45.000Z | 2022-02-12T04:29:03.000Z | 365.344304 | 39,396 | 0.93751 | true | 1,570 | Qwen/Qwen-72B | 1. YES
2. YES | 0.951142 | 0.890294 | 0.846796 | __label__eng_Latn | 0.966679 | 0.805725 |
[](https://pythonista.io)
# Introducción a ```sympy```.
El proyecto [sympy](https://www.sympy.org/en/index.html) comprende una biblioteca de herramientas que permiten realziar operaciones de matemáticas simbólicas.
En este sentido, es posible utilizar algunos de sus componentes para realizar operaciones que en lugar de regresar valores numéricos regresan representaciones simbólicas.
```python
!pip install sympy
```
```python
import sympy
```
## La función *sympy.symbols()*.
Esta función permite crear objetos de la clase *sympy.core.symbol.Symbol* que pueden ser utulizadso como símbolos algebraicos.
```
sympy.symbols('<símbolo>')
```
```python
x = sympy.symbols('x')
```
```python
type(x)
```
```python
x + 1
```
```python
2/3 + x
```
```python
x ** 2
```
```python
x ** (1/2)
```
## La función *sympy.Rational()*
```python
sympy.Rational(2, 3)
```
```python
```
```python
x, y, z = sympy.symbols("x, y, z")
```
```python
f = sympy.Function("f")
```
```python
f(x)
```
```python
f = sympy.Function('f')(x)
```
```python
f
```
```python
expr = x**4 + x**3 + x**2 + x + 1
```
```python
expr
```
```python
expr.diff()
```
```python
expr.integrate()
```
```python
expresion = x + sympy.sin(x)
```
```python
expresion
```
```python
expresion.integrate(x, x)
```
```python
expresion.diff(x, x, x)
```
```python
expr.diff(x)
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"></a><br />Esta obra está bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Licencia Creative Commons Atribución 4.0 Internacional</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2019.</p>
| b269f0d194bfce9315d3009937ce9024cb20fa14 | 5,798 | ipynb | Jupyter Notebook | 15_introduccion_a_sympy.ipynb | PythonistaMX/py301 | 8831a0a0864d69b3ac6dc1a547c1e5066a124cde | [
"MIT"
] | 7 | 2019-05-14T18:23:29.000Z | 2021-12-24T13:34:16.000Z | 15_introduccion_a_sympy.ipynb | PythonistaMX/py301 | 8831a0a0864d69b3ac6dc1a547c1e5066a124cde | [
"MIT"
] | null | null | null | 15_introduccion_a_sympy.ipynb | PythonistaMX/py301 | 8831a0a0864d69b3ac6dc1a547c1e5066a124cde | [
"MIT"
] | 8 | 2018-12-25T23:09:33.000Z | 2021-09-13T04:49:52.000Z | 19.924399 | 406 | 0.481545 | true | 546 | Qwen/Qwen-72B | 1. YES
2. YES | 0.943348 | 0.868827 | 0.819606 | __label__spa_Latn | 0.495871 | 0.742551 |
```python
from __future__ import division, print_function
%matplotlib inline
```
```python
import sympy
from sympy import Matrix, eye, symbols, sin, cos, zeros
from sympy.physics.mechanics import *
from IPython.display import display
sympy.init_printing(use_latex='mathjax')
```
# Quaternion Math Functions
```python
def expq(n):
n *= 0.5
nNorm = n.norm()
qn = Matrix([cos(nNorm),n/nNorm*sin(nNorm)])
return qn
def quat2dcm(q):
"""
Convert quaternion to DCM
"""
# Extract components
w = q[0]
x = q[1]
y = q[2]
z = q[3]
# Reduce repeated calculations
ww = w*w
xx = x*x
yy = y*y
zz = z*z
wx = w*x
wy = w*y
wz = w*z
xy = x*y
xz = x*z
yz = y*z
# Build Direction Cosine Matrix (DCM)
dcm = Matrix([
[ww + xx - yy - zz, 2*(xy - wz), 2*(xz + wy)],
[ 2*(xy + wz), ww - xx + yy - zz, 2*(yz - wx)],
[ 2*(xz - wy), 2*(yz + wx), ww - xx - yy + zz]
])
return dcm
def dcm2quat(dcm):
"""
Determine quaternion corresponding to dcm using
the stanley method.
Flips sign to always return shortest path quaterion
so w >= 0
Converts the 3x3 DCM into the quaterion where the
first component is the real part
"""
tr = Matrix.trace(dcm)
w = 0.25*(1+tr)
x = 0.25*(1+2*dcm[0,0]-tr)
y = 0.25*(1+2*dcm[1,1]-tr)
z = 0.25*(1+2*dcm[2,2]-tr)
#kMax = np.argmax([w,x,y,z])
kMax = 0
if kMax == 0:
w = sqrt(w)
x = 0.25*(dcm[1,2]-dcm[2,1])/w
y = 0.25*(dcm[2,0]-dcm[0,2])/w
z = 0.25*(dcm[0,1]-dcm[1,0])/w
elif kMax == 1:
x = sqrt(x)
w = 0.25*(dcm[1,2]-dcm[2,1])/x
if w<0:
x = -x
w = -w
y = 0.25*(dcm[0,1]+dcm[1,0])/x
z = 0.25*(dcm[2,0]+dcm[0,2])/x
elif kMax == 2:
y = sqrt(y)
w = 0.25*(dcm[2,0]-dcm[0,2])/y
if w<0:
y = -y
w = -w
x = 0.25*(dcm[0,1]+dcm[1,0])/y
z = 0.25*(dcm[1,2]+dcm[2,1])/y
elif kMax == 3:
z = sqrt(z)
w = 0.25*(dcm[0,1]-dcm[1,0])/z
if w<0:
z = -z
w = -w
x = 0.25*(dcm[2,0]+dcm[0,2])/z
y = 0.25*(dcm[1,2]+dcm[2,1])/z
q = Matrix([w,x,y,z])
return q
def skew3(v):
vx,vy,vz = v
out = Matrix([[ 0, -vz, vy],
[ vz, 0, -vx],
[-vy, vx, 0]])
return out
def skew4Left(v):
if len(v)==3:
v = Matrix.vstack(zeros(1),v)
w,x,y,z = v
out = Matrix([
[w, -x, -y, -z],
[x, w, -z, y],
[y, z, w, -x],
[z, -y, x, w],
])
return out
def skew4Right(v):
if len(v)==3:
v = Matrix.vstack(zeros(1),v)
w,x,y,z = v
out = Matrix([
[w, -x, -y, -z],
[x, w, z, -y],
[y, -z, w, x],
[z, y, -x, w],
])
return out
def quatConj(q):
q_out = Matrix(q[:])
q_out = q_out.T*sympy.diag(1,-1,-1,-1)
q_out = q_out.T
return q_out
def qRot(q,v):
qPrime = quatConj(q)
v = Matrix.vstack(zeros(1),v)
vout = skew4Left(q)*skew4Right(qPrime)*v
return Matrix(vout[1:])
```
# Inertia Tensor
```python
def build_inertia_tensor(Ivec):
Ixx,Iyy,Izz,Ixy,Ixz,Iyz = Ivec
Imat = zeros(3,3)
Imat[0,0] = Ixx
Imat[0,1] = Ixy
Imat[0,2] = Ixz
Imat[1,0] = Ixy
Imat[1,1] = Iyy
Imat[1,2] = Iyz
Imat[2,0] = Ixz
Imat[2,1] = Iyz
Imat[2,2] = Izz
return Imat
```
# 6DOF EOM using general body frame Force and Moment
Define Sympy Symbols
```python
rx,ry,rz = symbols('r_x r_y r_z')
vx,vy,vz = symbols('v_x v_y v_z')
qw, qx, qy, qz = symbols('q_w, q_x, q_y, q_z')
wx, wy, wz = symbols('w_x, w_y, w_z')
Ixx, Iyy, Izz, Ixy, Ixz, Iyz = symbols('I_xx, I_yy, I_zz, I_xy, I_xz, I_yz')
Mx, My, Mz = symbols('M_x, M_y, M_z')
Fbx, Fby, Fbz = symbols('F_x, F_y, F_z')
m,g = symbols('m g')
L = symbols('L') # Quadcopter arm length
```
Setup Vectors
```python
r_BwrtLexpL = Matrix([rx,ry,rz])
v_BwrtLexpL = Matrix([vx,vy,vz])
q_toLfromB = Matrix([qw,qx,qy,qz])
wb = Matrix([wx,wy,wz])
Fb = Matrix([Fbx,Fby,Fbz])
Mb = Matrix([Mx,My,Mz])
```
## Build Inertia Tensor
```python
Ixy = 0
Ixz = 0
Iyz = 0
Ivec = Ixx,Iyy,Izz,Ixy,Ixz,Iyz
inertiaTensor = build_inertia_tensor(Ivec)
display(inertiaTensor)
display(inertiaTensor.inv())
```
$\displaystyle \left[\begin{matrix}I_{xx} & 0 & 0\\0 & I_{yy} & 0\\0 & 0 & I_{zz}\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}\frac{1}{I_{xx}} & 0 & 0\\0 & \frac{1}{I_{yy}} & 0\\0 & 0 & \frac{1}{I_{zz}}\end{matrix}\right]$
Gravity Vector in local frame (NED)
```python
g_expL = Matrix([0,0,g])
```
## Body Forces & Moments
```python
# Motor speeds
wm1, wm2, wm3, wm4 = symbols('w_m1, w_m2, w_m3, w_m4')
# Motor force and moment coefficients
kF, kM = symbols('k_F, k_M')
# Motor Thrust and Torque
Fm1 = kF*wm1**2
Fm2 = kF*wm2**2
Fm3 = kF*wm3**2
Fm4 = kF*wm4**2
Mm1 = kM*wm1**2
Mm2 = kM*wm2**2
Mm3 = kM*wm3**2
Mm4 = kM*wm4**2
# Calc Body Forces due to motors
Fb[0] = 0
Fb[1] = 0
Fb[2] = -(Fm1+Fm2+Fm3+Fm4)
# Calc Body Moments dut to motors
Mb[0] = L*(Fm4-Fm2)
Mb[1] = L*(Fm1-Fm3)
Mb[2] = Mm2 + Mm4 - Mm1 -Mm3
print('Fb')
display(Fb)
print('Mb')
display(Mb)
```
Fb
$\displaystyle \left[\begin{matrix}0\\0\\- k_{F} w_{m1}^{2} - k_{F} w_{m2}^{2} - k_{F} w_{m3}^{2} - k_{F} w_{m4}^{2}\end{matrix}\right]$
Mb
$\displaystyle \left[\begin{matrix}L \left(- k_{F} w_{m2}^{2} + k_{F} w_{m4}^{2}\right)\\L \left(k_{F} w_{m1}^{2} - k_{F} w_{m3}^{2}\right)\\- k_{M} w_{m1}^{2} + k_{M} w_{m2}^{2} - k_{M} w_{m3}^{2} + k_{M} w_{m4}^{2}\end{matrix}\right]$
## Sum of forces
```python
a_BwrtLexpL = 1/m*qRot(q_toLfromB,Fb) + g_expL
print('a_BwrtLexpL')
display(a_BwrtLexpL)
```
a_BwrtLexpL
$\displaystyle \left[\begin{matrix}\frac{\left(2 q_{w} q_{y} + 2 q_{x} q_{z}\right) \left(- k_{F} w_{m1}^{2} - k_{F} w_{m2}^{2} - k_{F} w_{m3}^{2} - k_{F} w_{m4}^{2}\right)}{m}\\\frac{\left(- 2 q_{w} q_{x} + 2 q_{y} q_{z}\right) \left(- k_{F} w_{m1}^{2} - k_{F} w_{m2}^{2} - k_{F} w_{m3}^{2} - k_{F} w_{m4}^{2}\right)}{m}\\g + \frac{\left(q_{w}^{2} - q_{x}^{2} - q_{y}^{2} + q_{z}^{2}\right) \left(- k_{F} w_{m1}^{2} - k_{F} w_{m2}^{2} - k_{F} w_{m3}^{2} - k_{F} w_{m4}^{2}\right)}{m}\end{matrix}\right]$
## Sum of moments
```python
wbDot = inertiaTensor.inv() * (-skew3(wb)*inertiaTensor*wb + Mb)
print('wbDot')
display(wbDot)
```
wbDot
$\displaystyle \left[\begin{matrix}\frac{I_{yy} w_{y} w_{z} - I_{zz} w_{y} w_{z} + L \left(- k_{F} w_{m2}^{2} + k_{F} w_{m4}^{2}\right)}{I_{xx}}\\\frac{- I_{xx} w_{x} w_{z} + I_{zz} w_{x} w_{z} + L \left(k_{F} w_{m1}^{2} - k_{F} w_{m3}^{2}\right)}{I_{yy}}\\\frac{I_{xx} w_{x} w_{y} - I_{yy} w_{x} w_{y} - k_{M} w_{m1}^{2} + k_{M} w_{m2}^{2} - k_{M} w_{m3}^{2} + k_{M} w_{m4}^{2}}{I_{zz}}\end{matrix}\right]$
## Quaternion Kinematic Equation
```python
qDot = 0.5*skew4Left(q_toLfromB) * Matrix.vstack(zeros(1),wb)
# 0.5*skew4Left(q_toLfromB)[:,1:] * wb
#display(skew4Left(q_toLfromB))
#display(skew4Left(q_toLfromB)[:,1:])
#display(0.5*skew4Left(q_toLfromB)[:,1:] * wb)
print('qDot')
display(qDot)
```
qDot
$\displaystyle \left[\begin{matrix}- 0.5 q_{x} w_{x} - 0.5 q_{y} w_{y} - 0.5 q_{z} w_{z}\\0.5 q_{w} w_{x} + 0.5 q_{y} w_{z} - 0.5 q_{z} w_{y}\\0.5 q_{w} w_{y} - 0.5 q_{x} w_{z} + 0.5 q_{z} w_{x}\\0.5 q_{w} w_{z} + 0.5 q_{x} w_{y} - 0.5 q_{y} w_{x}\end{matrix}\right]$
## State and dstate vectors
```python
state = Matrix([r_BwrtLexpL, v_BwrtLexpL, q_toLfromB, wb])
dstate = Matrix([
v_BwrtLexpL,
a_BwrtLexpL,
qDot,
wbDot
])
display(state.T)
display(dstate)
mprint(dstate)
```
$\displaystyle \left[\begin{array}{ccccccccccccc}r_{x} & r_{y} & r_{z} & v_{x} & v_{y} & v_{z} & q_{w} & q_{x} & q_{y} & q_{z} & w_{x} & w_{y} & w_{z}\end{array}\right]$
$\displaystyle \left[\begin{matrix}v_{x}\\v_{y}\\v_{z}\\\frac{\left(2 q_{w} q_{y} + 2 q_{x} q_{z}\right) \left(- k_{F} w_{m1}^{2} - k_{F} w_{m2}^{2} - k_{F} w_{m3}^{2} - k_{F} w_{m4}^{2}\right)}{m}\\\frac{\left(- 2 q_{w} q_{x} + 2 q_{y} q_{z}\right) \left(- k_{F} w_{m1}^{2} - k_{F} w_{m2}^{2} - k_{F} w_{m3}^{2} - k_{F} w_{m4}^{2}\right)}{m}\\g + \frac{\left(q_{w}^{2} - q_{x}^{2} - q_{y}^{2} + q_{z}^{2}\right) \left(- k_{F} w_{m1}^{2} - k_{F} w_{m2}^{2} - k_{F} w_{m3}^{2} - k_{F} w_{m4}^{2}\right)}{m}\\- 0.5 q_{x} w_{x} - 0.5 q_{y} w_{y} - 0.5 q_{z} w_{z}\\0.5 q_{w} w_{x} + 0.5 q_{y} w_{z} - 0.5 q_{z} w_{y}\\0.5 q_{w} w_{y} - 0.5 q_{x} w_{z} + 0.5 q_{z} w_{x}\\0.5 q_{w} w_{z} + 0.5 q_{x} w_{y} - 0.5 q_{y} w_{x}\\\frac{I_{yy} w_{y} w_{z} - I_{zz} w_{y} w_{z} + L \left(- k_{F} w_{m2}^{2} + k_{F} w_{m4}^{2}\right)}{I_{xx}}\\\frac{- I_{xx} w_{x} w_{z} + I_{zz} w_{x} w_{z} + L \left(k_{F} w_{m1}^{2} - k_{F} w_{m3}^{2}\right)}{I_{yy}}\\\frac{I_{xx} w_{x} w_{y} - I_{yy} w_{x} w_{y} - k_{M} w_{m1}^{2} + k_{M} w_{m2}^{2} - k_{M} w_{m3}^{2} + k_{M} w_{m4}^{2}}{I_{zz}}\end{matrix}\right]$
Matrix([
[ v_x],
[ v_y],
[ v_z],
[ (2*q_w*q_y + 2*q_x*q_z)*(-k_F*w_m1**2 - k_F*w_m2**2 - k_F*w_m3**2 - k_F*w_m4**2)/m],
[ (-2*q_w*q_x + 2*q_y*q_z)*(-k_F*w_m1**2 - k_F*w_m2**2 - k_F*w_m3**2 - k_F*w_m4**2)/m],
[g + (q_w**2 - q_x**2 - q_y**2 + q_z**2)*(-k_F*w_m1**2 - k_F*w_m2**2 - k_F*w_m3**2 - k_F*w_m4**2)/m],
[ -0.5*q_x*w_x - 0.5*q_y*w_y - 0.5*q_z*w_z],
[ 0.5*q_w*w_x + 0.5*q_y*w_z - 0.5*q_z*w_y],
[ 0.5*q_w*w_y - 0.5*q_x*w_z + 0.5*q_z*w_x],
[ 0.5*q_w*w_z + 0.5*q_x*w_y - 0.5*q_y*w_x],
[ (I_yy*w_y*w_z - I_zz*w_y*w_z + L*(-k_F*w_m2**2 + k_F*w_m4**2))/I_xx],
[ (-I_xx*w_x*w_z + I_zz*w_x*w_z + L*(k_F*w_m1**2 - k_F*w_m3**2))/I_yy],
[ (I_xx*w_x*w_y - I_yy*w_x*w_y - k_M*w_m1**2 + k_M*w_m2**2 - k_M*w_m3**2 + k_M*w_m4**2)/I_zz]])
```python
from sympy.physics.mechanics import *
from sympy import sin, cos, symbols, Matrix, solve
```
```python
# Inertial Reference Frame
N = ReferenceFrame('N')
# Define world corredinate origin
O = Point('O')
O.set_vel(N, 0.0)
```
```python
```
```python
```
```python
```
| 6395ace740a1be84ac115a48b002d591d72ed911 | 26,564 | ipynb | Jupyter Notebook | modelDeriv/quadcopterMath.ipynb | lee-iv/sim-quadcopter | 69d17d09a70d11f724abceea952f17339054158f | [
"MIT"
] | 1 | 2020-07-30T00:16:29.000Z | 2020-07-30T00:16:29.000Z | modelDeriv/quadcopterMath.ipynb | lee-iv/sim-quadcopter | 69d17d09a70d11f724abceea952f17339054158f | [
"MIT"
] | null | null | null | modelDeriv/quadcopterMath.ipynb | lee-iv/sim-quadcopter | 69d17d09a70d11f724abceea952f17339054158f | [
"MIT"
] | null | null | null | 34.409326 | 1,153 | 0.322165 | true | 4,734 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.766294 | 0.693228 | __label__yue_Hant | 0.223584 | 0.448932 |
Introduction
-------------------
Brzezniak (2000) is a great book because it approaches conditional expectation through a sequence of exercises, which is what we are trying to do here. The main difference is that Brzezniak takes a more abstract measure-theoretic approach to the same problems. Note that you *do* need to grasp the measure-theoretic to move into more advanced areas in stochastic processes, but for what we have covered so far, working the same problems in his text using our methods is illuminating. It always helps to have more than one way to solve *any* problem. I urge you to get a copy of his book or at least look at some pages on Google Books. I have numbered the examples corresponding to the book.
Examples
-------------
This is Example 2.1 from Brzezniak:
> Three coins, 10p, 20p and 50p are tossed. The values of those coins that land heads up are added to work out the total amount. What is the expected total amount given that two coins have landed heads up?
In this case we have we want to compute $\mathbb{E}(\xi|\eta)$ where
$$ \xi = 10 X_{10} + 20 X_{20} +50 X_{50} $$
where $X_i \in \{ 0,1\} $. This represents the sum total value of the heads-up coins. The $\eta$ represents the fact that only two of the three coins are heads-up. Note
$$\eta = X_{10} X_{20} (1-X_{50})+ (1-X_{10}) X_{20} X_{50}+ X_{10} (1-X_{20}) X_{50} $$
is a function that is only non-zero when two of the three coins is heads. Each triple term catches each of these three possibilities. For example, the first term is when the 10p and 20p are heads up and the 50p is heads down.
To compute the conditional expectation, we want to find a function $h$ of $\eta$ that minimizes the MSE
$$ \sum_{X\in\{0,1\}^3} \frac{1}{8} (\xi - h( \eta ))^2 $$
where the sum is taken over all possible triples of outcomes for $ \{X_{10} , X_{20} ,X_{50}\}$ and the $\frac{1}{8} = \frac{1}{2^3} $ since each coin has a $\frac{1}{2}$ chance of coming up heads.
Now, the question boils down to what function $h(\eta)$ should we try? Note that $\eta \in \{0,1\}$ so $h$ takes on only two values. Thus, we only have to try $h(\eta)=\alpha \eta$ and find $\alpha$. Writing this out gives,
$$ \sum_{X\in\{0,1\}^3} \frac{1}{8} (\xi - \alpha( \eta ))^2 $$
which boils down to solving for $\alpha$,
$$\langle \xi , \eta \rangle = \alpha \langle \eta,\eta \rangle$$
where
$$ \langle \xi , \eta \rangle =\sum_{X\in\{0,1\}^3} \frac{1}{8} (\xi \eta ) $$
This is tedious and a perfect job for `sympy`.
```
import sympy as S
eta = S.Symbol('eta')
xi = S.Symbol('xi')
X10 = S.Symbol('X10')
X20 = S.Symbol('X20')
X50 = S.Symbol('X50')
eta = X10 * X20 *(1-X50 )+ X10 * (1-X20) *(X50 )+ (1-X10) * X20 *(X50 )
xi = 10*X10 +20* X20+ 50*X50
num=S.summation(xi*eta,(X10,0,1),(X20,0,1),(X50,0,1))
den=S.summation(eta*eta,(X10,0,1),(X20,0,1),(X50,0,1))
alpha=num/den
print alpha
```
160/3
This means that
$$ \mathbb{E}(\xi|\eta) = \frac{160}{3} \eta $$
which we can check with a quick simulation
```
import numpy as np
from numpy import array
x=np.random.randint(0,2,(3,5000))
print (160/3.,np.dot(x[:,x.sum(axis=0)==2].T,array([10,20,50])).mean())
```
(53.333333333333336, 53.243528790279981)
Example
--------
This is example 2.2:
> Three coins, 10p, 20p and 50p are tossed as before. What is the conditional expectation of the total amount shown by the three coins given the total amount shown by the 10p and 20p coins only?
For this problem,
$$\eta = 30 X_{10} X_{20} + 20 (1-X_{10}) X_{20} + 10 X_{10} (1-X_{20}) $$
which takes on three values (10,20,30) and only considers the 10p and 20p coins. Here, we'll look for affine functions, $h(\eta) = a \eta + b $.
```
from sympy.abc import a,b
eta = X10 * X20 * 30 + X10 * (1-X20) *(10 )+ (1-X10) * X20 *(20 )
h = a*eta + b
J=S.summation((xi - h)**2 * S.Rational(1,8),(X10,0,1),(X20,0,1),(X50,0,1))
sol=S.solve( [S.diff(J,a), S.diff(J,b)],(a,b) )
print sol
```
{b: 25, a: 1}
This means that
$$ \mathbb{E}(\xi|\eta) = 25+ \eta $$
since $\eta$ takes on only four values, $\{0,10,20,30\}$, we can write this out as
$$ \mathbb{E}(\xi|\eta=0) = 25 $$
$$ \mathbb{E}(\xi|\eta=10) = 35 $$
$$ \mathbb{E}(\xi|\eta=20) = 45 $$
$$ \mathbb{E}(\xi|\eta=30) = 55 $$
The following is a quick simulation to demonstrate this.
```
x=np.random.randint(0,2,(3,5000)) # random samples for 3 coins tossed
eta=np.dot(x[:2,:].T,array([10,20])) # sum of 10p and 20p
print np.dot(x[:,eta==0].T,array([10,20,50])).mean() # E(xi|eta=0)
print np.dot(x[:,eta==10].T,array([10,20,50])).mean()# E(xi|eta=10)
print np.dot(x[:,eta==20].T,array([10,20,50])).mean()# E(xi|eta=20)
print np.dot(x[:,eta==30].T,array([10,20,50])).mean()# E(xi|eta=30)
```
25.1587301587
34.3410852713
43.7323358271
56.6238973536
Example
-----------
This is Example 2.3
Note that "Lebesgue measure" on $[0,1]$ just means uniformly distributed on that interval. Also, note the the `Piecewise` object in `sympy` is not complete at this point in its development, so we'll have to work around that in the following.
```
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
```
x=S.Symbol('x')
c=S.Symbol('c')
xi = 2*x**2
eta=S.Piecewise((1,S.And(S.Gt(x,0),S.Lt(x,S.Rational(1,3)))), # 0 < x < 1/3
(2,S.And(S.Gt(x,S.Rational(1,3)),S.Lt(x,S.Rational(2,3)))), # 1/3 < x < 2/3,
(0,S.And(S.Gt(x,S.Rational(2,3)),S.Lt(x,1))),
)
h = a + b*eta + c*eta**2
J=S.integrate((xi - h)**2 ,(x,0,1))
sol=S.solve( [S.diff(J,a),
S.diff(J,b),
S.diff(J,c),
],
(a,b,c) )
print sol
print S.piecewise_fold(h.subs(sol))
```
{c: 8/9, b: -20/9, a: 38/27}
Piecewise((2/27, And(x < 1/3, x > 0)), (14/27, And(x < 2/3, x > 1/3)), (38/27, And(x < 1, x > 2/3)))
Thus, collecting this result gives:
$$ \mathbb{E}(\xi|\eta) = \frac{38}{27} - \frac{20}{9}\eta + \frac{8}{9} \eta^2$$
which can be re-written as a piecewise function as
$$\mathbb{E}(\xi|\eta) =\begin{cases} \frac{2}{27} & \text{for}\: 0 < x < \frac{1}{3} \\\frac{14}{27} & \text{for}\: \frac{1}{3} < x < \frac{2}{3} \\\frac{38}{27} & \text{for}\: \frac{2}{3}<x < 1 \end{cases}
$$
The following is a quick simulation to demonstrate this.
```
x = np.random.rand(1000)
f,ax= subplots()
ax.hist(2*x**2,bins=array([0,1/3.,2/3.,1])**2*2,normed=True,alpha=.5)
ax.vlines([2/27.,14/27.,38/27.],0,ax.get_ylim()[1],linestyles='--')
ax.set_xlabel(r'$2 x^2$',fontsize=18);
```
This plot shows the intervals that correspond to the respective domains of $\eta$ with the vertical dotted lines showing the $\mathbb{E}(\xi|\eta) $ for that piece.
Example
-----------
This is Example 2.4
```
x,a=S.symbols('x,a')
xi = 2*x**2
half = S.Rational(1,2)
eta_0=S.Piecewise((2, S.And(S.Ge(x,0), S.Lt(x,half))),
(0, S.And(S.Ge(x,half), S.Le(x,1))))
eta_1=S.Piecewise((0, S.Lt(x,half)),
(x, S.And(S.Ge(x,half),S.Le(x,1))))
v=S.var('b:3') # coefficients for quadratic function of eta
h = a*eta_0 + (eta_1**np.arange(len(v))*v).sum()
J=S.integrate((xi - h)**2 ,(x,0,1))
sol=S.solve([J.diff(i) for i in v+(a,)],v+(a,))
hsol = h.subs(sol)
f=S.lambdify(x,hsol,'numpy')
print S.piecewise_fold(h.subs(sol))
t = np.linspace(0,1,51,endpoint=False)
fig,ax = subplots()
ax.plot(t, 2*t**2,label=r'$\xi=2 x^2$')
ax.plot(t,[f(i) for i in t],'-x',label=r'$\mathbb{E}(\xi|\eta)$')
ax.plot(t,map(S.lambdify(x,eta_0+eta_1),t),label=r'$\eta(x)$')
ax.set_ylim(ymax = 2.3)
ax.grid()
ax.legend(loc=0);
#ax.plot(t,map(S.lambdify(x,eta),t))
```
The figure shows the $\mathbb{E}(\xi|\eta)$ against $\xi$ and $\eta$. Note that $\xi= \mathbb{E}(\xi|\eta)= 2 x^2$ when $x\in[0,\frac{1}{2}]$ . Assembling the solution gives,
$$\mathbb{E}(\xi|\eta) =\begin{cases} \frac{1}{6} & \text{for}\: 0 \le x < \frac{1}{2} \\ 2 x^2 & \text{for}\: \frac{1}{2} < x \le 1 \end{cases}$$
This example warrants more a more detailed explanation since $\eta$ is more complicated. The first question is why did we choose $h(\eta)$ as a quadratic function? Since $\xi$ is a squared function of $x$ and since $x$ is part of $\eta$, we chose a quadratic function so that $h(\eta)$ would contain a $x^2$ in the domain where $\eta=x$. The motivation is that we are asking for a function $h(x)$ that most closely approximates $2x^2$. Well, obviously, the exact function is $h(x)=2 x^2$! Thus, we want $h(x)=2 x^2$ over the domain where $\eta=x$, which is $x\in[\frac{1}{2},1]$ and that is exactly what we have.
We could have used our inner product by considering two separate functions,
$\eta_1 (x) = 2$
where $x\in [0,\frac{1}{2}]$ and
$$\eta_2 (x) = x$$
where $x\in [\frac{1}{2},1]$. Thus, at the point of projection, we have
$$ \mathbb{E}((2 x^2 - 2 c) \cdot 2) = 0$$
which leads to
$$\int_0^{\frac{1}{2}} 2 x^2 \cdot 2 dx = \int_0^{\frac{1}{2}} c 2 \cdot 2 dx $$
and a solution for $c$,
$$ c = \frac{1}{12} $$
Assembling the solution for $x\in[0,\frac{1}{2}]$ gives
$$ \mathbb{E}(\xi|\eta) = \frac{2}{12}$$
We can do the same thing for the other piece, $\eta_2$,
$$ \mathbb{E}((2 x^2 - c x^2) \cdot x) = 0$$
which, by inspection, gives $c=2$. Thus, for $x\in[\frac{1}{2},1]$ , we have
$$ \mathbb{E}(\xi|\eta)= 2 x^2$$
which is what we had before.
Example
-----------
This is Exercise 2.6
```
x,a=S.symbols('x,a')
xi = 2*x**2
eta = 1 - abs(2*x-1)
half = S.Rational(1,2)
eta=S.Piecewise((1+(2*x-1), S.And(S.Ge(x,0),S.Lt(x,half))),
(1-(2*x-1), S.And(S.Ge(x,half),S.Lt(x,1))))
v=S.var('b:3') # assume h is quadratic in eta
h = (eta**np.arange(len(v))*v).sum()
J=S.integrate((xi - h)**2 ,(x,0,1))
sol=S.solve([J.diff(i) for i in v],v)
hsol = h.subs(sol)
print S.piecewise_fold(h.subs(sol))
t = np.linspace(0,1,51,endpoint=False)
fig,ax = subplots()
fig.set_size_inches(5,5)
ax.plot(t, 2*t**2,label=r'$\xi=2 x^2$')
ax.plot(t,[hsol.subs(x,i) for i in t],'-x',label=r'$\mathbb{E}(\xi|\eta)$')
ax.plot(t,map(S.lambdify(x,eta),t),label=r'$\eta(x)$')
ax.legend(loc=0)
ax.grid()
```
The figure shows that the $\mathbb{E}(\xi|\eta)$ is continuous over the entire domain. The code above solves for the conditional expectation using optimization assuming that $h$ is a quadratic function of $\eta$, but we can also do it by using the inner product. Thus,
$$ \mathbb{E}\left((2 x^2 - h(\eta_1(x)) )\eta_1(x)\right)= \int_0^{\frac{1}{2}} (2 x^2 - h(\eta_1(x)) )\eta_1(x) dx = 0$$
where $\eta_1 = 2x $ for $x\in [0,\frac{1}{2}]$. We can re-write this in terms of $\eta_1$ as
$$ \int_0^1 \left(\frac{\eta_1^2}{2}-h(\eta_1)\right)\eta_1 d\eta_1$$
and the solution jumps right out as $h(\eta_1)=\frac{\eta_1^2}{2}$. Note that $\eta_1\in[0,1]$. Doing the same thing for the other piece,
$$ \eta_2 = 2 - 2 x, \hspace{1em} \forall x\in[\frac{1}{2},1]$$
gives,
$$ \int_0^1 \left(\frac{(2-\eta_2)^2}{2}-h(\eta_2)\right)\eta_2 d\eta_2$$
and again, the optimal $h(\eta_2)$ jumps right out as
$$ h(\eta_2) = \frac{(2-\eta_2)^2}{2} , \hspace{1em} \forall \eta_2\in[0,1]$$
and since $\eta_2$ and $\eta_2$ represent the same variable over the same domain we can just add these up to get the full solution:
$$ h(\eta) = \frac{1}{2} \left( 2 - 2 \eta + \eta^2\right) $$
and then back-substituting each piece for $x$ produces the same solution as `sympy`.
Example
-----------
This is Exercise 2.14
```
x,a=S.symbols('x,a')
half = S.Rational(1,2)
xi = 2*x**2
eta=S.Piecewise((2*x, S.And(S.Ge(x,0),S.Lt(x,half))),
((2*x-1),S.Ge(x,half)),
)
v=S.var('b:3')
h = (eta**np.arange(len(v))*v).sum()
J=S.integrate((xi - h)**2 ,(x,0,1))
sol=S.solve([J.diff(i) for i in v],v)
hsol = h.subs(sol)
print S.piecewise_fold(h.subs(sol))
t = np.linspace(0,1,51,endpoint=False)
fig,ax = subplots()
fig.set_size_inches(5,5)
ax.plot(t, 2*t**2,label=r'$\xi=2 x^2$')
ax.plot(t,[hsol.subs(x,i) for i in t],'-x',label=r'$\mathbb{E}(\xi|\eta)$')
ax.plot(t,map(S.lambdify(x,eta),t),label=r'$\eta(x)$')
ax.legend(loc=0)
ax.grid()
```
As before, using the inner product for this problem, gives:
$$ \int_0^1 \left(\frac{\eta_1^2}{2}-h(\eta_1)\right)\eta_1 d\eta_1=0$$
and the solution jumps right out as
$$h(\eta_1)=\frac{\eta_1^2}{2} , \hspace{1em} \forall \eta_1\in[0,1$$
where $\eta_1(x)=2x$. Doing the same thing for $\eta_2=2x-1$ gives,
$$ \int_0^1 \left(\frac{(1+\eta_2)^2}{2}-h(\eta_2)\right)\eta_1 d\eta_2=0$$
with
$$h(\eta_2)=\frac{(1+\eta_2)^2}{2} , \hspace{1em} \forall \eta_2\in[0,1]$$
and then adding these up as before gives the full solution:
$$ h(\eta)= \frac{1}{2} +\eta + \eta^2$$
Back-substituting each piece for $x$ produces the same solution as `sympy`.
```
xs = np.random.rand(100)
print np.mean([(2*i**2-hsol.subs(x,i))**2 for i in xs])
print S.integrate((2*x**2-hsol)**2,(x,0,1)).evalf()
```
0.282679822464177
0.270833333333333
## Summary
We worked out some of the great examples in Brzezniak's book using our methods as a way to show multiple ways to solve the same problem. In particular, comparing Brzezniak's more measure-theoretic methods to our less abstract techniques is a great way to get a handle on those concepts which you will need for more advanced study in stochastic process.
As usual, the corresponding [IPython Notebook](www.ipython.org) notebook for this post is available for download [here](https://github.com/unpingco/Python-for-Signal-Processing/blob/master/Conditional_expectation_MSE_Ex.ipynb)
Comments and corrections welcome!
References
---------------
* Brzezniak, Zdzislaw, and Tomasz Zastawniak. Basic stochastic processes: a course through exercises. Springer, 2000.
```
```
| f4458f3c6719ed6059dec716f27c59f0f1991699 | 98,510 | ipynb | Jupyter Notebook | Conditional_expectation_MSE_Ex.ipynb | BadPhysicist/Python-for-Signal-Processing | a2565b75600359c244b694274bb03e4a1df934d6 | [
"CC-BY-3.0"
] | 10 | 2016-11-19T14:10:23.000Z | 2020-08-28T18:10:42.000Z | Conditional_expectation_MSE_Ex.ipynb | dougmcclymont/Python-for-Signal-Processing | a2565b75600359c244b694274bb03e4a1df934d6 | [
"CC-BY-3.0"
] | null | null | null | Conditional_expectation_MSE_Ex.ipynb | dougmcclymont/Python-for-Signal-Processing | a2565b75600359c244b694274bb03e4a1df934d6 | [
"CC-BY-3.0"
] | 5 | 2018-02-26T06:14:46.000Z | 2019-09-04T07:23:13.000Z | 135.130316 | 25,204 | 0.826617 | true | 4,982 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.855851 | 0.737348 | __label__eng_Latn | 0.938744 | 0.551439 |
L2 error squared estimation
----------------------------
### Bilinear quad
```python
from sympy import *
from sympy.integrals.intpoly import polytope_integrate
from sympy.abc import x, y
```
```python
points = [ Point2D(-1, -1), Point2D(2, -2), Point2D(4, 1), Point2D(-2, 3)]
def phi_alpha_beta(alpha, beta, x, y):
return (1 + alpha * x) * (1 + beta * y) / 4
# Define basis functions phi(x, y)
def phi_local(x, y):
return [
phi_alpha_beta(-1, -1, x, y),
phi_alpha_beta(1, -1, x, y),
phi_alpha_beta(1, 1, x, y),
phi_alpha_beta(-1, 1, x, y)
]
# Define transformation from reference element T: K_hat -> K,
# with K being the element defined by quad.
def T(x, y):
p = phi_local(x, y)
return points[0] * p[0] + points[1] * p[1] + points[2] * p[2] + points[3] * p[3]
def u(x, y):
return 5 * x * y + 3 * x - 2 * y - 5
def u_local(xi, eta):
(x, y) = T(xi, eta)
return u(x, y)
u_h_weights = [u(p[0], p[1]) for p in points]
def u_h_local(xi, eta):
p = phi_local(xi, eta)
u = u_h_weights
return u[0] * p[0] + u[1] * p[1] + u[2] * p[2] + u[3] * p[3]
```
```python
det_J_K = Matrix(T(x, y)).jacobian(Matrix([x, y])).det()
integrand = expand(det_J_K * (u_h_local(x, y) - u_local(x, y))**2)
# Note: It may be necessary to expand the polynomial for use with polytope_integrate
#integral = polytope_integrate(reference_quad, 1)
# Note: polytope_integrate did not seem to work so well. Since we anyway integrate in the reference domain,
# which is a simple square, we can just integrate normally with simple limits
integral = integrate(integrand, (x, -1, 1), (y, -1, 1))
integral
```
$\displaystyle \frac{9955}{12}$
```python
expand(u_h_local(x, y))
```
$\displaystyle \frac{43 x y}{2} + \frac{29 x}{2} - \frac{3 y}{2} - \frac{19}{2}$
```python
expand(u_local(x, y))
```
$\displaystyle - \frac{15 x^{2} y^{2}}{16} - \frac{45 x^{2} y}{8} - \frac{135 x^{2}}{16} + \frac{25 x y^{2}}{4} + \frac{43 x y}{2} + \frac{33 x}{4} + \frac{35 y^{2}}{16} + \frac{33 y}{8} - \frac{37}{16}$
```python
```
```python
```
| bea9d8022650f9cfba8e632e49ba455f80c860e9 | 4,541 | ipynb | Jupyter Notebook | notebooks/unit_tests_analytic_solutions.ipynb | InteractiveComputerGraphics/higher_order_embedded_fem | 868fbc25f93cae32aa3caaa41a60987d4192cf1b | [
"MIT"
] | 10 | 2021-10-19T17:11:52.000Z | 2021-12-26T10:20:53.000Z | notebooks/unit_tests_analytic_solutions.ipynb | InteractiveComputerGraphics/higher_order_embedded_fem | 868fbc25f93cae32aa3caaa41a60987d4192cf1b | [
"MIT"
] | null | null | null | notebooks/unit_tests_analytic_solutions.ipynb | InteractiveComputerGraphics/higher_order_embedded_fem | 868fbc25f93cae32aa3caaa41a60987d4192cf1b | [
"MIT"
] | 3 | 2021-10-20T16:13:05.000Z | 2022-03-16T01:50:35.000Z | 25.088398 | 222 | 0.473244 | true | 766 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944177 | 0.815232 | 0.769724 | __label__eng_Latn | 0.581936 | 0.626658 |
# CS 224n Assignment #2: word2vec
## Understanding word2vec
Let’s have a quick refresher on the word2vec algorithm. The key insight behind word2vec is that ‘a word is known by the company it keeps’. Concretely, suppose we have a ‘center’ word c and a contextual window surrounding c. We shall refer to words that lie in this contextual window as ‘outside words’. For example, in Figure 1 we see that the center word c is ‘banking’. Since the context window size is 2, the outside words are ‘turning’, ‘into’, ‘crises’, and ‘as’.
The goal of the skip-gram word2vec algorithm is to accurately learn the probability distribution $P(O|C)$. Given a specific word $o$ and a specific word $c$, we want to calculate $P (O = o|C = c)$, which is the probability that word $o$ is an ‘outside’ word for $c$, i.e., the probability that $o$ falls within the contextual window of $c$.
In word2vec, the conditional probability distribution is given by taking vector dot-products and applying the softmax function:
\begin{equation*}
P(O = o | C = c) = \frac{\exp(u_o^t v_c)}{\sum_\limits{w \in Vocab} \exp(u_w^tv_c)}
\tag{1}
\end{equation*}
Here, $u_o$ is the ‘outside’ vector representing outside word $o$, and $v_c$ is the ‘center’ vector representing center word $c$. To contain these parameters, we have two matrices, $U$ and $V$ . The columns of $U$ are all the ‘outside’
vectors $u_w$. The columns of $V$ are all of the ‘center’ vectors $v_w$. Both $U$ and $V$ contain a vector for every $w \in$ Vocabulary$^{1}$<a id='note1'></a>.
Recall from lectures that, for a single pair of words c and o, the loss is given by:
\begin{equation*}
J_{naive-softmax}(v_c,o,U) = −\log P(O = o|C = c).
\tag{2}
\end{equation*}
Another way to view this loss is as the cross-entropy$^{2}$<a id='note2'></a> between the true distribution $y$ and the predicted distribution $\hat{y}$. Here, both $y$ and $\hat{y}$ are vectors with length equal to the number of words in the vocabulary. Furthermore, the $k^{th}$ entry in these vectors indicates the conditional probability of the $k^{th}$ word being an ‘outside word’ for the given $c$. The true empirical distribution $y$ is a one-hot vector with a $1$ for the true outside word $o$, and $0$ everywhere else. The predicted distribution $\hat{y}$ is the probability distribution $P (O|C = c)$ given by our model in equation $\eqref{eq:eq1}$.
Notes:
* [1](#note1) Assume that every word in our vocabulary is matched to an integer number $k$. Bolded lowercase letters represent vectors. $u_k$ is both the $k^th$ column of $U$ and the ‘outside’ word vector for the word indexed by $k$. $v_k$ is both the $k^th$ column of V and the ‘center’ word vector for the word indexed by $k$. In order to simplify notation we shall interchangeably use $k$ to refer to the word and the index-of-the-word.
* [2](#note2) The Cross Entropy Loss between the true (discrete) probability distribution p and another distribution $q$ is $−\sum_{i} p_i \log(q_i)$.
### (a)
\begin{equation}
- \sum\limits_{w \in {Vocab}} y_w \log(\hat{y}_w) = - \sum\limits_{w \in {Vocab}} \mathbb{1}_{w = o} \log(\hat{y}_w) = - \log (\hat{y}_o)
\tag{3}
\end{equation}
### (b)
Remembering that:
\begin{equation}
u_w = \delta_w U^t \text{ and } y = \delta_o
\tag{3.b.1}
\end{equation}
\begin{equation}
\frac{\partial}{\partial v}\left(\exp(u^tv)\right) = u \cdot \exp(u^tv)
\tag{3.b.2}
\end{equation}
\begin{equation}
\frac{d}{d v}\log(f(v)) = \frac{1}{f(v)} \frac{df}{dv}(v)
\tag{3.b.3}
\end{equation}
If $s_j = \frac{e^{x_j}}{\sum_\limits{k} e^{x_k}}$, then:
\begin{equation}
\frac{\partial s_j}{\partial x_i} =
\begin{cases}
s_j(1-s_j) & \text{if $i = j$} \\
-s_is_j & \text{otherwise}
\end{cases}
\tag{3.b.4}
\end{equation}
Then, from \ref{eq:eq1} and \ref{eq:eq2}, the chain-rule, then writing $x_w^c = u_w^tv_c$ :
\begin{equation}
\frac{\partial}{\partial v_c} J_{naive-softmax} = - \frac{\partial}{\partial v_c} \log \left( \hat{y}_o \right) = − \frac{1}{s_o^c} \frac{\partial \hat{y}_o} {\partial v_c} = − \frac{1}{\hat{y}_o} \sum\limits_{w \in Vocab} \frac{\partial x_w^c}{\partial v_c} \frac{\partial \hat{y}_o}{\partial x_w^c}
\end{equation}
\begin{equation}
\frac{\partial}{\partial v_c} J_{naive-softmax} = − \frac{1}{\hat{y}_o} \left( - \sum\limits_{\substack w \in Vocab \\ w \neq o} u_w^t \hat{y}_o \hat{y}_w + u_o^t \hat{y}_o(1-\hat{y}_o) \right) = - u_o^t + \sum\limits_{w \in Vocab} \hat{y}_w u_w^t = -\delta_o U^t + \left(\sum\limits_{w \in Vocab} \hat{y}_w \delta_w\right) U^t
\tag{3.b.5}
\end{equation}
Finally:
\begin{equation}
\frac{\partial}{\partial v_c} J_{naive-softmax} = U^t (\hat{y}-y)
\tag{3.b}
\end{equation}
### c
Adapting $\ref{eq:eq3.b}$:
\begin{equation}
\frac{\partial}{\partial u_w} J_{naive-softmax} = − \frac{1}{\hat{y}_o} \sum\limits_{w' \in Vocab} \frac{\partial x_{w'}^c}{\partial u_w} \frac{\partial \hat{y}_o}{\partial x_{w'}^c}
\end{equation}
\begin{equation}
\frac{\partial x_{w'}^c}{\partial u_w} = \frac{\partial (u_{w'}^tv_c)}{\partial u_w} =
\begin{cases}
0 & \text{if $w \neq w'$}\\
v_c & \text{otherwise}
\end{cases}
\end{equation}
Then:
\begin{equation}
\frac{\partial}{\partial u_w} J_{naive-softmax} = − \frac{1}{\hat{y}_o} v_c \frac{\partial \hat{y}_o}{\partial x_w^c} =
\begin{cases}
− \frac{1}{\hat{y}_o} v_c \hat{y}_o (1 - \hat{y}_o) & = (\hat{y}_o - 1) v_c & = (\hat{y}_o - y_o) v_c & \text{if $w = o$}\\
\frac{1}{\hat{y}_o} v_c \hat{y}_o \hat{y}_w & = \hat{y}_w v_c & = (\hat{y}_w - y_w) v_c & \text{otherwise}
\end{cases}
\end{equation}
\begin{equation}
\frac{\partial}{\partial u_w} J_{naive-softmax} = (\hat{y} - y)v_c^t
\tag{3.c}
\end{equation}
### d
The sigmoid function is given by:
\begin{equation}
\sigma(x) = \frac{1}{1 + e^{-x}} = \frac{e^x}{e^x + 1}
\tag{4}
\end{equation}
Then, the derivative of the sigmoid function with respect to the scalar $x$ is given by:
\begin{equation}
\frac{d\sigma}{dx} = - \frac{-e^{-x}}{{\left(1 + e^{-x}\right)}^2} = \frac{1 +e^{-x} - 1}{{\left(1 + e^{-x}\right)}^2} = \sigma(x)(1 - \sigma(x))
\tag{4.d}
\end{equation}
### e
Now we shall consider the Negative Sampling loss, which is an alternative to the Naive Softmax loss. Assume that K negative samples (words) are drawn from the vocabulary. For simplicity of notation we shall refer to them as $w_1,w_2,\dots,w_K$ and their outside vectors as $u_1,\dots,u_K$. Note that $o \notin \left\{w_1,\dots,w_K\right\}$. For a center word $c$ and an outside word $o$, the negative sampling loss function is given by:
\begin{equation}
J_{neg-sample}(v_c,o,U) = − \log\left(\sigma(u_o^t v_c)\right) − \sum\limits_{k=1}^{K}
\log\left(\sigma(−u_k^t v_c)\right)
\tag{5}
\end{equation}
####(i)
\begin{equation}
\frac{\partial}{\partial v_c} J_{neg-sample}(v_c,o,U) = -\frac{1}{\sigma(x_o^c)} \frac{\partial x_o^c}{\partial v_c} \frac{d \sigma}{d x}(x_o^c) - \sum\limits_{k=1}^{K} \frac{1}{\sigma(-x_k^c)} \left(-\frac{\partial x_k^c}{\partial v_c} \frac{d \sigma}{d x}(-x_k^c)\right) = -(1 - \sigma(x_o^c)) u_o^t + \sum\limits_{k=1}^{K} (1 - \sigma(-x_k^c)) u_k^t
\end{equation}
\begin{equation}
\frac{\partial}{\partial v_c} J_{neg-sample}(v_c,o,U) = -(1 - \sigma(u_o^t v_c)) u_o^t + \sum\limits_{k=1}^{K} (1 - \sigma(-u_k^t v_c)) u_k^t
\tag{5.1}
\end{equation}
####(ii)
\begin{equation}
\frac{\partial}{\partial u_o} J_{neg-sample}(v_c,o,U) = -\frac{1}{\sigma(x_o^c)} \frac{\partial x_o^c}{\partial u_o} \frac{d \sigma}{d x}(x_o^c) - \sum\limits_{k=1}^{K} \frac{1}{\sigma(-x_k^c)} \left(-\frac{\partial x_k^c}{\partial u_o} \frac{d \sigma}{d x}(-x_k^c)\right) = -(1 - \sigma(u_o^t v_c)) v_c
\tag{5.2}
\end{equation}
####(iii)
\begin{equation}
\frac{\partial}{\partial u_k} J_{neg-sample}(v_c,o,U) = -\frac{1}{\sigma(x_o^c)} \frac{\partial x_o^c}{\partial u_k} \frac{d \sigma}{d x}(x_o^c) - \sum\limits_{k'=1}^{K} \frac{1}{\sigma(-x_{k'}^c)} \left(-\frac{\partial x_{k'}^c}{\partial u_k} \frac{d \sigma}{d x}(-x_{k'}^c)\right) = (1 - \sigma(-u_k^t v_c)) v_c
\tag{5.2}
\end{equation}
### f
Suppose the center word is $c = w_t$ and the context window is $\left[w_{t−m}, \dots, w_{t−1}, w_t, w_{t+1},\dots, w_{t+m}\right]$, where $m$ is the context window size. Recall that for the skip-gram version of word2vec, the total loss for the context window is:
\begin{equation}
J_{skip-gram}(v_c,w_{t−m},\dots,w_{t+m},U) = \sum\limits_{\substack{−m \leq j \leq m \\ j\neq0}} J(v_c,w_{t+j},U)
\tag{6}
\end{equation}
Here, $J(v_c,w_{t+j},U)$ represents an arbitrary loss term for the center word $c = w_t$ and outside word $w_{t+j}$. $J(v_c,w_{t+j},U)$ could be $J_{naive-softmax}(v_c,w_{t+j},U)$ or $J_{neg-sample}(v_c,w_{t+j},U)$, depending on your implementation.
#### (i)
\begin{equation}
\frac{\partial}{\partial U} J_{skip-gram}(v_c,w_{t−m},\dots,w_{t+m},U) = \sum\limits_{\substack{−m \leq j \leq m \\ j\neq0}} \frac{\partial}{\partial U} J(v_c,w_{t+j},U)
\tag{6.i}
\end{equation}
#### (ii)
\begin{equation}
\frac{\partial}{\partial v_c} J_{skip-gram}(v_c,w_{t−m},\dots,w_{t+m},U) = \sum\limits_{\substack{−m \leq j \leq m \\ j\neq0}} \frac{\partial}{\partial v_c} J(v_c,w_{t+j},U)
\tag{6.ii}
\end{equation}
#### (iii)
If $w \neq c$:
\begin{equation}
\frac{\partial}{\partial v_w} J_{skip-gram}(v_c,w_{t−m},\dots,w_{t+m},U) = 0
\tag{6.iii}
\end{equation}
## Coding: Implementing word2vec
### Word2Vec with skip-grams and Negative Sampling loss
```
# utils
import numpy as np
def normalizeRows(x):
""" Row normalization function
Implement a function that normalizes each row of a matrix to have
unit length.
"""
N = x.shape[0]
x /= np.sqrt(np.sum(x**2, axis=1)).reshape((N,1)) + 1e-30
return x
def softmax(x):
"""Compute the softmax function for each row of the input x.
It is crucial that this function is optimized for speed because
it will be used frequently in later code.
Arguments:
x -- A D dimensional vector or N x D dimensional numpy matrix.
Return:
x -- You are allowed to modify x in-place
"""
orig_shape = x.shape
if len(x.shape) > 1:
# Matrix
tmp = np.max(x, axis=1)
x -= tmp.reshape((x.shape[0], 1))
x = np.exp(x)
tmp = np.sum(x, axis=1)
x /= tmp.reshape((x.shape[0], 1))
else:
# Vector
tmp = np.max(x)
x -= tmp
x = np.exp(x)
tmp = np.sum(x)
x /= tmp
assert x.shape == orig_shape
return x
```
```
# checks
#!/usr/bin/env python
import numpy as np
import random
# First implement a gradient checker by filling in the following functions
def gradcheck_naive(f, x, gradientText):
""" Gradient check for a function f.
Arguments:
f -- a function that takes a single argument and outputs the
loss and its gradients
x -- the point (numpy array) to check the gradient at
gradientText -- a string detailing some context about the gradient computation
Notes:
Note that gradient checking is a sanity test that only checks whether the
gradient and loss values produced by your implementation are consistent with
each other. Gradient check passing on its own doesn’t guarantee that you
have the correct gradients. It will pass, for example, if both the loss and
gradient values produced by your implementation are 0s (as is the case when
you have not implemented anything). Here is a detailed explanation of what
gradient check is doing if you would like some further clarification:
http://ufldl.stanford.edu/tutorial/supervised/DebuggingGradientChecking/.
"""
rndstate = random.getstate()
random.setstate(rndstate)
fx, grad = f(x) # Evaluate function value at original point
h = 1e-4 # Do not change this!
# Iterate over all indexes ix in x to check the gradient.
it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
ix = it.multi_index
x[ix] += h # increment by h
random.setstate(rndstate)
fxh, _ = f(x) # evalute f(x + h)
x[ix] -= 2 * h # restore to previous value (very important!)
random.setstate(rndstate)
fxnh, _ = f(x)
x[ix] += h
numgrad = (fxh - fxnh) / 2 / h
# Compare gradients
reldiff = abs(numgrad - grad[ix]) / max(1, abs(numgrad), abs(grad[ix]))
if reldiff > 1e-5:
print("Gradient check failed for %s." % gradientText)
print("First gradient error found at index %s in the vector of gradients" % str(ix))
print("Your gradient: %f \t Numerical gradient: %f" % (
grad[ix], numgrad))
return
it.iternext() # Step to next dimension
print("Gradient check passed!. Read the docstring of the `gradcheck_naive`"
" method in utils.gradcheck.py to understand what the gradient check does.")
def grad_tests_softmax(skipgram, dummy_tokens, dummy_vectors, dataset):
print ("======Skip-Gram with naiveSoftmaxLossAndGradient Test Cases======")
# first test
output_loss, output_gradCenterVecs, output_gradOutsideVectors = \
skipgram("c", 3, ["a", "b", "e", "d", "b", "c"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset)
assert np.allclose(output_loss, 11.16610900153398), \
"Your loss does not match expected loss."
expected_gradCenterVecs = [[ 0., 0., 0. ],
[ 0., 0., 0. ],
[-1.26947339, -1.36873189, 2.45158957],
[ 0., 0., 0. ],
[ 0., 0., 0. ]]
expected_gradOutsideVectors = [[-0.41045956, 0.18834851, 1.43272264],
[ 0.38202831, -0.17530219, -1.33348241],
[ 0.07009355, -0.03216399, -0.24466386],
[ 0.09472154, -0.04346509, -0.33062865],
[-0.13638384, 0.06258276, 0.47605228]]
assert np.allclose(output_gradCenterVecs, expected_gradCenterVecs), \
"Your gradCenterVecs do not match expected gradCenterVecs."
assert np.allclose(output_gradOutsideVectors, expected_gradOutsideVectors), \
"Your gradOutsideVectors do not match expected gradOutsideVectors."
print("The first test passed!")
# second test
output_loss, output_gradCenterVecs, output_gradOutsideVectors = \
skipgram("b", 3, ["a", "b", "e", "d", "b", "c"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset)
assert np.allclose(output_loss, 9.87714910003414), \
"Your loss does not match expected loss."
expected_gradCenterVecs = [[ 0., 0., 0. ],
[-0.14586705, -1.34158321, -0.29291951],
[ 0., 0., 0. ],
[ 0., 0., 0. ],
[ 0., 0., 0. ]]
expected_gradOutsideVectors = [[-0.30342672, 0.19808298, 0.19587419],
[-0.41359958, 0.27000601, 0.26699522],
[-0.08192272, 0.05348078, 0.05288442],
[ 0.6981188, -0.4557458, -0.45066387],
[ 0.10083022, -0.06582396, -0.06508997]]
assert np.allclose(output_gradCenterVecs, expected_gradCenterVecs), \
"Your gradCenterVecs do not match expected gradCenterVecs."
assert np.allclose(output_gradOutsideVectors, expected_gradOutsideVectors), \
"Your gradOutsideVectors do not match expected gradOutsideVectors."
print("The second test passed!")
# third test
output_loss, output_gradCenterVecs, output_gradOutsideVectors = \
skipgram("a", 3, ["a", "b", "e", "d", "b", "c"],
dummy_tokens, dummy_vectors[:5,:], dummy_vectors[5:,:], dataset)
assert np.allclose(output_loss, 10.810758628593335), \
"Your loss does not match expected loss."
expected_gradCenterVecs = [[-1.1790274, -1.35861865, 1.53590492],
[ 0., 0., 0. ],
[ 0., 0., 0. ],
[ 0., 0., 0. ],
[ 0., 0., 0. ]]
expected_gradOutsideVectors = [[-7.96035953e-01, -1.79609012e-02, 2.07761330e-01],
[ 1.40175316e+00, 3.16276545e-02, -3.65850437e-01],
[-1.99691259e-01, -4.50561933e-03, 5.21184016e-02],
[ 2.02560028e-02, 4.57034715e-04, -5.28671357e-03],
[-4.26281954e-01, -9.61816867e-03, 1.11257419e-01]]
assert np.allclose(output_gradCenterVecs, expected_gradCenterVecs), \
"Your gradCenterVecs do not match expected gradCenterVecs."
assert np.allclose(output_gradOutsideVectors, expected_gradOutsideVectors), \
"Your gradOutsideVectors do not match expected gradOutsideVectors."
print("The third test passed!")
print("All 3 tests passed!")
def grad_tests_negsamp(skipgram, dummy_tokens, dummy_vectors, dataset, negSamplingLossAndGradient):
print ("======Skip-Gram with negSamplingLossAndGradient======")
# first test
output_loss, output_gradCenterVecs, output_gradOutsideVectors = \
skipgram("c", 1, ["a", "b"], dummy_tokens, dummy_vectors[:5,:],
dummy_vectors[5:,:], dataset, negSamplingLossAndGradient)
assert np.allclose(output_loss, 16.15119285363322), \
"Your loss does not match expected loss."
expected_gradCenterVecs = [[ 0., 0., 0. ],
[ 0., 0., 0. ],
[-4.54650789, -1.85942252, 0.76397441],
[ 0., 0., 0. ],
[ 0., 0., 0. ]]
expected_gradOutsideVectors = [[-0.69148188, 0.31730185, 2.41364029],
[-0.22716495, 0.10423969, 0.79292674],
[-0.45528438, 0.20891737, 1.58918512],
[-0.31602611, 0.14501561, 1.10309954],
[-0.80620296, 0.36994417, 2.81407799]]
assert np.allclose(output_gradCenterVecs, expected_gradCenterVecs), \
"Your gradCenterVecs do not match expected gradCenterVecs."
assert np.allclose(output_gradOutsideVectors, expected_gradOutsideVectors), \
"Your gradOutsideVectors do not match expected gradOutsideVectors."
print("The first test passed!")
# second test
output_loss, output_gradCenterVecs, output_gradOutsideVectors = \
skipgram("c", 2, ["a", "b", "c", "a"], dummy_tokens, dummy_vectors[:5,:],
dummy_vectors[5:,:], dataset, negSamplingLossAndGradient)
assert np.allclose(output_loss, 28.653567707668795), \
"Your loss does not match expected loss."
expected_gradCenterVecs = [ [ 0., 0., 0. ],
[ 0., 0., 0. ],
[-6.42994865, -2.16396482, -1.89240934],
[ 0., 0., 0. ],
[ 0., 0., 0. ]]
expected_gradOutsideVectors = [ [-0.80413277, 0.36899421, 2.80685192],
[-0.9277269, 0.42570813, 3.23826131],
[-0.7511534, 0.34468345, 2.62192569],
[-0.94807832, 0.43504684, 3.30929863],
[-1.12868414, 0.51792184, 3.93970919]]
assert np.allclose(output_gradCenterVecs, expected_gradCenterVecs), \
"Your gradCenterVecs do not match expected gradCenterVecs."
assert np.allclose(output_gradOutsideVectors, expected_gradOutsideVectors), \
"Your gradOutsideVectors do not match expected gradOutsideVectors."
print("The second test passed!")
# third test
output_loss, output_gradCenterVecs, output_gradOutsideVectors = \
skipgram("a", 3, ["a", "b", "e", "d", "b", "c"],
dummy_tokens, dummy_vectors[:5,:],
dummy_vectors[5:,:], dataset, negSamplingLossAndGradient)
assert np.allclose(output_loss, 60.648705494891914), \
"Your loss does not match expected loss."
expected_gradCenterVecs = [ [-17.89425315, -7.36940626, -1.23364121],
[ 0., 0., 0. ],
[ 0., 0., 0. ],
[ 0., 0., 0. ],
[ 0., 0., 0. ]]
expected_gradOutsideVectors = [[-6.4780819, -0.14616449, 1.69074639],
[-0.86337952, -0.01948037, 0.22533766],
[-9.59525734, -0.21649709, 2.5043133 ],
[-6.02261515, -0.13588783, 1.57187189],
[-9.69010072, -0.21863704, 2.52906694]]
assert np.allclose(output_gradCenterVecs, expected_gradCenterVecs), \
"Your gradCenterVecs do not match expected gradCenterVecs."
assert np.allclose(output_gradOutsideVectors, expected_gradOutsideVectors), \
"Your gradOutsideVectors do not match expected gradOutsideVectors."
print("The third test passed!")
print("All 3 tests passed!")
```
```
import random
```
```
def sigmoid(x):
"""
Compute the sigmoid function for the input here.
Arguments:
x -- A scalar or numpy array.
Return:
s -- sigmoid(x)
"""
### YOUR CODE HERE (~1 Line)
s = 1. / (1. + np.exp(-x))
### END YOUR CODE
return s
```
```
def naiveSoftmaxLossAndGradient(
centerWordVec,
outsideWordIdx,
outsideVectors,
dataset
):
""" Naive Softmax loss & gradient function for word2vec models
Implement the naive softmax loss and gradients between a center word's
embedding and an outside word's embedding. This will be the building block
for our word2vec models.
Arguments:
centerWordVec -- numpy ndarray, center word's embedding
in shape (word vector length, )
(v_c in the pdf handout)
outsideWordIdx -- integer, the index of the outside word
(o of u_o in the pdf handout)
outsideVectors -- outside vectors is
in shape (num words in vocab, word vector length)
for all words in vocab (U in the pdf handout)
dataset -- needed for negative sampling, unused here.
Return:
loss -- naive softmax loss
gradCenterVec -- the gradient with respect to the center word vector
in shape (word vector length, )
(dJ / dv_c in the pdf handout)
gradOutsideVecs -- the gradient with respect to all the outside word vectors
in shape (num words in vocab, word vector length)
(dJ / dU)
"""
### YOUR CODE HERE (~6-8 Lines)
### Please use the provided softmax function (imported earlier in this file)
### This numerically stable implementation helps you avoid issues pertaining
### to integer overflow.
v_c = np.expand_dims(centerWordVec, axis=-1) # rank = (K,1)
x_w = np.dot(outsideVectors, v_c) # u_k^t v_c ; rank = (V,K) * (K,1) = (V,1)
y_hat = softmax(x_w.T) # y_hat # rank = (V,1)
loss = -np.log(y_hat[0, outsideWordIdx])
dy = y_hat.T
dy[outsideWordIdx] -= 1.
gradCenterVec = np.squeeze(np.dot(outsideVectors.T, dy)) # rank = (K,V) * (V,1) = (K,1)
gradOutsideVecs = np.dot(dy, v_c.T) #rank = (V,1) * (1,K) = (V,K)
### END YOUR CODE
return loss, gradCenterVec, gradOutsideVecs
```
```
def getNegativeSamples(outsideWordIdx, dataset, K):
""" Samples K indexes which are not the outsideWordIdx """
negSampleWordIndices = [None] * K
for k in range(K):
newidx = dataset.sampleTokenIdx()
while newidx == outsideWordIdx:
newidx = dataset.sampleTokenIdx()
negSampleWordIndices[k] = newidx
return negSampleWordIndices
```
```
def negSamplingLossAndGradient(
centerWordVec,
outsideWordIdx,
outsideVectors,
dataset,
K=10
):
""" Negative sampling loss function for word2vec models
Implement the negative sampling loss and gradients for a centerWordVec
and a outsideWordIdx word vector as a building block for word2vec
models. K is the number of negative samples to take.
Note: The same word may be negatively sampled multiple times. For
example if an outside word is sampled twice, you shall have to
double count the gradient with respect to this word. Thrice if
it was sampled three times, and so forth.
Arguments/Return Specifications: same as naiveSoftmaxLossAndGradient
"""
# Negative sampling of words is done for you. Do not modify this if you
# wish to match the autograder and receive points!
negSampleWordIndices = getNegativeSamples(outsideWordIdx, dataset, K)
indices = [outsideWordIdx] + negSampleWordIndices
### YOUR CODE HERE (~10 Lines)
### Please use your implementation of sigmoid in here.
v_c = np.expand_dims(centerWordVec, axis=-1) # rank = (K,1)
u_o, u_W = outsideVectors[outsideWordIdx], outsideVectors[negSampleWordIndices]
sigmoid_o = sigmoid(np.dot(u_o, v_c))[0]
sigmoid_k = sigmoid(-np.dot(u_W, v_c)) # rank = (W,K)*(K,1) = (W,1)
loss = -np.log(sigmoid_o) - np.sum(np.log(sigmoid_k))
gradCenterVec = (
- (1 - sigmoid_o) * u_o # rank = (K,1)
+ np.dot((1 - sigmoid_k).T, u_W) # rank = (1,W)*(W,K) = (K,1)
)
gradOutsideVecs = np.zeros_like(outsideVectors)
gradOutsideVecs_k = np.dot((1 - sigmoid_k), v_c.T) # rank = (W,1)*(1,K) = (W,K)
for idx, gradOutsideVec_k in zip(negSampleWordIndices, gradOutsideVecs_k):
gradOutsideVecs[idx] += gradOutsideVec_k
gradOutsideVecs[outsideWordIdx] += np.squeeze(-(1 - sigmoid_o) * v_c)
### END YOUR CODE
return loss, gradCenterVec, gradOutsideVecs
```
```
import functools
def skipgram(currentCenterWord, windowSize, outsideWords, word2Ind,
centerWordVectors, outsideVectors, dataset,
word2vecLossAndGradient=naiveSoftmaxLossAndGradient):
""" Skip-gram model in word2vec
Implement the skip-gram model in this function.
Arguments:
currentCenterWord -- a string of the current center word
windowSize -- integer, context window size
outsideWords -- list of no more than 2*windowSize strings, the outside words
word2Ind -- a dictionary that maps words to their indices in
the word vector list
centerWordVectors -- center word vectors (as rows) is in shape
(num words in vocab, word vector length)
for all words in vocab (V in pdf handout)
outsideVectors -- outside vectors is in shape
(num words in vocab, word vector length)
for all words in vocab (U in the pdf handout)
word2vecLossAndGradient -- the loss and gradient function for
a prediction vector given the outsideWordIdx
word vectors, could be one of the two
loss functions you implemented above.
Return:
loss -- the loss function value for the skip-gram model
(J in the pdf handout)
gradCenterVec -- the gradient with respect to the center word vector
in shape (word vector length, )
(dJ / dv_c in the pdf handout)
gradOutsideVecs -- the gradient with respect to all the outside word vectors
in shape (num words in vocab, word vector length)
(dJ / dU)
"""
loss = 0.0
gradCenterVecs = np.zeros(centerWordVectors.shape)
gradOutsideVectors = np.zeros(outsideVectors.shape)
### YOUR CODE HERE (~8 Lines)
c = word2Ind[currentCenterWord]
w = [word2Ind[outsideWord] for outsideWord in outsideWords]
v_c = centerWordVectors[c,:]
gradCenterVec = np.zeros(centerWordVectors.shape[1:])
loss, gradCenterVec, gradOutsideVectors = functools.reduce(
lambda X,Y : tuple(xi + yi for xi,yi in zip(X,Y)),
(word2vecLossAndGradient(v_c, w_k, outsideVectors, dataset) for w_k in w),
(loss, gradCenterVecs[c,:], gradOutsideVectors)
)
gradCenterVecs[c] = gradCenterVec
### END YOUR CODE
return loss, gradCenterVecs, gradOutsideVectors
```
```
#############################################
# Testing functions below. DO NOT MODIFY! #
#############################################
def word2vec_sgd_wrapper(word2vecModel, word2Ind, wordVectors, dataset,
windowSize,
word2vecLossAndGradient=naiveSoftmaxLossAndGradient):
batchsize = 50
loss = 0.0
grad = np.zeros(wordVectors.shape)
N = wordVectors.shape[0]
centerWordVectors = wordVectors[:int(N/2),:]
outsideVectors = wordVectors[int(N/2):,:]
for i in range(batchsize):
windowSize1 = random.randint(1, windowSize)
centerWord, context = dataset.getRandomContext(windowSize1)
c, gin, gout = word2vecModel(
centerWord, windowSize1, context, word2Ind, centerWordVectors,
outsideVectors, dataset, word2vecLossAndGradient
)
loss += c / batchsize
grad[:int(N/2), :] += gin / batchsize
grad[int(N/2):, :] += gout / batchsize
return loss, grad
def test_word2vec():
""" Test the two word2vec implementations, before running on Stanford Sentiment Treebank """
dataset = type('dummy', (), {})()
def dummySampleTokenIdx():
return random.randint(0, 4)
def getRandomContext(C):
tokens = ["a", "b", "c", "d", "e"]
return tokens[random.randint(0,4)], \
[tokens[random.randint(0,4)] for i in range(2*C)]
dataset.sampleTokenIdx = dummySampleTokenIdx
dataset.getRandomContext = getRandomContext
random.seed(31415)
np.random.seed(9265)
dummy_vectors = normalizeRows(np.random.randn(10,3))
dummy_tokens = dict([("a",0), ("b",1), ("c",2),("d",3),("e",4)])
print("==== Gradient check for skip-gram with naiveSoftmaxLossAndGradient ====")
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
skipgram, dummy_tokens, vec, dataset, 5, naiveSoftmaxLossAndGradient),
dummy_vectors, "naiveSoftmaxLossAndGradient Gradient")
grad_tests_softmax(skipgram, dummy_tokens, dummy_vectors, dataset)
print("==== Gradient check for skip-gram with negSamplingLossAndGradient ====")
gradcheck_naive(lambda vec: word2vec_sgd_wrapper(
skipgram, dummy_tokens, vec, dataset, 5, negSamplingLossAndGradient),
dummy_vectors, "negSamplingLossAndGradient Gradient")
grad_tests_negsamp(skipgram, dummy_tokens, dummy_vectors, dataset, negSamplingLossAndGradient)
test_word2vec()
```
==== Gradient check for skip-gram with naiveSoftmaxLossAndGradient ====
Gradient check passed!. Read the docstring of the `gradcheck_naive` method in utils.gradcheck.py to understand what the gradient check does.
======Skip-Gram with naiveSoftmaxLossAndGradient Test Cases======
The first test passed!
The second test passed!
The third test passed!
All 3 tests passed!
==== Gradient check for skip-gram with negSamplingLossAndGradient ====
Gradient check passed!. Read the docstring of the `gradcheck_naive` method in utils.gradcheck.py to understand what the gradient check does.
======Skip-Gram with negSamplingLossAndGradient======
The first test passed!
The second test passed!
The third test passed!
All 3 tests passed!
### Stochastic Gradient Method
```
# Save parameters every a few SGD iterations as fail-safe
SAVE_PARAMS_EVERY = 5000
import pickle
import glob
import random
import numpy as np
import os.path as op
def load_saved_params():
"""
A helper function that loads previously saved parameters and resets
iteration start.
"""
st = 0
for f in glob.glob("saved_params_*.npy"):
iter = int(op.splitext(op.basename(f))[0].split("_")[2])
if (iter > st):
st = iter
if st > 0:
params_file = "saved_params_%d.npy" % st
state_file = "saved_state_%d.pickle" % st
params = np.load(params_file)
with open(state_file, "rb") as f:
state = pickle.load(f)
return st, params, state
else:
return st, None, None
def save_params(iter, params):
params_file = "saved_params_%d.npy" % iter
np.save(params_file, params)
with open("saved_state_%d.pickle" % iter, "wb") as f:
pickle.dump(random.getstate(), f)
```
```
def sgd(f, x0, step, iterations, postprocessing=None, useSaved=False,
PRINT_EVERY=10):
""" Stochastic Gradient Descent
Implement the stochastic gradient descent method in this function.
Arguments:
f -- the function to optimize, it should take a single
argument and yield two outputs, a loss and the gradient
with respect to the arguments
x0 -- the initial point to start SGD from
step -- the step size for SGD
iterations -- total iterations to run SGD for
postprocessing -- postprocessing function for the parameters
if necessary. In the case of word2vec we will need to
normalize the word vectors to have unit length.
PRINT_EVERY -- specifies how many iterations to output loss
Return:
x -- the parameter value after SGD finishes
"""
# Anneal learning rate every several iterations
ANNEAL_EVERY = 20000
if useSaved:
start_iter, oldx, state = load_saved_params()
if start_iter > 0:
x0 = oldx
step *= 0.5 ** (start_iter / ANNEAL_EVERY)
if state:
random.setstate(state)
else:
start_iter = 0
x = x0
if not postprocessing:
postprocessing = lambda x: x
exploss = None
for iteration in range(start_iter + 1, iterations + 1):
# You might want to print the progress every few iterations.
loss = None
### YOUR CODE HERE (~2 lines)
loss, gradient = f(x)
x -= step * gradient
### END YOUR CODE
x = postprocessing(x)
if iteration % PRINT_EVERY == 0:
if not exploss:
exploss = loss
else:
exploss = .95 * exploss + .05 * loss
print("iter %d: %f" % (iteration, exploss))
if iteration % SAVE_PARAMS_EVERY == 0 and useSaved:
save_params(iteration, x)
if iteration % ANNEAL_EVERY == 0:
step *= 0.5
return x
```
```
def sanity_check():
quad = lambda x: (np.sum(x ** 2), x * 2)
print("Running sanity checks...")
t1 = sgd(quad, 0.5, 0.01, 1000, PRINT_EVERY=100)
print("test 1 result:", t1)
assert abs(t1) <= 1e-6
t2 = sgd(quad, 0.0, 0.01, 1000, PRINT_EVERY=100)
print("test 2 result:", t2)
assert abs(t2) <= 1e-6
t3 = sgd(quad, -1.5, 0.01, 1000, PRINT_EVERY=100)
print("test 3 result:", t3)
assert abs(t3) <= 1e-6
print("-" * 40)
print("ALL TESTS PASSED")
print("-" * 40)
sanity_check()
```
Running sanity checks...
iter 100: 0.004578
iter 200: 0.004353
iter 300: 0.004136
iter 400: 0.003929
iter 500: 0.003733
iter 600: 0.003546
iter 700: 0.003369
iter 800: 0.003200
iter 900: 0.003040
iter 1000: 0.002888
test 1 result: 8.414836786079764e-10
iter 100: 0.000000
iter 200: 0.000000
iter 300: 0.000000
iter 400: 0.000000
iter 500: 0.000000
iter 600: 0.000000
iter 700: 0.000000
iter 800: 0.000000
iter 900: 0.000000
iter 1000: 0.000000
test 2 result: 0.0
iter 100: 0.041205
iter 200: 0.039181
iter 300: 0.037222
iter 400: 0.035361
iter 500: 0.033593
iter 600: 0.031913
iter 700: 0.030318
iter 800: 0.028802
iter 900: 0.027362
iter 1000: 0.025994
test 3 result: -2.524451035823933e-09
----------------------------------------
ALL TESTS PASSED
----------------------------------------
### Application to the Stanford Sentiment Treebank
```
! wget http://nlp.stanford.edu/~socherr/stanfordSentimentTreebank.zip
! unzip stanfordSentimentTreebank.zip
```
--2020-03-04 10:07:49-- http://nlp.stanford.edu/~socherr/stanfordSentimentTreebank.zip
Resolving nlp.stanford.edu (nlp.stanford.edu)... 171.64.67.140
Connecting to nlp.stanford.edu (nlp.stanford.edu)|171.64.67.140|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://nlp.stanford.edu/~socherr/stanfordSentimentTreebank.zip [following]
--2020-03-04 10:07:49-- https://nlp.stanford.edu/~socherr/stanfordSentimentTreebank.zip
Connecting to nlp.stanford.edu (nlp.stanford.edu)|171.64.67.140|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6372817 (6.1M) [application/zip]
Saving to: ‘stanfordSentimentTreebank.zip’
stanfordSentimentTr 100%[===================>] 6.08M 11.3MB/s in 0.5s
2020-03-04 10:07:49 (11.3 MB/s) - ‘stanfordSentimentTreebank.zip’ saved [6372817/6372817]
Archive: stanfordSentimentTreebank.zip
creating: stanfordSentimentTreebank/
inflating: stanfordSentimentTreebank/datasetSentences.txt
creating: __MACOSX/
creating: __MACOSX/stanfordSentimentTreebank/
inflating: __MACOSX/stanfordSentimentTreebank/._datasetSentences.txt
inflating: stanfordSentimentTreebank/datasetSplit.txt
inflating: __MACOSX/stanfordSentimentTreebank/._datasetSplit.txt
inflating: stanfordSentimentTreebank/dictionary.txt
inflating: __MACOSX/stanfordSentimentTreebank/._dictionary.txt
inflating: stanfordSentimentTreebank/original_rt_snippets.txt
inflating: __MACOSX/stanfordSentimentTreebank/._original_rt_snippets.txt
inflating: stanfordSentimentTreebank/README.txt
inflating: __MACOSX/stanfordSentimentTreebank/._README.txt
inflating: stanfordSentimentTreebank/sentiment_labels.txt
inflating: __MACOSX/stanfordSentimentTreebank/._sentiment_labels.txt
inflating: stanfordSentimentTreebank/SOStr.txt
inflating: stanfordSentimentTreebank/STree.txt
```
import pickle
import os
class StanfordSentiment:
def __init__(self, path=None, tablesize = 1000000):
if not path:
path = "stanfordSentimentTreebank"
self.path = path
self.tablesize = tablesize
def tokens(self):
if hasattr(self, "_tokens") and self._tokens:
return self._tokens
tokens = dict()
tokenfreq = dict()
wordcount = 0
revtokens = []
idx = 0
for sentence in self.sentences():
for w in sentence:
wordcount += 1
if not w in tokens:
tokens[w] = idx
revtokens += [w]
tokenfreq[w] = 1
idx += 1
else:
tokenfreq[w] += 1
tokens["UNK"] = idx
revtokens += ["UNK"]
tokenfreq["UNK"] = 1
wordcount += 1
self._tokens = tokens
self._tokenfreq = tokenfreq
self._wordcount = wordcount
self._revtokens = revtokens
return self._tokens
def sentences(self):
if hasattr(self, "_sentences") and self._sentences:
return self._sentences
sentences = []
with open(self.path + "/datasetSentences.txt", "r") as f:
first = True
for line in f:
if first:
first = False
continue
splitted = line.strip().split()[1:]
# Deal with some peculiar encoding issues with this file
sentences += [[w.lower() for w in splitted]]
self._sentences = sentences
self._sentlengths = np.array([len(s) for s in sentences])
self._cumsentlen = np.cumsum(self._sentlengths)
return self._sentences
def numSentences(self):
if hasattr(self, "_numSentences") and self._numSentences:
return self._numSentences
else:
self._numSentences = len(self.sentences())
return self._numSentences
def allSentences(self):
if hasattr(self, "_allsentences") and self._allsentences:
return self._allsentences
sentences = self.sentences()
rejectProb = self.rejectProb()
tokens = self.tokens()
allsentences = [[w for w in s
if 0 >= rejectProb[tokens[w]] or random.random() >= rejectProb[tokens[w]]]
for s in sentences * 30]
allsentences = [s for s in allsentences if len(s) > 1]
self._allsentences = allsentences
return self._allsentences
def getRandomContext(self, C=5):
allsent = self.allSentences()
sentID = random.randint(0, len(allsent) - 1)
sent = allsent[sentID]
wordID = random.randint(0, len(sent) - 1)
context = sent[max(0, wordID - C):wordID]
if wordID+1 < len(sent):
context += sent[wordID+1:min(len(sent), wordID + C + 1)]
centerword = sent[wordID]
context = [w for w in context if w != centerword]
if len(context) > 0:
return centerword, context
else:
return self.getRandomContext(C)
def sent_labels(self):
if hasattr(self, "_sent_labels") and self._sent_labels:
return self._sent_labels
dictionary = dict()
phrases = 0
with open(self.path + "/dictionary.txt", "r") as f:
for line in f:
line = line.strip()
if not line: continue
splitted = line.split("|")
dictionary[splitted[0].lower()] = int(splitted[1])
phrases += 1
labels = [0.0] * phrases
with open(self.path + "/sentiment_labels.txt", "r") as f:
first = True
for line in f:
if first:
first = False
continue
line = line.strip()
if not line: continue
splitted = line.split("|")
labels[int(splitted[0])] = float(splitted[1])
sent_labels = [0.0] * self.numSentences()
sentences = self.sentences()
for i in range(self.numSentences()):
sentence = sentences[i]
full_sent = " ".join(sentence).replace('-lrb-', '(').replace('-rrb-', ')')
sent_labels[i] = labels[dictionary[full_sent]]
self._sent_labels = sent_labels
return self._sent_labels
def dataset_split(self):
if hasattr(self, "_split") and self._split:
return self._split
split = [[] for i in range(3)]
with open(self.path + "/datasetSplit.txt", "r") as f:
first = True
for line in f:
if first:
first = False
continue
splitted = line.strip().split(",")
split[int(splitted[1]) - 1] += [int(splitted[0]) - 1]
self._split = split
return self._split
def getRandomTrainSentence(self):
split = self.dataset_split()
sentId = split[0][random.randint(0, len(split[0]) - 1)]
return self.sentences()[sentId], self.categorify(self.sent_labels()[sentId])
def categorify(self, label):
if label <= 0.2:
return 0
elif label <= 0.4:
return 1
elif label <= 0.6:
return 2
elif label <= 0.8:
return 3
else:
return 4
def getDevSentences(self):
return self.getSplitSentences(2)
def getTestSentences(self):
return self.getSplitSentences(1)
def getTrainSentences(self):
return self.getSplitSentences(0)
def getSplitSentences(self, split=0):
ds_split = self.dataset_split()
return [(self.sentences()[i], self.categorify(self.sent_labels()[i])) for i in ds_split[split]]
def sampleTable(self):
if hasattr(self, '_sampleTable') and self._sampleTable is not None:
return self._sampleTable
nTokens = len(self.tokens())
samplingFreq = np.zeros((nTokens,))
self.allSentences()
i = 0
for w in range(nTokens):
w = self._revtokens[i]
if w in self._tokenfreq:
freq = 1.0 * self._tokenfreq[w]
# Reweigh
freq = freq ** 0.75
else:
freq = 0.0
samplingFreq[i] = freq
i += 1
samplingFreq /= np.sum(samplingFreq)
samplingFreq = np.cumsum(samplingFreq) * self.tablesize
self._sampleTable = [0] * self.tablesize
j = 0
for i in range(self.tablesize):
while i > samplingFreq[j]:
j += 1
self._sampleTable[i] = j
return self._sampleTable
def rejectProb(self):
if hasattr(self, '_rejectProb') and self._rejectProb is not None:
return self._rejectProb
threshold = 1e-5 * self._wordcount
nTokens = len(self.tokens())
rejectProb = np.zeros((nTokens,))
for i in range(nTokens):
w = self._revtokens[i]
freq = 1.0 * self._tokenfreq[w]
# Reweigh
rejectProb[i] = max(0, 1 - np.sqrt(threshold / freq))
self._rejectProb = rejectProb
return self._rejectProb
def sampleTokenIdx(self):
return self.sampleTable()[random.randint(0, self.tablesize - 1)]
```
```
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
import time
# Check Python Version
import sys
assert sys.version_info[0] == 3
assert sys.version_info[1] >= 5
# Reset the random seed to make sure that everyone gets the same results
random.seed(314)
dataset = StanfordSentiment()
tokens = dataset.tokens()
nWords = len(tokens)
# We are going to train 10-dimensional vectors for this assignment
dimVectors = 10
# Context size
C = 5
# Number of iterations
ITERATIONS = 40000
# Reset the random seed to make sure that everyone gets the same results
random.seed(31415)
np.random.seed(9265)
startTime=time.time()
wordVectors = np.concatenate(
((np.random.rand(nWords, dimVectors) - 0.5) /
dimVectors, np.zeros((nWords, dimVectors))),
axis=0)
wordVectors = sgd(
lambda vec: word2vec_sgd_wrapper(skipgram, tokens, vec, dataset, C,
negSamplingLossAndGradient),
wordVectors, 0.3, ITERATIONS, None, True, PRINT_EVERY=400)
# Note that normalization is not called here. This is not a bug,
# normalizing during training loses the notion of length.
print("sanity check: cost at convergence should be around or below 10")
print("training took %d seconds" % (time.time() - startTime))
# concatenate the input and output word vectors
wordVectors = np.concatenate(
(wordVectors[:nWords,:], wordVectors[nWords:,:]),
axis=0)
visualizeWords = [
"great", "cool", "brilliant", "wonderful", "well", "amazing",
"worth", "sweet", "enjoyable", "boring", "bad", "dumb",
"annoying", "female", "male", "queen", "king", "man", "woman", "rain", "snow",
"hail", "coffee", "tea"]
visualizeIdx = [tokens[word] for word in visualizeWords]
visualizeVecs = wordVectors[visualizeIdx, :]
temp = (visualizeVecs - np.mean(visualizeVecs, axis=0))
covariance = 1.0 / len(visualizeIdx) * temp.T.dot(temp)
U,S,V = np.linalg.svd(covariance)
coord = temp.dot(U[:,0:2])
for i in range(len(visualizeWords)):
plt.text(coord[i,0], coord[i,1], visualizeWords[i],
bbox=dict(facecolor='green', alpha=0.1))
plt.xlim((np.min(coord[:,0]), np.max(coord[:,0])))
plt.ylim((np.min(coord[:,1]), np.max(coord[:,1])))
plt.savefig('word_vectors.png')
```
iter 400: 23.788721
iter 800: 23.651472
iter 1200: 23.437040
iter 1600: 23.423329
iter 2000: 23.361609
iter 2400: 23.270850
iter 2800: 23.182961
iter 3200: 22.921591
iter 3600: 22.660957
iter 4000: 22.358877
iter 4400: 22.186541
iter 4800: 21.785855
iter 5200: 21.473992
iter 5600: 21.239454
iter 6000: 20.995119
iter 6400: 20.880119
iter 6800: 20.540802
iter 7200: 20.289593
iter 7600: 20.044701
iter 8000: 19.758854
iter 8400: 19.445766
iter 8800: 19.025912
iter 9200: 18.843652
iter 9600: 18.543918
iter 10000: 18.183247
iter 10400: 17.893945
iter 10800: 17.640432
iter 11200: 17.337233
iter 11600: 17.034636
iter 12000: 16.838378
iter 12400: 16.682625
iter 12800: 16.468133
iter 13200: 16.246599
iter 13600: 15.994704
iter 14000: 15.742924
iter 14400: 15.559705
iter 14800: 15.301844
iter 15200: 15.117236
iter 15600: 14.884666
iter 16000: 14.626344
iter 16400: 14.432468
iter 16800: 14.115190
iter 17200: 13.960216
iter 17600: 13.772148
iter 18000: 13.604191
iter 18400: 13.353116
iter 18800: 13.162832
iter 19200: 12.992807
iter 19600: 12.951957
iter 20000: 12.824193
iter 20400: 12.668114
iter 20800: 12.600277
iter 21200: 12.497501
iter 21600: 12.352722
iter 22000: 12.262171
iter 22400: 12.112166
iter 22800: 12.028569
iter 23200: 11.967956
iter 23600: 11.858721
iter 24000: 11.717870
iter 24400: 11.672659
iter 24800: 11.536587
iter 25200: 11.455251
iter 25600: 11.379393
iter 26000: 11.361207
iter 26400: 11.246546
iter 26800: 11.170501
iter 27200: 11.051471
iter 27600: 10.950918
iter 28000: 10.915814
iter 28400: 10.843146
iter 28800: 10.831022
iter 29200: 10.706812
iter 29600: 10.659240
iter 30000: 10.537610
iter 30400: 10.495808
iter 30800: 10.471543
iter 31200: 10.432114
iter 31600: 10.467945
iter 32000: 10.493934
iter 32400: 10.372921
iter 32800: 10.360953
iter 33200: 10.313822
iter 33600: 10.296646
iter 34000: 10.228125
iter 34400: 10.213338
iter 34800: 10.177693
iter 35200: 10.168569
iter 35600: 10.193499
iter 36000: 10.171302
iter 36400: 10.085124
iter 36800: 10.100318
iter 37200: 10.092373
iter 37600: 10.129698
iter 38000: 10.133342
iter 38400: 10.064348
iter 38800: 9.989171
iter 39200: 9.974958
iter 39600: 9.912160
iter 40000: 9.867437
sanity check: cost at convergence should be around or below 10
training took 7018 seconds
```
from IPython.display import Image
Image("word_vectors.png")
```
```
```
| f5e9a91cb04b45a9e47abd80303d5207f6e557b1 | 103,855 | ipynb | Jupyter Notebook | a2.ipynb | beunouah/cs224n | 9b23d573e72979108c09c68b9c687e265ff40e66 | [
"MIT"
] | null | null | null | a2.ipynb | beunouah/cs224n | 9b23d573e72979108c09c68b9c687e265ff40e66 | [
"MIT"
] | null | null | null | a2.ipynb | beunouah/cs224n | 9b23d573e72979108c09c68b9c687e265ff40e66 | [
"MIT"
] | null | null | null | 62.151406 | 27,546 | 0.592268 | true | 15,117 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.847968 | 0.735218 | __label__eng_Latn | 0.70415 | 0.54649 |
# A Gentle Introduction to HARK: Buffer Stock Saving
This notebook explores the behavior of a consumer identical to the perfect foresight consumer described in [Gentle-Intro-To-HARK-PerfForesightCRRA](https://econ-ark.org/materials/Gentle-Intro-To-HARK-PerfForesightCRRA) except that now the model incorporates income uncertainty.
```python
# This cell has a bit of initial setup.
# Click the "Run" button immediately above the notebook in order to execute the contents of any cell
# WARNING: Each cell in the notebook relies upon results generated by previous cells
# The most common problem beginners have is to execute a cell before all its predecessors
# If you do this, you can restart the kernel (see the "Kernel" menu above) and start over
import matplotlib.pyplot as plt
import numpy as np
import HARK
from time import clock
from copy import deepcopy
mystr = lambda number : "{:.4f}".format(number)
from HARK.utilities import plotFuncs
```
## The Consumer's Problem with Transitory and Permanent Shocks
### Mathematical Description
Our new type of consumer receives two income shocks at the beginning of each period. Permanent income would grow by a factor $\Gamma$ in the absence of any shock , but its growth is modified by a shock, $\psi_{t+1}$:
\begin{align}
P_{t+1} & = \Gamma P_{t}\psi_{t+1}
\end{align}
whose expected (mean) value is $\mathbb{E}_{t}[\psi_{t+1}]=1$. Actual income received $Y$ is equal to permanent income $P$ multiplied by a transitory shock $\theta$:
\begin{align}
Y_{t+1} & = \Gamma P_{t+1}\theta_{t+1}
\end{align}
where again $\mathbb{E}_{t}[\theta_{t+1}] = 1$.
As with the perfect foresight problem, this model can be rewritten in terms of _normalized_ variables, e.g. the ratio of 'market resources' $M_{t}$ (wealth plus current income) to permanent income is $m_t \equiv M_t/P_t$. (See [here](http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/) for the theory). In addition, lenders may set a limit on borrowing: The ratio $a_{t}$ of end-of-period assets to permanent income $A_t/P_t$ must be greater than $\underline{a} \leq 0$. (So, if $\underline{a}=-0.3$, the consumer cannot borrow more than 30 percent of their permanent income).
The consumer's (normalized) problem turns out to be:
\begin{eqnarray*}
v_t(m_t) &=& \max_{c_t} ~~u(c_t) + \beta \mathbb{E} [(\Gamma_{t+1}\psi_{t+1})^{1-\rho} v_{t+1}(m_{t+1}) ], \\
& \text{s.t.} & \\
a_t &=& m_t - c_t, \\
a_t &\geq& \underline{a}, \\
m_{t+1} &=& a_t R/(\Gamma_{t+1} \psi_{t+1}) + \theta_{t+1}.
\end{eqnarray*}
For present purposes, we assume that the transitory and permanent shocks are independent. The permanent shock is assumed to be (approximately) lognormal, while the transitory shock has two components: A probability $\wp$ that the consumer is unemployed, in which case $\theta^{u}=\underline{\theta}$, and a probability $(1-\wp)$ of a shock that is a lognormal with a mean chosen so that $\mathbb{E}_{t}[\theta_{t+n}]=1$.
### Representing the Income Shocks
Computers are discrete devices; even if somehow we knew with certainty that the transitory and permanent shocks were, say, continuously lognormally distributed, in order to be represented on a computer those distributions would need to be approximated by a finite set of points. A large literature in numerical computation explores ways to construct such approximations; probably the easiest discretization to understand is the equiprobable approximation, in which the continuous distribution is represented by a set of $N$ outcomes that are equally likely to occur.
In the case of a single variable (say, the permanent shock $\psi$), and when the number of equiprobable points is, say, 5, the procedure is to construct a list: $\psi^{0}$ is the mean value of the continuous $\psi$ given that the draw of $\psi$ is in the bottom 20 percent of the distribution of the continuous $\psi$. $\psi^{1}$ is the mean value of $\psi$ given that the draw is between the 20th and 40th percentiles, and so on. Having constructed these, the approximation to the expectation of some expression $g(\psi)$ can be very quickly calculated by:
$$
\mathbb{E}_{t}[g(\psi)] \equiv \int_{0}^{\infty} g(\psi) dF_{\psi} \approx (1/N) \sum_{i=0}^{N-1} g(\psi^{i}).
$$
(For a graphical depiction of a particular instance of this, see [SolvingMicroDSOPs/#discreteApprox](http://www.econ2.jhu.edu/people/ccarroll/SolvingMicroDSOPs/#discreteApprox).)
## The New Parameters
In addition to the parameters required for the perfect foresight model (like the time preference factor $\beta$), under the assumptions above, we need to choose values for the following extra parameters that describe the income shock distribution and the artificial borrowing constraint.
| Param | Description | Code | Value |
| :---: | --- | --- | :---: |
| $\underline{a}$ | Artificial borrowing constraint | $\texttt{BoroCnstArt}$ | 0.0 |
| $\sigma_\psi$ | Underlying stdev of permanent income shocks | $\texttt{PermShkStd}$ | 0.1 |
| $\sigma_\theta^{e}$ | Underlying stdev of transitory income shocks | $\texttt{TranShkStd}$ | 0.1 |
| $N_\psi$ | Number of discrete permanent income shocks | $\texttt{PermShkCount}$ | 7 |
| $N_\theta$ | Number of discrete transitory income shocks | $\texttt{TranShkCount}$ | 7 |
| $\wp$ | Unemployment probability | $\texttt{UnempPrb}$ | 0.05 |
| $\underline{\theta}$ | Transitory shock when unemployed | $\texttt{IncUnemp}$ | 0.3 |
## Representation in HARK
HARK agents with this kind of problem are instances of the class $\texttt{IndShockConsumerType}$, which is constructed by "inheriting" the properties of the $\texttt{PerfForesightConsumerType}$ and then adding only the _new_ information required:
```python
# This cell defines a parameter dictionary for making an instance of IndShockConsumerType.
IndShockDictionary = {
'PermShkStd': [0.1], # ... by specifying the new parameters for constructing the income process.
'PermShkCount': 7,
'TranShkStd': [0.1],
'TranShkCount': 7,
'UnempPrb': 0.05,
'IncUnemp': 0.3, # ... and income for unemployed people (30 percent of "permanent" income)
'BoroCnstArt': 0.0, # ... and specifying the location of the borrowing constraint (0 means no borrowing is allowed)
'cycles': 0 # signifies an infinite horizon solution (see below)
}
```
## Other Attributes are Inherited from PerfForesightConsumerType
You can see all the **attributes** of an object in Python by using the `dir()` command. From the output of that command below, you can see that many of the model variables are now attributes of this object, along with many other attributes that are outside the scope of this tutorial.
```python
from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
pfc = PerfForesightConsumerType()
dir(pfc)
```
['AgentCount',
'BoroCnstArt',
'CRRA',
'DiscFac',
'LivPrb',
'MaxKinks',
'PermGroFac',
'PermGroFacAgg',
'RNG',
'Rfree',
'T_age',
'T_cycle',
'__call__',
'__class__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__getattribute__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__le__',
'__lt__',
'__module__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'aNrmInitMean',
'aNrmInitStd',
'addToTimeInv',
'addToTimeVary',
'assignParameters',
'cFunc_terminal_',
'checkConditions',
'checkElementsOfTimeVaryAreLists',
'checkRestrictions',
'clearHistory',
'cycles',
'delFromTimeInv',
'delFromTimeVary',
'distance',
'getAvg',
'getControls',
'getMortality',
'getPostStates',
'getRfree',
'getShocks',
'getStates',
'initializeSim',
'makeShockHistory',
'pLvlInitMean',
'pLvlInitStd',
'postSolve',
'poststate_vars',
'poststate_vars_',
'preSolve',
'pseudo_terminal',
'quiet',
'readShocks',
'read_shocks',
'resetRNG',
'seed',
'shock_vars',
'shock_vars_',
'simBirth',
'simDeath',
'simOnePeriod',
'simulate',
'solution_terminal',
'solution_terminal_',
'solve',
'solveOnePeriod',
'timeFlip',
'timeFwd',
'timeReport',
'timeRev',
'time_flow',
'time_inv',
'time_inv_',
'time_vary',
'time_vary_',
'tolerance',
'track_vars',
'unpackcFunc',
'updateSolutionTerminal',
'vFunc_terminal_',
'verbose']
In python terminology, `IndShockConsumerType` is a **superclass** of `PerfForesightConsumerType`. This means that it builds on the functionality of its parent type (including, for example, the definition of the utility function). You can find the superclasses of a type in Python using the `__bases__` attribute:
```python
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType
IndShockConsumerType.__bases__
```
(HARK.ConsumptionSaving.ConsIndShockModel.PerfForesightConsumerType,)
```python
# So, let's create an instance of the IndShockConsumerType
IndShockExample = IndShockConsumerType(**IndShockDictionary)
```
As before, we need to import the relevant subclass of $\texttt{AgentType}$ into our workspace, then create an instance by passing the dictionary to the class as if the class were a function.
## The Discretized Probability Distribution
The scatterplot below shows how the discretized probability distribution is represented in HARK: The lognormal distribution is represented by a set of equiprobable point masses.
```python
# Plot values for equiprobable distribution of permanent shocks
plt.scatter(IndShockExample.PermShkDstn[0][1],
IndShockExample.PermShkDstn[0][0])
plt.xlabel("Value")
plt.ylabel("Probability Mass")
plt.show()
```
This distribution was created, using the parameters in the dictionary above, when the `IndShockConsumerType` object was initialized.
## Solution by Backwards Induction
HARK solves this problem using _backwards induction_: It will derive a solution for each period ($t$) by finding a mapping between specific values of market resources $\{m[0],m[1],...\}$ and the corresponding optimal consumption $\{c[0],c[1],...\}$. The function that "connects the dots" will be stored in a variable named `cFunc`.
Backwards induction requires a "terminal" (last; final) period to work backwards from. `IndShockExample` constructed above did not specify a terminal consumption function, and consequently it uses the default terminal function in which all resources are consumed: $c_{T} = m_{t}$.
```python
IndShockExample.solution_terminal
```
<HARK.ConsumptionSaving.ConsIndShockModel.ConsumerSolution at 0x7fc761001c50>
The consumption function `cFunc` is defined by _piecewise linear interpolation_.
It is defined by a series of $(m,c)$ points on a grid; the value of the function for any $m$ is the $c$ determined by the line connecting the nearest defined gridpoints.
You can see below that in the terminal period, $c = m$; the agent consumes all available resources.
```python
# Plot terminal consumption function
plt.plot(IndShockExample.solution_terminal.cFunc.x_list,
IndShockExample.solution_terminal.cFunc.y_list,
color='k')
plt.scatter(IndShockExample.solution_terminal.cFunc.x_list,
IndShockExample.solution_terminal.cFunc.y_list)
```
The solution also has a representation of a `value function`, the value `v(m)` as a function of available market resources. Because the agent consumes all their resources in the last period, the value function for the terminal solution looks just like the CRRA utility function: $v_{T}(m) = u(m)$.
```python
# Final consumption function c=m
m = np.linspace(0.1,1,100)
plt.plot(m,IndShockExample.solution_terminal.vFunc(m))
```
## Solving the problem
This solution is generated by invoking `solve()` which is a **method** that is an **attribute** of the `IndShockExample` object. **Methods** in Python are supposed to have **documentation** that tell you what they do. You can read the documentation for methods and other attributes in HARK with the built-in Python `help()` function:
```python
help(IndShockExample.solve)
```
Help on method solve in module HARK.core:
solve(verbose=False) method of HARK.ConsumptionSaving.ConsIndShockModel.IndShockConsumerType instance
Solve the model for this instance of an agent type by backward induction.
Loops through the sequence of one period problems, passing the solution
from period t+1 to the problem for period t.
Parameters
----------
verbose : boolean
If True, solution progress is printed to screen.
Returns
-------
none
### Finite or Infinite Horizon?
$\texttt{ConsIndShockType}$ can solve either finite-horizon (e.g., life-cycle) problems, or infinite-horizon problems (where the problem is the same in every period). Elsewhere you can find documentation about the finite horizon solution; here we are interested in the infinite-horizon solution which is obtained (by definition) when iterating one more period yields a solution that is essentially the same. In the dictionary above we signaled to HARK that we want the infinite horizon solution by setting the "cycles" paramter to zero:
```python
IndShockExample.cycles # Infinite horizon solution is computed when cycles = 0
```
0
```python
# Solve It
IndShockExample.solve(verbose=True) # Verbose prints progress as solution proceeds
```
Finished cycle #1 in 0.0009517669677734375 seconds, solution distance = 100.0
Finished cycle #2 in 0.0011010169982910156 seconds, solution distance = 10.088015890333441
Finished cycle #3 in 0.0010843276977539062 seconds, solution distance = 3.3534114736589693
Finished cycle #4 in 0.0009758472442626953 seconds, solution distance = 1.669952961389428
Finished cycle #5 in 0.0015230178833007812 seconds, solution distance = 0.9967360674688486
Finished cycle #6 in 0.0017900466918945312 seconds, solution distance = 0.6602619046109517
Finished cycle #7 in 0.002032041549682617 seconds, solution distance = 0.46809484231437537
Finished cycle #8 in 0.0014519691467285156 seconds, solution distance = 0.34807706501006663
Finished cycle #9 in 0.0010118484497070312 seconds, solution distance = 0.2681341538834978
Finished cycle #10 in 0.0008890628814697266 seconds, solution distance = 0.2122324816862755
Finished cycle #11 in 0.001032114028930664 seconds, solution distance = 0.17162798586899441
Finished cycle #12 in 0.0011997222900390625 seconds, solution distance = 0.14121714401876417
Finished cycle #13 in 0.0018072128295898438 seconds, solution distance = 0.11786112023934692
Finished cycle #14 in 0.0011539459228515625 seconds, solution distance = 0.09954374358267426
Finished cycle #15 in 0.0013070106506347656 seconds, solution distance = 0.08492077965589928
Finished cycle #16 in 0.0010509490966796875 seconds, solution distance = 0.07306820983636797
Finished cycle #17 in 0.0010731220245361328 seconds, solution distance = 0.06333371450893699
Finished cycle #18 in 0.0011019706726074219 seconds, solution distance = 0.055246317280595036
Finished cycle #19 in 0.0009510517120361328 seconds, solution distance = 0.04845886926538867
Finished cycle #20 in 0.0009570121765136719 seconds, solution distance = 0.042711109600137576
Finished cycle #21 in 0.001049041748046875 seconds, solution distance = 0.037804865822300915
Finished cycle #22 in 0.0010437965393066406 seconds, solution distance = 0.03358704056809714
Finished cycle #23 in 0.0010540485382080078 seconds, solution distance = 0.029937835775703636
Finished cycle #24 in 0.001168966293334961 seconds, solution distance = 0.02676242583398336
Finished cycle #25 in 0.0010809898376464844 seconds, solution distance = 0.02398495974448922
Finished cycle #26 in 0.0010461807250976562 seconds, solution distance = 0.021544181039296006
Finished cycle #27 in 0.0009508132934570312 seconds, solution distance = 0.019390181762535263
Finished cycle #28 in 0.0008988380432128906 seconds, solution distance = 0.01748196739049135
Finished cycle #29 in 0.0009710788726806641 seconds, solution distance = 0.015785611379662168
Finished cycle #30 in 0.0009200572967529297 seconds, solution distance = 0.014272839895088651
Finished cycle #31 in 0.0009160041809082031 seconds, solution distance = 0.012919936192925086
Finished cycle #32 in 0.0011010169982910156 seconds, solution distance = 0.011706884785620986
Finished cycle #33 in 0.0009951591491699219 seconds, solution distance = 0.010616703056517629
Finished cycle #34 in 0.0012810230255126953 seconds, solution distance = 0.009634898474986997
Finished cycle #35 in 0.001035928726196289 seconds, solution distance = 0.00874904442068214
Finished cycle #36 in 0.0009348392486572266 seconds, solution distance = 0.007948413988061898
Finished cycle #37 in 0.0010099411010742188 seconds, solution distance = 0.00722372447082309
Finished cycle #38 in 0.0009341239929199219 seconds, solution distance = 0.006566906564932307
Finished cycle #39 in 0.0010230541229248047 seconds, solution distance = 0.005970916077075117
Finished cycle #40 in 0.000985860824584961 seconds, solution distance = 0.005429579002497409
Finished cycle #41 in 0.0009751319885253906 seconds, solution distance = 0.004937463273915643
Finished cycle #42 in 0.0008919239044189453 seconds, solution distance = 0.004489772052598262
Finished cycle #43 in 0.0009341239929199219 seconds, solution distance = 0.004082254546442954
Finished cycle #44 in 0.0008981227874755859 seconds, solution distance = 0.003711131170160087
Finished cycle #45 in 0.0010197162628173828 seconds, solution distance = 0.003373030500466001
Finished cycle #46 in 0.000885009765625 seconds, solution distance = 0.0030649359736791837
Finished cycle #47 in 0.0010480880737304688 seconds, solution distance = 0.0027841406665807256
Finished cycle #48 in 0.0009770393371582031 seconds, solution distance = 0.0025282088157077
Finished cycle #49 in 0.0010709762573242188 seconds, solution distance = 0.0022949429754119954
Finished cycle #50 in 0.0009758472442626953 seconds, solution distance = 0.0020823559119378388
Finished cycle #51 in 0.000942230224609375 seconds, solution distance = 0.0018886464739757969
Finished cycle #52 in 0.0009889602661132812 seconds, solution distance = 0.0017121788176539532
Finished cycle #53 in 0.0010190010070800781 seconds, solution distance = 0.0015514644867238303
Finished cycle #54 in 0.000904083251953125 seconds, solution distance = 0.0014051468883913287
Finished cycle #55 in 0.0011577606201171875 seconds, solution distance = 0.0012719878080478253
Finished cycle #56 in 0.0011050701141357422 seconds, solution distance = 0.0011508556554602478
Finished cycle #57 in 0.0010859966278076172 seconds, solution distance = 0.001040715183035168
Finished cycle #58 in 0.0009870529174804688 seconds, solution distance = 0.0009406184572178233
Finished cycle #59 in 0.0009639263153076172 seconds, solution distance = 0.0008496968979514463
Finished cycle #60 in 0.0009260177612304688 seconds, solution distance = 0.0007671542298588463
Finished cycle #61 in 0.0009109973907470703 seconds, solution distance = 0.0006922602130656763
Finished cycle #62 in 0.0009732246398925781 seconds, solution distance = 0.00062434504219544
Finished cycle #63 in 0.0008819103240966797 seconds, solution distance = 0.0005627943195669616
Finished cycle #64 in 0.0008387565612792969 seconds, solution distance = 0.0005070445234869325
Finished cycle #65 in 0.0008761882781982422 seconds, solution distance = 0.00045657890516936916
Finished cycle #66 in 0.0009970664978027344 seconds, solution distance = 0.0004109237583840297
Finished cycle #67 in 0.0009307861328125 seconds, solution distance = 0.0003696450150627584
Finished cycle #68 in 0.0008831024169921875 seconds, solution distance = 0.0003323451276209255
Finished cycle #69 in 0.0009019374847412109 seconds, solution distance = 0.0002986602050065734
Finished cycle #70 in 0.0009551048278808594 seconds, solution distance = 0.0002682573749837047
Finished cycle #71 in 0.0009250640869140625 seconds, solution distance = 0.00024083234929284103
Finished cycle #72 in 0.0008687973022460938 seconds, solution distance = 0.00021610717220754694
Finished cycle #73 in 0.0009531974792480469 seconds, solution distance = 0.00019382813573720625
Finished cycle #74 in 0.0010759830474853516 seconds, solution distance = 0.0001737638473713332
Finished cycle #75 in 0.0009999275207519531 seconds, solution distance = 0.0001557034380539335
Finished cycle #76 in 0.0011141300201416016 seconds, solution distance = 0.00013945489982480908
Finished cycle #77 in 0.0009200572967529297 seconds, solution distance = 0.00012484354371977702
Finished cycle #78 in 0.0008566379547119141 seconds, solution distance = 0.00011171056964087711
Finished cycle #79 in 0.0009541511535644531 seconds, solution distance = 9.991174074253095e-05
Finished cycle #80 in 0.0009610652923583984 seconds, solution distance = 8.931615549911953e-05
Finished cycle #81 in 0.0010380744934082031 seconds, solution distance = 7.98051111954301e-05
Finished cycle #82 in 0.0009188652038574219 seconds, solution distance = 7.127105311699466e-05
Finished cycle #83 in 0.00102996826171875 seconds, solution distance = 6.361660386744461e-05
Finished cycle #84 in 0.0008890628814697266 seconds, solution distance = 5.675366786039859e-05
Finished cycle #85 in 0.0008931159973144531 seconds, solution distance = 5.060260605649347e-05
Finished cycle #86 in 0.0008687973022460938 seconds, solution distance = 4.5091476412739695e-05
Finished cycle #87 in 0.0009589195251464844 seconds, solution distance = 4.0155335675251536e-05
Finished cycle #88 in 0.0008571147918701172 seconds, solution distance = 3.573559836667073e-05
Finished cycle #89 in 0.0008890628814697266 seconds, solution distance = 3.1779449082502964e-05
Finished cycle #90 in 0.000865936279296875 seconds, solution distance = 2.8239304256771902e-05
Finished cycle #91 in 0.0009510517120361328 seconds, solution distance = 2.507231993220671e-05
Finished cycle #92 in 0.0009131431579589844 seconds, solution distance = 2.2239942116808464e-05
Finished cycle #93 in 0.0009338855743408203 seconds, solution distance = 1.970749654489623e-05
Finished cycle #94 in 0.0008361339569091797 seconds, solution distance = 1.7443814824158466e-05
Finished cycle #95 in 0.0009217262268066406 seconds, solution distance = 1.5420894161621845e-05
Finished cycle #96 in 0.0008060932159423828 seconds, solution distance = 1.361358793783296e-05
Finished cycle #97 in 0.0010938644409179688 seconds, solution distance = 1.1999324726730265e-05
Finished cycle #98 in 0.0008931159973144531 seconds, solution distance = 1.0557853297399333e-05
Finished cycle #99 in 0.0008471012115478516 seconds, solution distance = 9.271011537581586e-06
Finished cycle #100 in 0.0007710456848144531 seconds, solution distance = 8.122517190400913e-06
Finished cycle #101 in 0.0008628368377685547 seconds, solution distance = 7.097778525810838e-06
Finished cycle #102 in 0.0008351802825927734 seconds, solution distance = 6.183723240127392e-06
Finished cycle #103 in 0.00084686279296875 seconds, solution distance = 5.368643900993675e-06
Finished cycle #104 in 0.0008349418640136719 seconds, solution distance = 4.6420585193551744e-06
Finished cycle #105 in 0.0008552074432373047 seconds, solution distance = 3.994584801603196e-06
Finished cycle #106 in 0.0008208751678466797 seconds, solution distance = 3.4178268535356437e-06
Finished cycle #107 in 0.0009191036224365234 seconds, solution distance = 2.9042732037076746e-06
Finished cycle #108 in 0.0008389949798583984 seconds, solution distance = 2.4472050093038433e-06
Finished cycle #109 in 0.0009319782257080078 seconds, solution distance = 2.0406135354811283e-06
Finished cycle #110 in 0.0008518695831298828 seconds, solution distance = 1.6791260115667228e-06
Finished cycle #111 in 0.0009329319000244141 seconds, solution distance = 1.3579390065743269e-06
Finished cycle #112 in 0.001390218734741211 seconds, solution distance = 1.0727586690073565e-06
Finished cycle #113 in 0.0009188652038574219 seconds, solution distance = 8.197470720006095e-07
```python
# plotFuncs([list],min,max) takes a [list] of functions and plots their values over a range from min to max
plotFuncs([IndShockExample.solution[0].cFunc,IndShockExample.solution_terminal.cFunc],0.,10.)
```
## Changing Constructed Attributes
In the parameter dictionary above, we chose values for HARK to use when constructing its numerical representation of $F_t$, the joint distribution of permanent and transitory income shocks. When $\texttt{IndShockExample}$ was created, those parameters ($\texttt{TranShkStd}$, etc) were used by the **constructor** or **initialization** method of $\texttt{IndShockConsumerType}$ to construct an attribute called $\texttt{IncomeDstn}$.
Suppose you were interested in changing (say) the amount of permanent income risk. From the section above, you might think that you could simply change the attribute $\texttt{TranShkStd}$, solve the model again, and it would work.
That's _almost_ true-- there's one extra step. $\texttt{TranShkStd}$ is a primitive input, but it's not the thing you _actually_ want to change. Changing $\texttt{TranShkStd}$ doesn't actually update the income distribution... unless you tell it to (just like changing an agent's preferences does not change the consumption function that was stored for the old set of parameters -- until you invoke the $\texttt{solve}$ method again). In the cell below, we invoke the method $\texttt{updateIncomeProcess}$ so HARK knows to reconstruct the attribute $\texttt{IncomeDstn}$.
```python
OtherExample = deepcopy(IndShockExample) # Make a copy so we can compare consumption functions
OtherExample.PermShkStd = [0.2] # Double permanent income risk (note that it's a one element list)
OtherExample.updateIncomeProcess() # Call the method to reconstruct the representation of F_t
OtherExample.solve()
```
The given parameter values violate the Individual Growth Impatience Condition; the GIFInd is: 1.0220
In the cell below, use your blossoming HARK skills to plot the consumption function for $\texttt{IndShockExample}$ and $\texttt{OtherExample}$ on the same figure.
```python
# Use the remainder of this cell to plot the IndShockExample and OtherExample consumption functions against each other
```
## Buffer Stock Saving?
There are some combinations of parameter values under which problems of the kind specified above have "degenerate" solutions; for example, if consumers are so patient that they always prefer deferring consumption to the future, the limiting consumption rule can be $c(m)=0$.
The toolkit has built-in tests for a number of parametric conditions that can be shown to result in various characteristics in the optimal solution.
Perhaps the most interesting such condition is the ["Growth Impatience Condition"](http://econ.jhu.edu/people/ccarroll/Papers/BufferStockTheory/#GIC): If this condition is satisfied, the consumer's optimal behavior is to aim to achieve a "target" value of $m$, to serve as a precautionary buffer against income shocks.
The tests can be invoked using the `checkConditions()` method:
```python
IndShockExample.checkConditions(verbose=True)
```
The value of the Perfect Foresight Growth Impatience Factor for the supplied parameter values satisfies the Perfect Foresight Growth Impatience Condition. Therefore, in the absence of any risk, the ratio of individual wealth to permanent income would fall indefinitely.
The value of the Individual Growth Impatience Factor for the supplied parameter values satisfies the Individual Growth Impatience Condition. Therefore, a target level of the individual market resources ratio m exists (see http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#onetarget for more).
The value of the Aggregate Growth Impatience Factor for the supplied parameter values satisfies the Aggregate Growth Impatience Condition. Therefore, it is possible that a target level of the ratio of aggregate market resources to aggregate permanent income exists.
The Weak Return Impatience Factor value for the supplied parameter values satisfies the Weak Return Impatience Condition (see http://econ.jhu.edu/people/ccarroll/papers/BufferStockTheory/#WRIC for more).
The Finite Value of Autarky Factor (FVAV) for the supplied parameter values satisfies the Finite Value of Autarky Condition.
Since both WRIC and FVAC are satisfied, the problem has a nondegenerate solution
GIFPF = 0.984539
GIFInd = 0.993777
GIFAgg = 0.964848
Thorn = AIF = 0.994384
PermGroFacAdj = 1.000611
uInvEpShkuInv = 0.990704
FVAF = 0.932054
WRIF = 0.213705
DiscFacGIFIndMax = 0.972061
DiscFacGIFAggMax = 1.010600
| 27816ac87a447f411287f19460dbd3be22fa14cb | 78,193 | ipynb | Jupyter Notebook | notebooks/Gentle-Intro-To-HARK-Buffer-Stock-Model.ipynb | frankovici/DemARK | 177c09bd387160d06f979c417671b3de18746846 | [
"Apache-2.0"
] | null | null | null | notebooks/Gentle-Intro-To-HARK-Buffer-Stock-Model.ipynb | frankovici/DemARK | 177c09bd387160d06f979c417671b3de18746846 | [
"Apache-2.0"
] | null | null | null | notebooks/Gentle-Intro-To-HARK-Buffer-Stock-Model.ipynb | frankovici/DemARK | 177c09bd387160d06f979c417671b3de18746846 | [
"Apache-2.0"
] | null | null | null | 93.756595 | 11,488 | 0.811709 | true | 8,244 | Qwen/Qwen-72B | 1. YES
2. YES | 0.749087 | 0.709019 | 0.531117 | __label__eng_Latn | 0.971561 | 0.072293 |